Mega corporations should not be allowed to use nuclear power plants purely for themselves.
Also, if you need that much power to do something that a human brain does with under 100 watts, I really think you’re doing it wrong.
If you’re so smart why don’t you come up with a way to do it under 100 watts???
Also, this is training them, not using them. Using an AI consumes significantly less power than the process to train it, sort of like how humans take more effort to learn something than to put it into practice.
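A rough way to see that training-vs-inference gap is with the common scaling-law approximations (~6·N·D FLOPs to train a model with N parameters on D tokens, ~2·N FLOPs per generated token at inference). The model size and token counts below are illustrative assumptions, not figures for any specific system:

```python
# Back-of-envelope: training vs. inference compute for an LLM.
# Approximations: training ~ 6 * params * tokens FLOPs,
# inference ~ 2 * params FLOPs per generated token.
N = 70e9    # parameters (illustrative assumption)
D = 1.4e12  # training tokens (illustrative assumption)

train_flops = 6 * N * D     # one full training run
reply_flops = 2 * N * 1000  # one 1000-token reply

ratio = train_flops / reply_flops
print(f"training run: {train_flops:.1e} FLOPs")
print(f"one reply:    {reply_flops:.1e} FLOPs")
print(f"one run = {ratio:.1e} replies' worth of compute")
```

Under these assumptions a single training run costs as much compute as billions of replies, which is why the one-off training bill dominates.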
Yeap. They are trying to train the AI quickly, not over the course of 18 years.
Also, it’s still early for these AI/LLM tools. The first few iterations of things are generally not very efficient. After you can prove it works, THEN you can make improvements to it, make it more efficient, etc.
However, I think that feeding in more power instead of optimizing is the wrong approach. I feel Microsoft could get further ahead if it could find ways to train models more efficiently 🤷
People tend to forget the millions of years of horribly inefficient evolution it took to develop the human brain.
Don’t think the point being made is “be smarter” but rather to stay within the margins of available power.
I think his point is that the person he responded to is proposing well-meaning, feelings-based policies without any real knowledge of the negative impacts his policy would have.
I feel like I’m in that butterfly meme, going “is this what reasonable conversation is like?”
<3
Organic technology is hard. If you can figure out how to grow a compute system you will take human technology hundreds of years into the future. Silicon tech is the stone age of compute.
The brain has a slow clock rate to keep within its power limitations, but it is a parallel computational beast compared to current models.
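To put rough numbers on that parallelism claim (every figure below is an order-of-magnitude neuroscience ballpark, not a measurement): ~86 billion neurons, on the order of 10⁴ synapses each, effective firing rates in the tens of Hz, all on a ~20 W budget.

```python
# Ballpark estimate of the brain's parallel throughput.
# All figures are rough order-of-magnitude assumptions.
neurons = 8.6e10           # ~86 billion neurons
synapses_per_neuron = 1e4  # order of magnitude
avg_firing_hz = 10         # effective average rate, a few Hz to tens of Hz
power_watts = 20           # typical estimate for the brain's power draw

synaptic_ops_per_sec = neurons * synapses_per_neuron * avg_firing_hz
ops_per_watt = synaptic_ops_per_sec / power_watts

print(f"~{synaptic_ops_per_sec:.0e} synaptic events/s")
print(f"~{ops_per_watt:.0e} events/s per watt")
```

Even with a "clock rate" a million times slower than silicon, the sheer fan-out puts raw event throughput in the ~10¹⁶/s range on a lightbulb's worth of power.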
It takes around ten years for new hardware to really take shape in the current age. AI hasn’t really established what direction it is going in yet. The open source offline model is the likely winner, meaning the hardware design and scaling factors are still unknown. We probably won’t see a good solution for years. We are patching video hardware together as a stopgap until AI-specific hardware is readily available.
I bet it’d be a whole lot easier to grow an organic computer if you didn’t have to worry about pesky things like people thinking you grew genetically engineered slaves.
The whole language model scene started with “we accidentally found something that kinda works” and is now in full “somebody please accidentally find a way to make it use less power” mode.
Why should they not be allowed? Nuclear power plants are great options and will mean less demand on worse energy providing sources
Because safety and profits aren’t pulling in the same direction. They would cut corners to reduce costs, which is how you end up with a nuclear accident. And then it would be on the taxpayer to foot the bill.
SMRs are pretty safe. That’s not the issue. It’s that they’re thinking about using a whole fucking nuclear reactor to train AI to sell you shit.
Microsoft is big enough that government would force them to pay up. There is just too much public pressure for that kind of disaster to get waved away.
Also, there are nuclear options that are far safer than water-cooled reactors. WCRs are literally the worst possible design for a nuclear reactor, and we were stupid enough to choose them over dry-material reactors in the 60s.
Lmao
Just like they made the banks pay up? Like how they make oil companies pay up?
Right now it’s commonplace for oil rigs and nuclear plants to be decommissioned at the taxpayer’s expense, sometimes even entirely funded by them, rather than by the company.
Show me at least 1 person that is capable of doing everything that something like ChatGPT can do.
I would take longer, but I bet I could do it. All ChatGPT does is basically regurgitate StackOverflow answers at you, and I’ve been doing that for years in my professional life.
All it does, eh? It doesn’t do anything else? I don’t think you’ve ever used ChatGPT. It has more knowledge and capabilities than any single human. Yes, a single human can have more expertise in one or a bunch of topics, but no human brain can have as much general knowledge as ChatGPT. And unlike us, it can improve; there’s no limit.
Not a terrible argument. How do we limit compute vs. output?