The difference here is that you’re never going to reach New Zealand that way, but incremental improvements in AI will eventually get you to AGI*
*Unless intelligence is substrate-dependent and cannot be replicated in silico, or we destroy ourselves before we get there.
It’s very easy with an incremental-improvement tactic to get stuck in a local maximum. You’ve then hit a dead end: every available option leads to a degradation and thus isn’t viable. It isn’t a sure thing that incremental improvements lead to the desired outcome.
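To make the local-maximum point concrete, here is a minimal hill-climbing sketch (the objective function, numbers, and names are made up purely for illustration, not a claim about how any real AI system is trained): greedy incremental improvement climbs the nearer, smaller peak and then stops, because every available single step makes things worse, even though a much higher peak exists elsewhere.

```python
# Toy objective with two peaks: a local maximum of 1 at x = -1
# and the global maximum of 3 at x = 2.
def objective(x):
    return max(1.0 - (x + 1.0) ** 2, 3.0 - (x - 2.0) ** 2)

x = -1.5    # arbitrary starting point near the smaller peak
step = 0.1  # size of each incremental improvement

while True:
    best = max((x - step, x + step), key=objective)
    if objective(best) <= objective(x):
        break  # no neighbouring move improves anything: stuck
    x = best

# Ends at roughly x = -1.0 with value 1.0, never finding the
# higher peak at x = 2 with value 3.0.
print(f"stuck at x = {x:.2f}, value = {objective(x):.2f}")
```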
I simply cannot imagine a situation where we reach a local maximum and get stuck in it for the rest of human history. There’s always someone else trying a new approach. We will not stop trying to improve our technology. Even simply knowing what doesn’t work is a step in the right direction.
We already know that General Intelligence is possible. The question that remains is whether it can be replicated artificially.
I can imagine it really easily for the foreseeable future: all that would need to happen is for the big corporations and well-funded researchers to stick to optimizing LLMs, and for that to be a dead end.
Yeah, that’s not the rest of human history (unless the rest of it isn’t very much), but it’s enough to make concerns about AGI into someone else’s problem.
(Edit, clarified)
Like I said, I’ve made no claims about the timeline. All I’ve said is that incremental improvements will lead to us getting there eventually.
In this scenario, reaching the goal would require an entirely different base technology, and incremental improvements to what we have now would not eventually lead to AGI.
Kinda like incremental improvements to cars or even trains won’t eventually get us to Mars.
Firstly, I’ve been talking about improvements in AI technology broadly, not any specific subfield. Secondly, you can’t know that. While I doubt LLMs will directly lead to AGI, I wouldn’t claim this with absolute certainty - there’s always a chance they do, or at the very least, that they help us discover what the next step should be.
It’s true that I can’t know for sure that they won’t lead to AGI (or, like you say, give clues); however, it’s definitely a scenario I can imagine, and that’s what I was responding to: the idea that incremental improvements must lead to a given goal. I don’t think that’s the case. Here in particular, I think it’s not only possible that it won’t, it’s even somewhat likely.
This doesn’t just apply to AGI; the same could be said about any technology. If it can be created and there’s value in creating it, then it’ll just be a matter of time until someone invents it, unless we go extinct before that.
Just like incremental improvements in the bicycle will eventually allow for hypersonic pedaling.
By saying this, aren’t you assuming that human civilization will last long enough to get there?
Look at the timeline of other species on this planet. Vast numbers of them are long extinct. They never evolved intelligence to our level. Only we did. Yet we know our intelligence is quite limited.
What took biology billions of years, we’re attempting to do in a few generations (the AI project began in the 1950s). Meanwhile, our consumption of non-renewable energy resources has hit exponential takeoff. Our political systems are straining and stretching to the breaking point.
And of course progress towards AI has not been steady over the course of the project. There was an initial burst of success in the ’50s followed by a long AI winter when researchers got stuck in a local maximum. It’s not at all clear to me that we haven’t entered a new local maximum with LLMs.
Do we even have a few more generations left to work on this?
I’m talking about AI development broadly, not just LLMs.
I also listed human extinction as one of the two possible scenarios in which we never reach AGI, the other being that there’s something unique about biological brains that cannot be replicated artificially.
We could witness a collapse of our high-tech civilization that effectively ends AI research without necessarily leading to extinction. Think of a Mad Max style post-apocalyptic future supercharged by global warming. People still survive, but the population has crashed and there’s a lot of fighting for survival and scavenging among the ruins of civilization.
There’s gotta be countless other variations on this theme. Global dystopian techno-feudalism perhaps?
Sure, but that’s still just a speed bump. In a few hundred or thousand years civilization would rebound and we’d continue from where we left off.
I don’t think there’s any guarantee that civilization would rebound. Fossil fuels were a one-shot deal in the geological history of the planet. For all of our efforts to build a sustainable future with renewable energy, fossil fuels remain critical for a lot of non-energy uses: food production (fertilizers), plastics, steel, and even cement for construction.
Another major issue is critical minerals for building renewable energy infrastructure. These minerals are being mined at an incredible rate, processed and turned into technology (think circuit boards full of components), aging out, and ending up as e-waste. Unfortunately, our e-waste recycling infrastructure is a total nightmare, involving shipping this stuff across the ocean to third-world countries where it gets picked over and scavenged for valuables, with the rest turned into toxic landfill.
That whole technology lifecycle creates huge amounts of toxic pollution and consumes huge amounts of fossil fuels (in particular for the mining, processing, and shipping). So in fact, without fossil fuels we don’t even know how to build any of this technology, let alone renewable energy infrastructure.
That assumes that whatever we have now is a precursor to AGI. There’s no evidence of that.
What do you mean there’s no evidence? This seems like a difference in personal definitions of what AGI is, where you can move the goalposts as much as you want: “it’s not really AGI until it can ___”; “OK, just because it can do that doesn’t mean it’s AGI, AGI needs to be able to do _____”.
No, it doesn’t assume that at all. This statement would’ve been true even before electricity was invented and AI was just an idea.
AI in general yes. LLMs in particular, I very much doubt it.
Yeah not with LLMs though.
You can’t know that.
It is a common misconception that incremental improvements must equate to eventually achieving the goal. It is perfectly possible that progress could be asymptotic, and that we never reach AGI even with constant “advancements”.
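To picture what asymptotic progress would look like, here is a toy sketch (the capability scale and numbers are entirely made up for illustration): if each “advancement” only ever closes half of the remaining gap, every step is a genuine improvement, yet the goal is never actually reached.

```python
# Toy illustration of asymptotic progress: each step closes half of the
# remaining gap, so capability keeps improving but never reaches the target.
target = 100.0      # hypothetical "AGI-level" capability on a made-up scale
capability = 10.0   # hypothetical starting point

for step in range(1, 31):
    capability += (target - capability) / 2  # a constant stream of "advancements"
    if step % 10 == 0:
        gap = target - capability
        print(f"step {step:2d}: capability = {capability:.10f} (gap {gap:.2e})")

# The gap shrinks every step (~8.8e-02, then ~8.6e-05, then ~8.4e-08)
# but never hits zero: constant progress, goal never reached.
```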
Incremental improvements by definition mean that you’re moving towards something. It might take a long time, but my comment made no claims about the timescale. There are only two plausible scenarios I can think of in which we don’t reach AGI, and they’re both mentioned in my comment.
That relies on the increments being the same. It’s much easier to accelerate from 0 to 60 mph than it is from 670,616,569 mph to c.
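For what it’s worth, the physics behind that comparison is just standard special relativity (nothing specific to the AI analogy): the kinetic energy needed to bring a mass m to speed v is

\[
E_k = (\gamma - 1)\,m c^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
\]

and since \(\gamma \to \infty\) as \(v \to c\), each further mph near light speed costs unboundedly more energy than the first 60 did.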