From a recent debate between Gary Marcus and Grady Booch on AGI timelines:
Marcus:
I get that AGI is hard, and that we aren’t there yet. I think we are wasting funding and bright young minds on an approach that probably isn’t on the right path… But I am cautiously optimistic that we’ll do better in the next 75 [years], that once the hype cools off, people will finally dive deeper into neurosymbolic AI, and start to take some important steps. Our data problems are solved, our compute problems are mostly solved; it’s now mostly a matter of software, and of rethinking how we build AI. Why be so sure we can’t do that in the next 75 years?
Booch:
You posit that we will see AGI within a few decades; I think it is more like a few generations… With every step we move forward, we discover things we did not know we needed to know. It took evolution about 300 million years to move from the first organic neurons to where we are today, and I don’t think we can compress the remaining software problems associated with AGI into the next few decades.
Marcus:
In my darkest moments, I actually agree with you. For one thing, most of the money right now is going to the wrong place: it’s mostly going to large language models, and for you, like for me, that just seems like an approximation to intelligence, not the real thing… But I see some signs that are promising. The neurosymbolic AI community is growing fast; conferences that used to draw dozens now draw thousands… I take that as a hopeful sign that the scaling-über-alles narrative is losing force, and that more and more people are open to new things.
[…]
The rubber-meets-the-road question, in the end, is: how many key discoveries do we still need to make, and how long will it take to make them?
Booch:
Intelligence is, for me, just the first phase in a spectrum that collectively we might speak of as synthetic sentience. Intelligence, I think, encompasses reasoning and learning. Indeed, I think in the next few decades, we will see astonishing progress in how we can build software-intensive systems that attend to inductive, deductive, and abductive reasoning.
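To make the three reasoning modes Booch names concrete, here is a minimal Python sketch. It is not from the debate; the rule base and function names are invented for illustration. Deduction applies a known rule forward, induction generalizes a rule from observations, and abduction works backward from an effect to a plausible cause.

```python
from typing import Optional

# One toy rule base; everything here is a hypothetical example,
# not a claim about how such a system should actually be built.
RULES = {"rain": "wet_grass"}

def deduce(cause: str) -> Optional[str]:
    """Deduction: apply a known rule forward (rain, therefore wet grass)."""
    return RULES.get(cause)

def induce(observations: list) -> dict:
    """Induction: generalize a rule from observed (cause, effect) pairs."""
    learned = {}
    for cause, effect in observations:
        learned[cause] = effect  # naive generalization: one example suffices
    return learned

def abduce(effect: str) -> list:
    """Abduction: infer plausible causes that would explain an effect."""
    return [c for c, e in RULES.items() if e == effect]

print(deduce("rain"))                        # wet_grass
print(induce([("sprinkler", "wet_grass")]))  # {'sprinkler': 'wet_grass'}
print(abduce("wet_grass"))                   # ['rain']
```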
[…]
Consciousness and self-consciousness are the next phases in my spectrum. I suspect we’ll see some breakthroughs in ways to represent long-term and short-term memory, in our ability to represent theories of the world, theories of others, and theories of the self.
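As a purely illustrative aside, the representational split Booch describes might be sketched as a data structure. Every field name below is hypothetical, chosen only to mirror the vocabulary of the quote, and the bounded buffer size is arbitrary.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class AgentMemory:
    """Hypothetical sketch: a bounded short-term buffer plus an open-ended
    long-term store, with slots for the 'theories' Booch mentions."""
    short_term: deque = field(default_factory=lambda: deque(maxlen=7))
    long_term: dict = field(default_factory=dict)
    theory_of_world: dict = field(default_factory=dict)   # beliefs about the environment
    theory_of_others: dict = field(default_factory=dict)  # beliefs about other agents
    theory_of_self: dict = field(default_factory=dict)    # beliefs about one's own state

    def observe(self, event: str) -> None:
        self.short_term.append(event)  # recent events, bounded in size

    def consolidate(self) -> None:
        # Crude stand-in for consolidation: move short-term events into
        # the long-term store as frequency counts, then clear the buffer.
        for event in self.short_term:
            self.long_term[event] = self.long_term.get(event, 0) + 1
        self.short_term.clear()
```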
[…]
Sentience and then sapience fill out this spectrum. The world of AI has not made a lot of progress here in the past several years, nor do I see much attention being paid to it… Work needs to be done in the areas of planning, decision making, goals and agency, and action selection. We also need to make considerable progress on metacognition and on mechanisms for subjective experience.
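Again as an editorial illustration rather than anything proposed in the debate: the capabilities Booch lists here are often sketched as a deliberation loop. Everything below is a hypothetical stub, assuming only that planning, action selection, and metacognition can each be represented as a function.

```python
# Minimal deliberation loop; stand-in implementations, not real algorithms.

def plan(goal: str, state: dict) -> list:
    """Planning: produce a candidate sequence of actions for a goal."""
    return [f"step toward {goal}"]  # stand-in for a real planner

def select_action(plans: dict) -> str:
    """Action selection: pick the plan with the fewest steps (a toy policy)."""
    best_goal = min(plans, key=lambda g: len(plans[g]))
    return plans[best_goal][0]

def metacognition(history: list) -> bool:
    """Metacognition: a stub that asks 'is my strategy working?'"""
    return len(set(history)) > 1  # toy check: are we varying behavior at all?

state: dict = {}
goals = ["explore", "recharge"]
history: list = []
for _ in range(3):
    plans = {g: plan(g, state) for g in goals}  # deliberate over each goal
    action = select_action(plans)               # commit to a single action
    history.append(action)
    if not metacognition(history):              # reflect on own behavior
        goals.reverse()                         # toy strategy change
```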
[…]
These things, collectively, define what I’d call a synthetic mind. In the next decade, we will likely make interesting progress in all those parts I mentioned. But we still don’t know how to architect these parts into a whole… This is not a problem of scale; this is not a problem of hardware. This is a problem of architecture.
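To see why Booch frames this as an architecture problem, consider how easy the naive composition is to write and how little it settles. The sketch below is hypothetical; the module names and the fixed linear pipeline are invented to show what is missing, not to propose a design.

```python
# A deliberately naive sketch of the composition problem: wiring
# independently built modules into one loop is easy to type and
# hard to get right.

class Module:
    def step(self, signal: dict) -> dict:
        return signal  # placeholder: each module would transform state

class SyntheticMindSketch:
    """Hypothetical: a pipeline of modules standing in for perception,
    memory, reasoning, and action. The open question is not this wiring
    but what the interfaces and control flow between the parts must be."""
    def __init__(self) -> None:
        self.modules = {name: Module() for name in
                        ("perception", "memory", "reasoning", "action")}

    def tick(self, signal: dict) -> dict:
        # A fixed linear pass; a real architecture would need feedback,
        # arbitration, and shared representations between these parts.
        for module in self.modules.values():
            signal = module.step(signal)
        return signal

mind = SyntheticMindSketch()
print(mind.tick({"input": "hello"}))  # {'input': 'hello'}
```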