Is AGI even possible?

Hi everyone, I’d love to open up a discussion around AGI.

Let’s assume AGI is defined as an artificial intelligence system capable of performing nearly any cognitive task as well as—or better than—a human.

Context:

Certain companies have publicly stated that their goal is to develop AGI. In response, many people online (and in the broader AI space) have dismissed these claims as marketing hype or buzzwords.

That framing bothers me a bit. People said similar things to the Wright brothers—that human flight was impossible—until it wasn’t. Humanity has consistently proven that technological progress, especially when driven by urgency and resources, can cross thresholds we once believed were unreachable.

Of course, I understand that AGI—by definition—would require breakthroughs across many domains: perception, reasoning, memory, learning, ethics, and even value alignment. So skepticism is fair. But dismissing the entire pursuit as hype seems premature.

Discussion question:

Why do so many in the AI community view talk of AGI as marketing hype, rather than taking these companies at their word and at least entertaining the possibility that it might be achieved—especially considering that many past technological milestones were also once deemed “impossible”?

2 Likes

Skepticism turns into belief in the presence of evidence.

1 Like

Of course there is a lot of discussion in this space, and you’d need to evaluate each individual statement to understand the full context of the point being made. But the recent dismissals of “marketing hype” that I’ve seen are aimed at companies claiming that AGI will happen essentially any minute now and will be based on some tweaks to the current type of LLMs that have been released. Some companies seem to claim that all they need is a bit more training data and a bit more compute and “Voila!” But they’ve been saying that for a while now, and some experts in the field make the point that LLMs as they currently stand simply aren’t capable of AGI. Even the latest models released have not solved the “confabulation” or “hallucination” problem.

One industry expert who has been talking about these issues for a long time, and who is skeptical of the potential of LLMs to produce AGI, is Gary Marcus. Have a look at his Substack and his 2019 book (with co-author Ernest Davis), Rebooting AI: Building Artificial Intelligence We Can Trust. My interpretation of his overall message is not that AGI is impossible, but that the technology we currently have will not be sufficient to achieve it.

But the State of the Art never holds still, of course. Have you taken a look at the AlphaGeometry work recently published by Google DeepMind? It incorporates LLMs for some functions, but they have also built significant additional technology for what they call “neuro-symbolic reasoning.”

4 Likes

Thank you, Paul! This insight is golden, and I’m eager to dig into the resources you’ve provided. I remember Andrew Ng saying in one course that what usually happens is that the benchmark for what AGI is and isn’t gets lowered, and then the challenge becomes agreeing on a universally accepted definition, which is a classic dilemma the field of AI continues to face (based on my limited knowledge). That’s the long way of saying, “thank you”.

5 Likes

The field is huge, of course, and there are lots of other experts who have discussed these issues. Stuart Russell is another prolific author on the subject: he has a couple of TED talks, a longer presentation he gave for the BBC (last year, I think), and multiple books. There are probably dozens of other TED talks on topics related to AI. I remember watching ones by Max Tegmark, Sam Harris (not a software expert, but a neuroscientist), Eliezer Yudkowsky, and more.

Our own Prof Ng has also spoken frequently on topics like this. For example, here’s the discussion he had with Yann LeCun a year or two ago, when the six-month AI pause was suggested because of worries about AGI.

3 Likes

I basically agree with Paul. The only thing I would add is that, generally, when people dismiss claims of AGI goals as hype, it’s in the context of the idea that adding more compute or throwing more parameters into an LLM will magically spawn AGI. There are roughly half a dozen approaches to AGI being worked on; I currently think an agentic approach is the most achievable. AGI is not hype. Even if you listen carefully to more critical voices like Yann LeCun, he is skeptical of specific approaches to AGI (LLMs), not of the concept or of the goal.

2 Likes

A more interesting question is this: if AGI is possible, how do we know humans are not merely AGIs living in a simulated world?

There are only two logical conclusions to this dilemma: either the human mind is unique, so AGI is not possible, or we are all AGI.

2 Likes

AGI has already happened. I feel dumb compared to some of an LLM’s abilities to dig up information. What it lacks in order to be accepted by humans as intelligent is intuition. Intuition is our ability to predict from learned experience, but we have emotions and other hardwired stuff that affect our attention. An LLM’s attention mechanism is limited to just words, but its ability to manipulate that space is far superior to a human typing buttons. So give agents another 3 months to mature.

1 Like

Can you please guide me on those half a dozen methods for achieving AGI that are currently being worked on? I’m interested! I’m familiar with agents—both as a concept and a technology—from a general awareness standpoint. What are the other approaches, if you don’t mind sharing? I’m mostly aware of the strategies tech companies are using to tackle the value alignment dilemma, but I’m not as familiar with the specific methods or research paths being pursued to actually achieve AGI.

1 Like

Yeesh… I love this question. I don’t have an answer, of course, but yeah, I think you’re right. What makes us special if we can replicate what makes us uniquely us?

This is definitely an interesting take. I’m with you on the “I’m not half as smart as an LLM” bandwagon; I know my limitations. Therein lies the problem, though: if we had hyper-intuitive agentic AI, would we consider that AGI? Who decides? Etc.

About the problem of AI & AGI hallucination:

Well… I’m not even sure the AGI “hallucination” problem is actually a problem, if we want to imitate humans.

…since, by this definition, we humans do it every day:
“AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.”

Maybe making AGI imitate humans is the problem?

IMHO, the thing we must teach AGI (and humans) is skepticism.
Especially when the data on the internet is full of misinformation.
And if we use customer interactions for AI training, we must remember that 50% of humans are of below-average intelligence. AI can’t be any better than the data it’s given.

Besides, we should remember that intelligence ≠ wisdom. Actually, the AI is just gathering more wisdom and, through that, imitates intelligence.
Like, I’m a Mensa member, but please don’t ask me difficult questions - those might require information which I don’t necessarily have.

  • Petri

I’m a Mensan as well. I would love to have some AI conversations, preferably over coffee. I was thinking about going to that thing in Chicago in June. Looks interesting. Are you going?

Geoffrey Hinton is a good place to get oriented, especially around LLMs. He is very persuasive that intelligence is what LLMs have encapsulated, in a sense, essentially spontaneously. That is why transformer scaling is an idea: a more complex intelligence might coalesce from a sufficiently large and complex model.

“The paradigm for intelligence was logical reasoning, and the idea of what an internal representation would look like was it would be some kind of symbolic structure. That has completely changed with these big neural nets.” -Hinton

But that’s his area; there are other paths being worked on.

Here’s a list of AGI approaches, compiled with some help from ChatGPT.
#1, #4, and #5 were ones we talked about in grad school, and I think they have the best chance.

1. Transformer Scaling Laws

Description:
Large language models (LLMs) like GPT-3, GPT-4, and Claude get smarter as you increase the amount of data, the number of parameters, and the compute used for training. This relationship is described by “scaling laws.”

Example Projects:
GPT-4, Claude, Gemini, LLaMA
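
To make the idea a bit more concrete, here is a tiny sketch of a Chinchilla-style scaling law in Python. The constants are roughly the fit reported in the Chinchilla paper (Hoffmann et al., 2022), but treat the whole thing as illustrative rather than a real predictor:

```python
def scaling_law_loss(n_params, n_tokens,
                     e=1.69, a=406.4, b=410.7, alpha=0.34, beta=0.28):
    """Chinchilla-style loss estimate: an irreducible term plus power-law
    penalties for too few parameters and too little training data.
    Constants are illustrative, taken roughly from the published fit."""
    return e + a / n_params**alpha + b / n_tokens**beta

# Predicted loss keeps falling as model size and data grow together,
# which is the intuition behind "just scale it up" claims.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {scaling_law_loss(n, d):.2f}")
```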



2. Retrieval-Augmented Generation (RAG)

Description:
Combines language models with databases or external knowledge to give better, more up-to-date answers.

Example Projects:
ChatGPT with Tools, Perplexity AI, Meta Toolformer
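
Here is a minimal sketch of the retrieve-then-generate flow. A real system would use embeddings and a vector database; the keyword-overlap retriever and the document snippets below are made up purely to show the shape of the pipeline, and the final LLM call is left out:

```python
import re

# Toy RAG pipeline: retrieve the most relevant document, then build a
# grounded prompt for the LLM to answer from.
documents = [
    "AlphaGeometry combines a language model with a symbolic deduction engine.",
    "Chinchilla showed that data and parameters should be scaled together.",
    "Soar and ACT-R are classic cognitive architectures.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=1):
    """Rank documents by crude word overlap with the query; a stand-in for
    a real embedding similarity search over a vector store."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The resulting prompt (retrieved context + question) is what gets sent
# to the language model.
print(build_prompt("What is AlphaGeometry?", documents))
```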



3. Reinforcement Learning + LLMs (RLHF)

Description:
Uses reinforcement learning to fine-tune LLMs based on human feedback, preferences, or goals.

Example Projects:
InstructGPT, DeepMind Gato, Voyager (Minecraft Agent)
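
At its core, RLHF trains a reward model on human preference pairs and then uses it to steer the policy. Here is a stripped-down sketch with a linear reward model and a Bradley-Terry-style pairwise loss; the features, data, and the best-of-n step at the end are all invented for illustration and are far simpler than real pipelines:

```python
import math

# Each "response" is reduced to a small hand-made feature vector; a human
# labeled which response in each pair they preferred.
preference_pairs = [
    # (features of preferred response, features of rejected response)
    ([0.9, 0.1], [0.2, 0.8]),
    ([0.8, 0.3], [0.3, 0.7]),
    ([0.7, 0.2], [0.1, 0.9]),
]

w = [0.0, 0.0]  # linear reward model: r(x) = w . x

def reward(x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Bradley-Terry objective: push sigmoid(r(preferred) - r(rejected)) toward 1.
for _ in range(200):
    for good, bad in preference_pairs:
        p = 1.0 / (1.0 + math.exp(-(reward(good) - reward(bad))))
        for i in range(len(w)):               # simple gradient ascent step
            w[i] += 0.1 * (1.0 - p) * (good[i] - bad[i])

# Use the learned reward to pick the best of n policy samples, a crude
# stand-in for the full RL fine-tuning stage.
candidates = [[0.6, 0.5], [0.95, 0.05], [0.2, 0.9]]
print("reward weights:", [round(x, 2) for x in w])
print("chosen response features:", max(candidates, key=reward))
```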



4. Cognitive Architectures

Description:
Tries to build AGI by modeling how humans think — including memory, planning, logic, and perception.

Example Projects:
ACT-R, Soar, OpenCog, AERA

Learn More:

  • Soar Cognitive Architecture
  • OpenCog
  • ACT-R (Carnegie Mellon)
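
The common flavor of these systems is a cycle over a working memory and a set of production rules. The toy match-and-fire loop below captures that spirit; it is not the actual Soar or ACT-R API, and the facts and rules are invented:

```python
# Toy production system: facts live in working memory, a rule fires when
# all of its conditions are present, and the cycle repeats to quiescence.
working_memory = {"goal:get-coffee", "location:desk"}

rules = [
    ({"goal:get-coffee", "location:desk"}, "action:walk-to-kitchen"),
    ({"action:walk-to-kitchen"}, "location:kitchen"),
    ({"goal:get-coffee", "location:kitchen"}, "action:brew-coffee"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)   # fire the rule
            print("fired:", conclusion)
            changed = True
```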

5. Neurosymbolic AI

Description:
Combines neural networks with symbolic logic for better reasoning and explainability.

Example Projects:
IBM Neuro-Symbolic AI, AlphaCode (DeepMind)
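
A minimal way to see the neural + symbolic split: a (pretend) neural component outputs probabilities, and a symbolic layer enforces a hard logical constraint when choosing the final answer. The probability tables below are hard-coded stand-ins for a real network’s output:

```python
from itertools import product

# Pretend a neural net read two handwritten digits and produced these
# (made-up) probability distributions.
p_digit_a = {3: 0.6, 8: 0.4}
p_digit_b = {5: 0.7, 6: 0.3}

# Symbolic constraint: we know the two digits must sum to 9. Pick the most
# probable pair that satisfies the rule, even though the net's top guesses
# (3 and 5) would violate it.
candidates = [
    (a, b, pa * pb)
    for (a, pa), (b, pb) in product(p_digit_a.items(), p_digit_b.items())
    if a + b == 9
]
best = max(candidates, key=lambda t: t[2])
print("consistent reading:", best[:2], "joint probability:", best[2])
```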



6. Embodied AI (Robotics + AI)

Description:
AGI systems that learn through physical interaction with the real or simulated world.

Example Projects:
OpenAI Robotics, DeepMind Robotics, Boston Dynamics AI
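
The learning-by-interaction loop can be sketched with tabular Q-learning in a tiny simulated corridor, standing in for a robot body and a real environment. Everything here (the environment, rewards, hyperparameters) is a toy:

```python
import random

# 1-D corridor: states 0..4, reward only for reaching the charger at state 4.
# The agent learns purely by acting in the (simulated) world and observing
# the consequences, which is the core loop behind embodied approaches.
n_states, actions = 5, [-1, +1]
q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for episode in range(300):
    s = 0
    while s != n_states - 1:
        if random.random() < 0.2:
            a = random.choice(actions)                     # explore
        else:
            a = max(actions, key=lambda act: q[(s, act)])  # exploit
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else -0.01
        best_next = max(q[(s_next, b)] for b in actions)
        q[(s, a)] += 0.1 * (r + 0.9 * best_next - q[(s, a)])  # Q-learning update
        s = s_next

policy = ["right" if q[(s, +1)] > q[(s, -1)] else "left" for s in range(n_states - 1)]
print("learned policy:", policy)
```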



7. Multimodal Learning

Description:
Combines text, images, video, audio, and other modalities in one unified model.

Example Projects:
GPT-4-Vision, Gemini, Meta Kosmos, Perceiver IO
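
Conceptually, these models map every modality into one shared embedding space so that, for example, an image and its caption land close together (the CLIP idea). The vectors below are made up and the encoders are assumed to have already run; the point is only the shared-space comparison:

```python
import math

# Pretend encoders already mapped inputs from different modalities into one
# shared 3-d embedding space.
embeddings = {
    ("image", "photo_of_a_cat.jpg"):      [0.90, 0.10, 0.20],
    ("text", "a cat sitting on a sofa"):  [0.85, 0.15, 0.25],
    ("text", "a stock market chart"):     [0.10, 0.90, 0.30],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The image is far closer to its matching caption than to unrelated text,
# which is what lets a single model reason across modalities.
image_vec = embeddings[("image", "photo_of_a_cat.jpg")]
for (modality, name), vec in embeddings.items():
    if modality == "text":
        print(f"{name!r}: similarity {cosine(image_vec, vec):.2f}")
```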



8. Simulation-Based AGI / World Models

Description:
Trains agents in virtual environments to model and predict the world, like a mental simulation.

Example Projects:
World Models, MuZero, Voyager
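
The essence is: learn a model of the environment’s dynamics from experience, then “imagine” rollouts inside that learned model to plan without touching the real world. Here is a toy version with a learned transition table; the environment and the planning rule are invented for illustration:

```python
import random
from collections import Counter, defaultdict

# Hidden "real" environment: action +1 usually moves right, but sometimes slips.
def real_step(state, action):
    effect = action if random.random() < 0.8 else -action
    return max(0, min(4, state + effect))

# 1) Learn a world model (transition counts) from random interaction.
model = defaultdict(Counter)
for _ in range(2000):
    s, a = random.randint(0, 4), random.choice([-1, 1])
    model[(s, a)][real_step(s, a)] += 1

# 2) Plan inside the learned model: simulate short rollouts for each action
#    and pick the one whose imagined outcome ends closest to the goal (4).
def imagined_step(s, a):
    return model[(s, a)].most_common(1)[0][0]   # most likely next state

def imagined_outcome(s, a, horizon=3):
    for _ in range(horizon):
        s = imagined_step(s, a)
    return s

best_action = max([-1, 1], key=lambda a: imagined_outcome(2, a))
print("planned first action from state 2:", best_action)
```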



9. Evolutionary / Emergent Approaches

Description:
Uses population-based methods or open-ended environments to evolve intelligent agents.

Example Projects:
POET (Uber AI), Open-ended Learning, NEAT
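
Population-based search in miniature: evolve the two weights of a trivial model toward a target behavior through mutation and selection. This is a far cry from POET or NEAT (no speciation, no growing topologies, no open-ended environments), but it shows the same basic loop:

```python
import random

# Target behavior the population should evolve toward: y = 2x + 1.
def target(x):
    return 2.0 * x + 1.0

def fitness(genome):
    w, b = genome
    return -sum((w * x + b - target(x)) ** 2 for x in range(-5, 6))  # higher is better

def mutate(genome, sigma=0.3):
    return [g + random.gauss(0, sigma) for g in genome]

# Evolutionary loop: keep the fittest half, refill with mutated survivors.
population = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print("evolved genome:", [round(g, 2) for g in best], "(target was [2.0, 1.0])")
```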



Where to Follow AGI Research

  • AI Alignment Forum: technical research on AGI alignment (alignmentforum.org)
  • LessWrong: philosophy and safety of AGI (lesswrong.com)
  • OpenAI Research: state-of-the-art papers and tools (openai.com/research)
  • DeepMind Blog: AGI papers and experiments (deepmind.com)
  • ArXiv AI Papers: latest research preprints (arxiv.org/list/cs.AI/recent)
  • Lex Fridman Podcast: interviews with AI leaders (lexfridman.com/ai)
  • YouTube (Karpathy, Yannic Kilcher): practical walkthroughs and commentary (search YouTube for their channels)
1 Like

Traveling to the USA, even from visa-free countries like :finland:, has become so uncertain that maybe in 4 years…

Yes, of course. Well, thanks for your input on the forum.