Geoffrey Hinton is a good place to get oriented, especially around LLMs. He makes a persuasive case that LLMs have, in a sense, encapsulated intelligence essentially spontaneously. That is why transformer scaling is taken seriously as a path: a more complex intelligence might coalesce from a sufficiently large and complex model.
“The paradigm for intelligence was logical reasoning, and the idea of what an internal representation would look like was it would be some kind of symbolic structure. That has completely changed with these big neural nets.” -Hinton
But that’s his area; there are other paths being worked on.
Here’s a list of AGI approaches, compiled with some help from ChatGPT.
#1, #4, and #5 are the ones we talked about in grad school, and I think they have the best chance.
1. Transformer Scaling Laws
Description:
Large language models (LLMs) like GPT-3, GPT-4, and Claude get predictably better as you increase data, parameter count, and compute. These empirical relationships are called “scaling laws.”
Example Projects:
GPT-4, Claude, Gemini, LLaMA
Learn More:
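The scaling-law idea can be made concrete with the Chinchilla-style formula L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The coefficients below are roughly the fits reported by Hoffmann et al. (2022); treat them as illustrative, not exact:

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss under a Chinchilla-style scaling law.

    Coefficients are approximately the Hoffmann et al. (2022) fits,
    used here only to illustrate the shape of the curve.
    """
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit constants
    alpha, beta = 0.34, 0.28       # parameter and data exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Bigger model trained on more data -> lower predicted loss.
small = chinchilla_loss(1e9, 2e10)     # ~1B params, 20B tokens
large = chinchilla_loss(7e10, 1.4e12)  # ~70B params, 1.4T tokens
```

The key point for AGI arguments is that the curve keeps falling smoothly as N and D grow, which is what motivates the bet on scale.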
2. Retrieval-Augmented Generation (RAG)
Description:
Combines language models with databases or external knowledge to give better, more up-to-date answers.
Example Projects:
ChatGPT with Tools, Perplexity AI, Meta Toolformer
Learn More:
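The core RAG loop is small: embed the query, find the most similar document, and prepend it to the prompt. A minimal sketch, using toy bag-of-words vectors in place of a real embedding model (the documents and question are made up for illustration):

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the stored document most similar to the query."""
    q = tokens(query)
    return max(docs, key=lambda d: cosine(q, tokens(d)))

docs = [
    "The Eiffel Tower is in Paris and is 330 metres tall.",
    "Python is a programming language created by Guido van Rossum.",
]
question = "How tall is the Eiffel Tower?"
context = retrieve(question, docs)
# The retrieved passage is spliced into the prompt before calling the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Production systems swap the bag-of-words step for dense embeddings and a vector database, but the retrieve-then-generate shape is the same.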
3. Reinforcement Learning + LLMs (RLHF)
Description:
Uses reinforcement learning to fine-tune LLMs based on human feedback, preferences, or goals.
Example Projects:
InstructGPT, DeepMind Gato, Voyager (Minecraft Agent)
Learn More:
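The reward-model stage of RLHF boils down to a Bradley-Terry preference loss: given a human-preferred response and a rejected one, push the reward of the preferred one higher. A minimal numeric sketch of that loss (no neural network, just the math):

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry negative log-likelihood used to train RLHF reward
    models: small when the chosen response scores above the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss falls as the reward model ranks the preferred answer higher.
print(preference_loss(2.0, 0.0))  # correct ranking with margin -> small loss
print(preference_loss(0.0, 2.0))  # wrong ranking -> large loss
```

Gradient descent on this loss over many human comparisons is what turns raw preferences into a scalar reward signal the RL step can optimize.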
4. Cognitive Architectures
Description:
Tries to build AGI by modeling how humans think — including memory, planning, logic, and perception.
Example Projects:
ACT-R, Soar, OpenCog, AERA
Learn More:
- Soar Cognitive Architecture
- OpenCog
- ACT-R (Carnegie Mellon)
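The common mechanism in Soar- and ACT-R-style architectures is a production system: if-then rules that fire against working memory until nothing new can be added (quiescence). A minimal sketch of that cycle, with made-up toy rules:

```python
# Minimal production system in the spirit of Soar/ACT-R: rules fire on
# working memory until no rule adds anything new (quiescence).
def run(rules: list[tuple[set, str]], memory: set) -> set:
    memory = set(memory)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            # A rule fires when all its conditions are in working memory.
            if condition <= memory and action not in memory:
                memory.add(action)
                changed = True
    return memory

rules = [
    ({"hungry", "has_food"}, "eat"),  # toy rules for illustration
    ({"eat"}, "satiated"),
]
print(run(rules, {"hungry", "has_food"}))
```

Real architectures add conflict resolution, subgoaling, and learned memory on top, but this match-fire-repeat loop is the engine.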
5. Neurosymbolic AI
Description:
Combines neural networks with symbolic logic for better reasoning and explainability.
Example Projects:
IBM Neuro-Symbolic AI, AlphaCode (DeepMind)
Learn More:
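One common neurosymbolic pattern is: a learned model proposes and scores candidates, while symbolic rules enforce hard constraints the network cannot be trusted to respect. A toy sketch of that division of labor (the scorer and constraint here are stand-ins, not a real model):

```python
# Neurosymbolic sketch: a (stand-in) neural scorer ranks candidates,
# and a symbolic constraint prunes any that violate hard logic.
def solve(candidates, score, constraint):
    valid = [c for c in candidates if constraint(c)]  # symbolic filter
    return max(valid, key=score)                      # learned ranking

# Toy task: the "network" prefers numbers near 7, but logic demands even.
candidates = [3, 4, 7, 8]
score = lambda x: -abs(x - 7)       # stand-in for a learned scorer
constraint = lambda x: x % 2 == 0   # hard symbolic rule
print(solve(candidates, score, constraint))  # 8: best-scoring even number
```

The payoff is explainability: you can state exactly which rule excluded a candidate, even though the ranking itself is learned.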
6. Embodied AI (Robotics + AI)
Description:
AGI systems that learn through physical interaction with the real or simulated world.
Example Projects:
OpenAI Robotics, DeepMind Robotics, Boston Dynamics AI
Learn More:
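The defining feature of embodied approaches is the perception-action loop: the agent only learns from the consequences of its own actions in an environment. A minimal sketch with a made-up one-dimensional world and a fixed policy:

```python
# Minimal embodiment loop: the agent acts, the environment responds,
# and reward only arrives through interaction (toy 1-D world).
def step(position: int, action: int) -> tuple[int, float]:
    """Move left (-1) or right (+1); reward 1.0 while at the goal cell 5."""
    position = max(0, min(5, position + action))
    return position, (1.0 if position == 5 else 0.0)

position, total_reward = 0, 0.0
for _ in range(10):                  # a fixed "always go right" policy
    position, reward = step(position, +1)
    total_reward += reward
```

Real systems replace the integer state with camera images and joint torques, but the structure of the loop is identical.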
7. Multimodal Learning
Description:
Combines text, images, video, audio, and other modalities in one unified model.
Example Projects:
GPT-4-Vision, Gemini, Meta Kosmos, Perceiver IO
Learn More:
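The unified-model idea in miniature: each modality gets its own encoder into a shared vector space, and a single head consumes the fused representation. The "encoders" below are trivial stand-ins, just to show the shape of the architecture:

```python
# Multimodal sketch: per-modality encoders into one shared vector,
# then downstream layers see only the fused representation.
def encode_text(s: str) -> list[float]:
    return [len(s) / 10.0, s.count(" ") / 10.0]       # stand-in features

def encode_image(pixels: list[int]) -> list[float]:
    return [sum(pixels) / 255.0 / len(pixels), max(pixels) / 255.0]

def fuse(*embeddings: list[float]) -> list[float]:
    return [x for e in embeddings for x in e]         # simple concatenation

joint = fuse(encode_text("a red square"), encode_image([200, 10, 10]))
```

Models like Gemini and Perceiver IO learn these encoders jointly and fuse with attention rather than concatenation, but the shared-representation idea is the same.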
8. Simulation-Based AGI / World Models
Description:
Trains agents in virtual environments to model and predict the world, like a mental simulation.
Example Projects:
World Models, MuZero, Voyager
Learn More:
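The world-model recipe has two stages: fit a dynamics model from experience, then plan by rolling that model forward in imagination instead of acting in the real world. A minimal sketch with a made-up linear environment:

```python
# World-model sketch: learn the environment's dynamics from data,
# then "mentally simulate" actions to pick the best one.
def true_env(state: float, action: float) -> float:
    return state + 2.0 * action          # ground truth, unknown to the agent

# 1. Collect experience and fit a model of the form next = state + k*action.
data = [(s, a, true_env(s, a)) for s in [0.0, 1.0] for a in [-1.0, 1.0]]
k = sum((ns - s) / a for s, a, ns in data) / len(data)   # recovers 2.0

# 2. Mental simulation: choose the action whose *imagined* outcome is best.
def plan(state: float, goal: float, actions=(-1.0, 0.0, 1.0)) -> float:
    return min(actions, key=lambda a: abs((state + k * a) - goal))

best = plan(state=0.0, goal=2.0)
```

MuZero does essentially this at scale, learning the dynamics model with a network and planning over it with tree search.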
9. Evolutionary / Emergent Approaches
Description:
Uses population-based methods or open-ended environments to evolve intelligent agents.
Example Projects:
POET (Uber AI), Open-ended Learning, NEAT
Learn More:
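The flavor of population-based search fits in a few lines: keep the fitter half of a population, refill it with mutated copies, and repeat. This is a much-simplified cousin of NEAT (which also evolves network topology); the fitness function here is a made-up toy:

```python
import random

# Population-based search in miniature: select, mutate, repeat.
def evolve(fitness, generations=60, pop_size=20, seed=0):
    rng = random.Random(seed)            # seeded for repeatability
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half of the population,
        # refill with Gaussian-mutated copies of the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [p + rng.gauss(0, 0.5) for p in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Maximize a simple fitness peak at x = 3.
best = evolve(lambda x: -(x - 3.0) ** 2)
```

Open-ended methods like POET add a second loop that evolves the environments themselves, so the "fitness function" keeps changing and behavior can grow without bound.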
Where to Follow AGI Research