Modeling intelligence after real neurons. Thoughts?

Neural networks today are only very loosely modeled after real neurons. In practice, they’re very complex mathematical systems that model the large amounts of data they’re trained on, and they cannot learn new things very well. Arti’s Connectome, our AI company, wants to create new AI solutions using neuromimetic intelligence: intelligence modeled after real neurons. We want to get the community’s thoughts on this approach. We feel it will be a more interpretable, more reliable, and safer path to truly intelligent systems. Thoughts?

First, as they say in ZEN, find the source of intelligence! :slight_smile:

1 Like

Maybe, but real neurons are still largely unexplored: we don’t know how many types there are or how they work collectively.

As a ballpark estimate, this article says that, as far as emulation power goes, a “natural neuron” needs “about 1000 artificial neurons”: the behaviour of a natural neuron is well approximated by having 1000 artificial ones learn it:

The key lies in finding an adequate mathematical model of what the system does, and of what it should do. Therein lies the complication. That, and creating a usable way to perform I/O on that thing.

Going for the complexity of a “natural” system will not make the system naturally more interpretable, reliable or safer. Rather the contrary.

OTOH, working with more “natural” systems can certainly lead to great research and new insights.

I remember Hinton saying that while psychologists reject the idea of there being backpropagation in “wetware”, it being an improbable mechanism, they should maybe take a second look, as it would not be surprising if nature had already found the trick that we learned about only in the last 30 years or so.

Update:

I just remembered the research branch of “Pulsed Neural Networks”, sometimes also called “Spiking Neural Networks”, which are considered “more lifelike” and probably have interesting features, if one understands how to handle them.

MIT Press edited a book in 1998 (!) on that:

https://direct.mit.edu/books/edited-volume/2001/Pulsed-Neural-Networks

It’s still completely paywalled. There certainly is fresher literature by now.
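To give a flavour of what these models look like, here is a minimal leaky integrate-and-fire neuron in Python. This is only a toy sketch: the parameters and the function name are made up for illustration, not taken from the book or fitted to any real cell.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward its resting value and a spike is emitted whenever it
# crosses a threshold. All constants are illustrative only.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, plus input drive.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:          # threshold crossed -> spike
            spikes.append(t)
            v = v_reset            # reset after spiking
    return spikes

# Constant drive makes the neuron fire at a regular rate;
# no drive means no spikes at all.
print(simulate_lif([0.1] * 100))
print(simulate_lif([0.0] * 100))
```

The point of the exercise is that information is carried in the *timing* of the spikes, not in a continuous activation value as in conventional artificial neurons.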

1 Like

About this, it may be in the higher-level system

Intelligence May Not Be Computable

Peter J. Denning, Ted G. Lewis

Appears in: American Scientist November/December 2019

Much of AI research has been predicated on the assumption that the brain is a computer and the mind is its software. Cognitive scientists now believe that the structure of the brain itself—intricate folds, creases, and cross-connections—gives rise to consciousness as a statistical phenomenon of brain activity. But even further, it appears that much of what we think we know is actually distributed in the social networks of which we are a part; we “recall” it by interacting with others. Chilean biologists Humberto Maturana and Francisco Varela argued that biological structure determines how organisms can interact and that consciousness and thought arise in networks of coordination of actions. A conclusion is that autonomous software and biologically constructed machines will not be sufficient to generate machine intelligence. In ways we still do not understand, our social communities and interactions in language are essential for general intelligence.

Who knows!

See also:

An AI Learning Hierarchy

(An article that is much too short)

Peter J. Denning, Ted G. Lewis

https://cacm.acm.org/opinion/an-ai-learning-hierarchy/

Appears in: “Communications of the ACM”, December 2024

A hierarchy of AI machines organized by their learning power shows their limits and the possibility that humans are at risk of machine subjugation well before AI utopia can come.

1 Like

As for the 1000 artificial neurons required to “emulate” a real one: they only emulate a small piece of the neuron’s input/output pattern. The artificial neurons cannot take in new input patterns that the real neuron might receive and accurately emulate the real neuron’s response. They also cannot learn the way the real neuron does.

For me, that is an example of why current neural networks are doing intelligence the wrong way. Current neural networks model their training data, creating a complex math function that “best fits” the data without overfitting. So, in the end, it’s a math function. Math functions cannot tell you what features or properties the “object” they “recognize” have. You cannot ask the neural network what makes a red truck a red truck. With the way real neurons are networked together, and how they are built to respond to and handle input, real neurons can tell us what makes a red truck a red truck. Real neurons fire more frequently when the input they receive more closely matches their preferred stimulus. Essentially, real neurons are “programmed” to respond to particular inputs and ignore others. In fact, it has recently been discovered that the oblique apical dendrite synapses of pyramidal neurons do not learn after birth and initial development. They are preprogrammed synapses.
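That “preferred stimulus” idea can be illustrated with a toy tuning curve. The Gaussian shape and all parameters here are invented for illustration; this is not a model of any specific neuron type.

```python
import math

# Hypothetical tuning curve: the firing rate peaks at the neuron's
# preferred stimulus and falls off as the input moves away from it.
def firing_rate(stimulus, preferred=0.0, width=1.0, max_rate=100.0):
    """Firing rate (spikes/s) as a Gaussian of stimulus distance."""
    d = (stimulus - preferred) / width
    return max_rate * math.exp(-0.5 * d * d)

# The closer the stimulus is to the preference, the higher the rate.
print(firing_rate(0.0))  # at the preferred stimulus: maximum rate
print(firing_rate(2.0))  # far from it: a much lower rate
```

Reading out *which* neurons fire most, rather than the value of one big fitted function, is what would let you ask the system which features drove its response.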

My theory is that real brains work like the physical universe. All matter in our universe is made up of some combination of the elements on the periodic table. Those combined elements continue to combine together to create more complex structures. Eventually, those same elements create lifeforms as well. I believe that the brain works similarly. I believe our brains are “preprogrammed” with very basic features (and I mean more basic than we may know of or understand currently, in some cases). The brain can detect these features from birth. From there, our understanding of everything is through the combination of these features.

More importantly, neuroscience research has come a long way since the days of Hinton, and since spiking neural networks. The truth is, there is a lot that neuroscience has learned that has not been modeled in any way for the purposes of AI. This is another area we are looking toward. We want to create far more complete models and see if we can create more intelligent systems with today’s knowledge of the brain.

1 Like

From my understanding, of course, you can find better ways of doing things. The conventional neurons themselves, be it the ones used in deep learning or the ones in the brain, are just “propagation or transformation paths”. You are the one who interprets what the neuron is feeding you; you are the real intelligent part of the system!

While that’s true, the most likely way to create AI is to model real intelligence, which means modeling real neurons. They should at least be modeled until we fully understand what they do to create intelligence. Then, perhaps, we could create more sophisticated algorithms that don’t need any kind of neurons. However, it should also be noted that neural modeling is useful for more efficient hardware and energy usage too.

Well, that is excessively reductionist, whereby you look for an attribute of the whole system (“intelligence”) in one of its parts (“neuron”). That’s just not going to work; it’s like looking for evidence of Microsoft Word in a CMOS gate. Wrong level of detail. But why not grab the latest book on computational neuroscience … mine are all from the 90s, and there are bound to be better texts by now.

In order to “not need neurons” you will have to express the function somehow differently. Maybe rules. But whether that will be better is yet another question. There are efforts at distilling rulesets from neural networks, IIRC, similar to distilling rulesets from a classification tree. But you always lose something.

Like this one (photo grabbed from the Internet), I remember that it was quite interesting reading it, but apart from that…

See also this little gem:

https://dl.acm.org/doi/pdf/10.1145/42404.42406

“Undebuggability and Cognitive Science” by Chris Cherniak (1988)

1 Like

Not proven. That similarity only holds if you want your AI system to be implemented based on building a computer that uses organic chemistry.

Semiconductor physics and organic chemistry are not closely related.

1 Like

We don’t look for intelligence in just the neuron. We model the neurons (various types and morphologies) and network them together to model circuits and systems that neuroscience has discovered in the brain, in an attempt to discover how they really work and be able to replicate and even alter them to our advantage. By advantage, we mean using modeled neural systems to perform specialized tasks that make for useful world tools. As we model more and more of how the whole brain works, we hope to one day successfully model full intelligent systems on the level of animal brains and, one very far away day, humans.

1 Like

It’s not unproven either. We won’t know until we’ve made every attempt. We want to explore how real intelligence works, attempt to model it, and, if successful, use it to create intelligent machines and tools. We also want to model real intelligence and neural systems because the knowledge can potentially be used for understanding and treating brain disorders.

Modeling intelligence after real neurons—known as neuromorphic computing or biologically inspired AI—is a promising and evolving approach to artificial intelligence and cognitive computing. The idea is to replicate biological neural networks’ structure, function, and adaptability to create more efficient and intelligent machines. Here’s an in-depth look at its potential, challenges, and future implications.


:rocket: Why Model Intelligence After Real Neurons?

1. Energy Efficiency & Parallel Processing

  • The human brain is highly efficient, consuming only about 20 watts of power to perform computations far beyond even the most advanced supercomputers.
  • Unlike traditional CPUs/GPUs, which process data sequentially, neurons work in parallel, allowing for real-time decision-making with minimal power usage.
  • Neuromorphic chips (e.g., Intel Loihi, IBM TrueNorth) mimic this parallel processing approach to achieve AI with low power consumption.

2. Adaptability & Learning (Plasticity)

  • Biological neurons self-organize, rewire, and adapt based on experience (synaptic plasticity). This allows learning without preprogrammed rules.
  • Current deep learning systems require massive datasets and retraining to update their knowledge, whereas neuromorphic systems could dynamically adjust connections on the fly, mimicking actual brain function.
  • Hebbian Learning (“Neurons that fire together, wire together”) and Spike-Timing Dependent Plasticity (STDP) are mechanisms that can be replicated to allow self-learning AI.
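The STDP rule mentioned above can be sketched in a few lines. This is the common pair-based textbook form; the amplitudes and time constants are made-up illustrative values, not fitted to biology.

```python
import math

# Pair-based spike-timing-dependent plasticity (STDP): if the
# presynaptic spike precedes the postsynaptic one, the synapse is
# strengthened; if it follows, the synapse is weakened. The effect
# decays exponentially with the spike-time difference.
def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Return the weight change for one pre/post spike pair (ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post before pre -> depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_delta_w(10.0, 15.0))  # pre leads post: weight increases
print(stdp_delta_w(15.0, 10.0))  # post leads pre: weight decreases
```

Because the update depends only on locally observable spike times, no global error signal or backpropagation pass is needed, which is what makes it attractive for self-learning neuromorphic hardware.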

3. Robustness & Fault Tolerance

  • The human brain can sustain damage (e.g., injury, stroke) yet still function through redundancy and self-repair.
  • Neuromorphic systems inspired by this could develop self-healing AI architectures that adjust to hardware failures or adversarial attacks.

4. Beyond Binary Computing: Spiking Neural Networks (SNNs)

  • The brain operates using spikes (action potentials) instead of binary 0s and 1s.
  • Spiking Neural Networks (SNNs) attempt to replicate this process, firing only when needed, dramatically reducing power usage and increasing computational efficiency.
  • Use Case: AI-powered prosthetics, brain-machine interfaces, and real-time robotic control.

:brain: Challenges & Limitations

1. Lack of Biological Understanding

Neuroscience is still decoding the brain. While we understand neural activity at a basic level, higher cognitive functions, memory formation, and consciousness remain mysterious.

  • Without a full model of cognition, replicating intelligence purely based on neurons is challenging.

2. Hardware Limitations

  • Traditional Von Neumann architectures (used in most computers) are ill-suited for neuromorphic AI, leading to specialized chips (e.g., Intel Loihi, IBM TrueNorth, BrainScaleS).
  • Developing scalable, biologically realistic neuromorphic chips remains a technological bottleneck.

3. Training Complexity

  • Unlike deep learning models trained with gradient descent and backpropagation, neuromorphic AI requires novel learning algorithms (e.g., STDP, Hebbian Learning, Reinforcement Learning).
  • Lack of standardized neuromorphic AI frameworks makes development slower than traditional deep learning.

4. Ethical & Philosophical Questions

  • If AI truly mimics biological intelligence, at what point does it gain autonomy, emotions, or even consciousness?
  • Who is responsible if a neuromorphic AI system makes an unexpected decision?
  • Could self-evolving AI surpass human intelligence in unexpected ways?

:fire: Future Potential & Applications

1. Brain-Computer Interfaces (BCIs)

  • AI systems that seamlessly integrate with the human brain (e.g., Neuralink, Kernel) could restore lost functions (e.g., movement in paralyzed patients) or enhance cognitive abilities.
  • Bidirectional brain-AI interfaces could one day allow direct thought communication.

2. Autonomous AI Agents & Robotics

  • Neuromorphic AI-powered robots could exhibit human-like problem-solving, adapt to new environments, and function in hazardous or unpredictable conditions.
  • Military, healthcare, and industrial applications would benefit from AI that learns in real-time.

3. Human-Like Reasoning & AGI

  • Current AI is narrow (ANI – Artificial Narrow Intelligence), excelling at specific tasks (e.g., image recognition, language translation).
  • Neuromorphic computing could pave the way for Artificial General Intelligence (AGI), an AI that learns, adapts, and reasons across multiple domains like a human.

4. Energy-Efficient AI at Scale

  • AI workloads (e.g., ChatGPT, DeepMind AlphaFold) require massive power consumption.
  • Neuromorphic computing could reduce energy demands while delivering more powerful real-time AI.

:star2: Final Thoughts

Modeling intelligence after real neurons is one of the most promising pathways toward more efficient, adaptable, and human-like AI. While neuromorphic computing faces significant challenges, breakthroughs in biologically inspired neural networks, spiking neurons, and self-learning AI could revolutionize everything from healthcare to robotics to AGI.

If humanity succeeds in replicating true biological intelligence, we may unlock AI that thinks, learns, and evolves just like us—but exponentially faster.

:rocket: What do you think? Would you be interested in neuromorphic AI for psychiatry, mental health interventions, or cognitive enhancement?

I agree with most of what you’ve shared. However, neuroscience knows a lot more than the AI field is even attempting to model or consider in its models. This is where we are doing it differently. Real neurons do more than just emit spikes. Also, glial cells have active roles in learning and information processing, so neurons are not alone in the intelligence game anymore. And finally, learning in the brain isn’t just STDP or Hebbian learning. In fact, neuromodulators like serotonin, dopamine, and others can alter the learning parameters, or even turn learning off, at individual synapses or dendritic compartments.

In any case, there is a large, rich amount of unmodeled and untapped potential for neuromimetic intelligence. Also, “neuromorphic” refers more specifically to hardware made to mimic the brain; “neuromimetic” refers to modeling real neurons to create intelligent systems.

Is it really appropriate to post ChatGPT wall of texts? :joy:

Wouldn’t it be simpler to post the query? :thinking:

There’s no rule about how the content of a post is created, as long as it’s on-topic for the discussion.