How does a human perform vs. an LLM?

How is it possible that we humans perform such cognitive tasks at such a young age, without the speed or computing power of a “row of Nvidia GPUs” and gargantuan amounts of memory?
Is it because the human mind approximates more, and because it mixes in other inputs such as visual and auditory cues (intonation, exclamation, …)?

Do you think that the human brain has no memory or computing power comparable to a modern CPU’s?

Brain-Inspired Computing Can Help Us Create Faster, More Energy-Efficient Devices — If We Win the Race.

I am not a neuroscientist, but my understanding is that the number of neural connections in a human brain is still much larger than the parameter count of today’s most powerful LLMs. I’m also not sure that neuroscientists would claim to have a complete explanation yet for how human intelligence is actually created, represented, and actualized by the biological structures of the brain.

I just did a bit of quick googling and found this article from NIH. Here’s a quote from the first page:

  • There are approximately 100 billion neurons in a mature human brain.[3] The naturally occurring neuronal cell death occurs prenatally, and elimination of about 50% of unwarranted connections among neurons occurs postnatally. Each neuron can make connections with more than 1000 other neurons, thus an adult brain has approximately 60 trillion neuronal connections.

This reference claims that the number of parameters in GPT-4 is 1.76 trillion.
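
Just to put those two figures side by side (taking both the NIH estimate and that unconfirmed GPT-4 number at face value), a quick back-of-the-envelope calculation:

```python
# Both figures are rough estimates quoted above, not measured facts.
brain_connections = 60e12    # ~60 trillion neuronal connections (NIH estimate)
gpt4_parameters = 1.76e12    # ~1.76 trillion parameters (unconfirmed figure)

print(f"ratio: {brain_connections / gpt4_parameters:.0f}x")  # -> ratio: 34x
```

So even granting that a parameter and a synaptic connection are very different things, the brain estimate is still well over an order of magnitude larger.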

Wow, thank you both for your insights.

I never thought of mapping our brain’s neurons onto the transformer model with its encoder/decoder functionality:

a nerve cell accumulates electrical activity from its neighbouring nerve cells based on their “importance” and outputs electrical activity based on its aggregated input
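
To make that analogy concrete, here is a minimal sketch of the digital counterpart of that sentence (the numbers are purely illustrative): each neighbour’s output is scaled by a weight that plays the role of its “importance”, summed, and passed through an activation function.

```python
import numpy as np

# One artificial neuron: weighted aggregation of neighbouring outputs.
def neuron_output(inputs, weights, bias):
    aggregated = np.dot(weights, inputs) + bias   # "importance"-weighted sum
    return 1.0 / (1.0 + np.exp(-aggregated))      # sigmoid activation

# Illustrative values: three neighbouring cells feeding this one.
inputs = np.array([0.9, 0.1, 0.4])
weights = np.array([1.5, -0.8, 0.3])
print(neuron_output(inputs, weights, bias=-0.2))  # ~0.77
```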

Could we think of a neuron as a token and the electrical charges as statistical probabilities?

We could even speculate that the lack of precision we humans exhibit is tied to the efficiency gains that come with such smaller, approximate constructs.
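
On the digital side, at least, that trade-off is very real: networks are routinely run at reduced numeric precision to save memory and energy, accepting small rounding errors in exchange. A minimal sketch of the idea (illustrative only):

```python
import numpy as np

# Trading precision for efficiency: naive 8-bit quantization of weights.
weights = np.random.randn(1000).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # map the float range onto int8
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale

print(f"storage: {weights.nbytes} -> {quantized.nbytes} bytes")       # 4000 -> 1000
print(f"max rounding error: {np.abs(weights - restored).max():.4f}")  # small
```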

Continuing to extrapolate from this article, however, we do deviate from LLMs: speed in the human brain is vital and a differentiating factor in results, and multiple sensory inputs also appear to be handled partially differently.

This is all truly an eye opener, for someone like me anyway.

It’s great to see that you read that NIH article more deeply than I did! :nerd_face: I was just looking for that one number, the approximate count of neural connections in the brain, as a rough benchmark to compare against the complexity of LLM models.

Prof Ng comments early in DLS Course 1 on the history of Neural Networks in Computer Science. The ideas were originally formulated in the 1950s as a digital model of how the neural architecture of the brain works and, of course, the model is only approximate. Back in the early days of these ideas, researchers didn’t have the compute power to solve any but the most constrained problems.

It is also my understanding, just from reading a few books and articles as a layman, that neuroscience continues to advance in its understanding of the structure and functioning of the brain. One book about neuroscience that I found really interesting in the last couple of years was A Thousand Brains by Jeff Hawkins. He describes some of the latest thinking about the architecture of the neural networks in the brain and how they become specialized to handle different tasks. I don’t know how widely accepted his theories are at this point, but I thought there were some nice analogies to the way our digital neural networks learn and become specialized to solve given problems.

That sounds like a good way to describe how both biological and digital neural networks work at the level of the individual neurons. In each case, you have a collection of inputs which are the outputs of other neurons, and in the biological case the cell will “fire” if the sum of the inputs reaches some threshold level. That’s the limit of my knowledge of neuroscience: I don’t know how learning takes place in the biological case at the level of the neurons, or whether that bears any resemblance at all to the digital approach with cost functions and back propagation. If you think about how a human infant learns language, there is at least some analogy to “supervised learning”: an adult points and says “Oh, look at the kitty!” Somehow, with just a few repetitions, the child is able to use the inputs from their visual and auditory cortices to derive the correct information and recognize cats and the associated words.
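
As a toy illustration of that threshold-firing picture (this is the classic perceptron learning rule standing in for biology, and the “furry”/“meows” features and kitty labels are entirely made up), a unit that fires when its weighted input is positive can learn a concept from a few labeled repetitions:

```python
import numpy as np

# Toy perceptron: the unit "fires" (outputs 1) when its weighted sum is positive.
def fires(x, w):
    return 1 if np.dot(w, x) > 0 else 0

# Hypothetical features ("furry", "meows", constant 1 for the bias);
# label 1 means "kitty", which requires both cues together.
examples = [(np.array([1.0, 1.0, 1.0]), 1),
            (np.array([1.0, 0.0, 1.0]), 0),
            (np.array([0.0, 1.0, 1.0]), 0),
            (np.array([0.0, 0.0, 1.0]), 0)]

w = np.zeros(3)
for _ in range(10):                      # a few "repetitions" from the adult
    for x, label in examples:
        error = label - fires(x, w)      # supervised signal: right or wrong
        w += 0.5 * error * x             # perceptron update rule

print([fires(x, w) for x, _ in examples])   # -> [1, 0, 0, 0]
```

Real biological learning is surely far richer than this, but it does capture the “just a few repetitions of a labeled example” flavor.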

Thanks Paul, onwards and upwards with the course!

An interesting development: Supercomputer equivalent to human brain.
Still, it will use a gigantic amount of power and space compared to a tiny human brain.