Consciousness and self-awareness

I think the key to building a machine that has consciousness and self-awareness is inventing a radically new CPU architecture that executes code by inferring conclusions about the data it detects from reality, including data about itself and its capabilities.


Interesting point. If you have any ideas about how to actually accomplish that, please share them. Well, if you have such ideas, they will end up being pretty valuable, so I would totally understand if you wanted to keep them proprietary at least for the moment. But my understanding is that neuroscientists don’t currently have a theory for what the actual mechanisms of “consciousness” are. It’s not even clear they have a definition of consciousness that they can agree upon. Meaning that my take is that you’re basically into the realm of SciFi here. :nerd_face:

How much reading have you done in this area? I found Annaka Harris’s book Conscious to be a very helpful guide to the current thinking. Of course there are many other books to be found about consciousness. The book A Thousand Brains by Jeff Hawkins is also definitely worth reading.


I think consciousness is basically self-awareness of one's own existence and of external reality. That raises the question: is human consciousness equal to, say, a dog's consciousness? I don't think so. Human consciousness includes the ability to infer an understanding of reality and of the meaning of events taking place in it. Dogs have only a limited form of this additional element of consciousness, influenced by primitive instinctual bias.

For a machine to exhibit awareness and consciousness, it will need a completely different processing architecture, not one based on binary operations. Even the 'smartest' and most powerful computer today is just a glorified calculator. Nothing more.

So from a practical p.o.v., consciousness includes both knowledge of reality (although I think philosophers and neuroscientists might disagree with you on that) and some form of "self-awareness". Fair enough. But then the question is why can't you achieve that with a substrate made of silicon instead of flesh? One argument that has been made (e.g. in a discussion between Sam Harris and Max Tegmark on a podcast a few years ago) is that intelligence should be "substrate independent". The argument is that our brains are nothing more than computers constructed of meat, and the fundamental operation is the firing of neurons, which you could view as equivalent to a logic gate outputting either a 0 or a 1. So what is different about a computer made of meat?
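
To make that "meat logic gate" analogy a bit more concrete, here's a toy sketch in Python (the weights and threshold are made up for illustration, and real neurons are of course analog and spiking): a single unit that fires when its weighted inputs cross a threshold behaves like an AND gate, which is the sense in which a neuron's output can be read as a 0 or a 1.

```python
import numpy as np

def neuron_fires(inputs, weights, threshold):
    """Toy 'neuron': output 1 if the weighted sum of the inputs
    crosses the threshold, else 0 -- much like a logic gate."""
    return int(np.dot(inputs, weights) >= threshold)

# With these (arbitrary) weights and threshold the unit acts as an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron_fires(np.array([a, b]), np.array([1.0, 1.0]), 2.0))
```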

Such a radical and innovative processing architecture will require a large array of multiple inferential feedback loops. This is how I believe human consciousness works. We observe something, form an opinion about it, observe again in light of the previous observation, and form a more informed opinion about it. This process continues until we are satisfied that our interpretative opinions about the repeated observations amount to an understanding of reality.

Isaac Newton was doing this, although he didn't realise it at the time, when he discovered gravity by repeatedly observing an apple falling from a tree and forming interpretative opinions about it.
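
If it helps, here is a deliberately crude sketch of that loop in Python (the observation function and the "confidence" heuristic are invented purely for illustration): each new observation is folded into the previous opinion, and the loop continues until the opinion feels settled.

```python
import random

def observe():
    # Stand-in for one noisy observation of reality (hypothetical sensor).
    return 9.8 + random.gauss(0, 0.5)

belief, confidence, n = None, 0.0, 0
while confidence < 0.95:                 # keep observing until we are "satisfied"
    x = observe()
    n += 1
    # Fold the new observation into the previous opinion (running average).
    belief = x if belief is None else belief + (x - belief) / n
    confidence = 1.0 - 1.0 / n           # crude proxy for how settled the opinion is

print(f"settled opinion after {n} observations: {belief:.2f}")
```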

But what is the new architecture you are proposing or imagining? What you describe sounds like it could be done with software running on current architectures. In other words, the problem isn't the fundamental Von Neumann architecture; it's just a software question, isn't it?

Note that there are a lot of AI experts with varying opinions about all this. What you are saying reminds me of the writings of Gary Marcus, who’s one of the experts who takes the position that the current LLMs are simply not up to the task of actual intelligence. LLMs simply don’t have the ability to actually “reason” about things. So the question is what is the software architecture that can implement what you are talking about.


You have to think about the vast number of neurons in the human brain and their interconnectivity. That interconnection inherently implies an inferential feedback architecture, stimulated by repeated observations of reality via the physical senses and by previous similar observations from recent or long-term memory.

Fair enough. But my point is that you started out by stating that we can’t achieve that without new computer hardware different than the Von Neumann architecture that we’re currently using. I’m making the point that I think you’re looking at this at the wrong level: it’s not a hardware problem, it’s a software problem. The current Von Neumann architecture should be fine, but clearly we have a long way to go in figuring out how to write the software.
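
To illustrate what I mean by "it's a software problem": the kind of inferential feedback loop you describe can already be sketched in ordinary code on today's hardware, e.g. as a recurrent update in which the next state depends on both the fresh observation and the remembered previous state. A toy sketch (the sizes and weights are arbitrary, and this is obviously nowhere near consciousness):

```python
import numpy as np

rng = np.random.default_rng(0)

state_size, input_size = 8, 4
W_in = rng.normal(size=(state_size, input_size)) * 0.5    # senses -> state
W_rec = rng.normal(size=(state_size, state_size)) * 0.1   # state -> state (feedback)

state = np.zeros(state_size)        # "memory" of everything observed so far
for t in range(5):
    observation = rng.normal(size=input_size)              # stand-in for the senses
    # The new state depends on both the fresh observation and the previous state.
    state = np.tanh(W_in @ observation + W_rec @ state)
    print(t, np.round(state, 2))
```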

We’re clearly not going to solve this here in this thread, but it’s fun to think about. Have you taken any of the courses here yet about how Neural Networks work and how to build them?

@ai_is_cool, I just note that you’re making a few rather bold claims based on your opinion rather than proof or analysis.

It makes for an interesting discussion.

Actually maybe there is a new computer architecture coming soon to a theater near you that may contribute to this space: quantum computing! But I have bad news for you: quantum computers also need software to drive them. Yes, the atomic operations may be richer and more expressive than dealing with 0s and 1s, but the atomic operations don’t get you that far.
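
Just to make the "richer atomic operations" point concrete: a classical bit is a 0 or a 1, while even a single qubit is described by two complex amplitudes. A tiny sketch that simulates one qubit in NumPy (simulates -- nothing here runs on quantum hardware):

```python
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)                   # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

superposition = H @ ket0                 # equal superposition of |0> and |1>
probs = np.abs(superposition) ** 2       # measurement probabilities
print(probs)                             # [0.5 0.5]
```

Note that everything steering the qubit -- which gates to apply and in what order -- is still plain old classical software.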

No one can provide proof of human consciousness, but no one doubts its existence, because we experience reality and ourselves directly. Others can agree with our personal observations of reality, though not necessarily with our observations of ourselves, because of bias.

Yes, that’s one aspect of the so-called “Hard Problem of Consciousness”. It’s totally “in your face” every second that you’re awake and arguably part of the time you’re asleep as well. So there’s no denying it exists, but it’s hard to actually define it or to explain the biological mechanisms that give rise to it. You referred earlier to an example of the questions that naturally arise, e.g. is a dog conscious? Or how about a lizard or a mosquito?

If we wanted to create that in hardware and software, how would we do that?

But you didn’t answer my other question: do you know how Deep Neural Networks work?

I haven't read it yet, but one text I've heard mentioned (with direct implications for AI): https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind

When I took neuroscience, now a billion years ago as an undergraduate, I focused my final paper on Daniel C. Dennett's 'Consciousness Explained'. I think the ideas there might look rather out of date now.

@paulinpaloalto as you ask about DNNs -- personally I don't think brains do 'backprop', at least not in the way more modern networks are structured. Which is not entirely to say it doesn't happen at all, but it would be a more 'roundabout' form of back-prop.

I know there is also such a thing as graph neural nets, but they are not taught here and I haven't gotten around to them yet. But just from either a realistic or a maths perspective, the layout of neurons a la Cajal certainly looks more like a 'graph' than a traditional DNN layout.
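
For what it's worth, the core of a graph neural net layer is just "message passing" over an arbitrary adjacency structure instead of fixed layer-to-layer wiring, which is part of why Cajal's drawings look more graph-like to me. A minimal sketch with made-up numbers (not the API of any particular GNN library):

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny graph of 4 "neurons" with arbitrary wiring (adjacency matrix),
# rather than the strict layer-to-layer wiring of a standard DNN.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
features = rng.normal(size=(4, 3))       # one feature vector per node
W = rng.normal(size=(3, 3)) * 0.5        # shared transform (made-up weights)

# One round of message passing: each node aggregates its neighbours' features.
updated = np.tanh(A @ features @ W)
print(np.round(updated, 2))
```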

Consciousness or self-awareness must be something different from the "mind" of the computer, because conclusions appear "in" consciousness.

How would you apply that idea to the human mind and human consciousness? Is consciousness also something different from our “mind”? A big part of the difficulty in discussing these subjects is getting to a clear definition of what you mean by the various terms.

This is not a place where we actually do philosophy or neuroscience. So maybe it’s better to get back to talking about neural networks of the software variety.

@paulinpaloalto rewatching in my mind the first time I saw “Pi” screened at Hampshire College-- and, no, I was not stoned :wink:


I'm also studying Demystified Mindfulness from Leiden University on Coursera; it's a great course for understanding and practicing consciousness.

What do you mean by the “mind” of a computer?

@ai_is_cool we are kind of wandering out onto the fringes here, but an analogy I might use is that every body of mass has a center toward which it gravitates.

Of course consciousness is much more complicated than that and has its fringes and effects, but at least at the heart of it there is a unification principle, as well as a particular observer's perspective that (generally) does not change.

‘Cogito, ergo sum’ (Descartes’s reduction, though he had another point in mind).

Anyways, back to 'reality': I am very much in agreement with you that a new ISA is needed. It will probably not be your traditional Von Neumann (or even, with his paper tape, Turing) type stack machine.

Can it even be done with transistors at all? (Consider that the neural networks we design now are really 'simulations' running on top of billions of basically 'on/off' switches.)

All good questions, I think, to which I do not know the answers. It is obviously not yet 'native' at the hardware level, though.