I think the key to building a machine that has consciousness and self-awareness is inventing a radically new CPU architecture that executes code by inferring conclusions about the data it detects from reality, including data about itself and its capabilities.
Interesting point. If you have any ideas about how to actually accomplish that, please share them. Well, if you have such ideas, they will end up being pretty valuable, so I would totally understand if you wanted to keep them proprietary, at least for the moment. But my understanding is that neuroscientists don't currently have a theory of the actual mechanisms of "consciousness". It's not even clear they have a definition of consciousness that they can agree upon. Meaning that my take is that you're basically into the realm of SciFi here.
How much reading have you done in this area? I found Annaka Harris's book Conscious to be a very helpful guide to the current thinking. Of course there are many other books to be found about consciousness. The book A Thousand Brains by Jeff Hawkins is also definitely worth reading.
I think consciousness is basically self-awareness of one's own existence and of external reality. This raises the question: is human consciousness equal to, say, a dog's consciousness? I don't think so. Human consciousness includes the ability to infer an understanding of reality and of the meaning of events taking place in it. Dogs have only a limited form of this additional element of consciousness, influenced by primitive instinctual bias.
For a machine to exhibit awareness and consciousness, it will need a completely different processing architecture, not one based on binary operations. Even the "smartest" and most powerful computer today is just a glorified calculator. Nothing more.
So from a practical p.o.v., consciousness includes both knowledge of reality (although I think philosophers and neuroscientists might disagree with you on that) and some definition of "self-awareness". Fair enough. But then the question is why you can't achieve that with a substrate made of silicon instead of flesh. One argument that has been made (e.g. in a discussion between Sam Harris and Max Tegmark on a podcast a few years ago) is that intelligence should be "substrate independent". The argument is that our brains are nothing more than computers constructed of meat, and the fundamental operation is the firing of neurons, which you could view as equivalent to a logic gate outputting either a 0 or a 1. So what is different about a computer made of meat?
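To make the neuron-as-logic-gate analogy concrete, here's a minimal sketch using the classic McCulloch-Pitts threshold model (a deliberate simplification for illustration, not a claim about real biology): a unit outputs 1 when the weighted sum of its inputs crosses a threshold, and with the right weights and threshold it behaves exactly like a logic gate.

```python
# A McCulloch-Pitts-style threshold "neuron": fires (1) when the
# weighted sum of its binary inputs reaches a threshold, else 0.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1] and threshold 2, the unit is an AND gate;
# with threshold 1, the same unit becomes an OR gate.
print(neuron([1, 1], [1, 1], 2))  # 1 (AND of 1, 1)
print(neuron([1, 0], [1, 1], 2))  # 0 (AND of 1, 0)
print(neuron([1, 0], [1, 1], 1))  # 1 (OR of 1, 0)
```

Whether a real neuron's firing is well modeled by such a gate is, of course, exactly the point under debate.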
Such a radical and innovative processing architecture will require a large array of multiple inferential feedback loops. This is how I believe human consciousness works. We observe something, form an opinion about it, observe again in light of the previous observation, and form a more informed opinion about it. This process continues until we are satisfied that our interpretative opinions about the repeated observations amount to an understanding of reality.
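That observe-then-refine loop sounds a lot like iterative Bayesian updating, which runs fine on ordinary hardware. A toy sketch, purely illustrative (the coin, its bias, and the Beta-Bernoulli model are my assumptions, not anything from neuroscience): repeated observations of "reality" progressively sharpen an internal "opinion" about it.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Toy inferential feedback loop: repeatedly observe a biased coin
# and fold each observation into a belief about its bias using
# Bayes' rule (Beta-Bernoulli conjugate update).
true_bias = 0.7          # hidden property of "reality"
alpha, beta = 1.0, 1.0   # uniform prior: no opinion yet

for step in range(100):
    observation = random.random() < true_bias  # one look at reality
    if observation:
        alpha += 1       # evidence for "heads"
    else:
        beta += 1        # evidence for "tails"

# The posterior mean is the current, progressively refined "opinion".
estimate = alpha / (alpha + beta)
print(round(estimate, 2))  # close to the hidden 0.7
```

Which is roughly the point made in the replies below: the loop itself is a software pattern, not obviously a new instruction set.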
Isaac Newton was doing this, although he didn't realise it at the time, when he discovered gravity through repeated observation and interpretative opinion-forming about an apple falling from a tree.
But what is the new architecture you are proposing or imagining? What you describe sounds like it could be done with software running on current architectures. In other words, the problem isn't the fundamental Von Neumann architecture; it's just a software question, isn't it?
Note that there are a lot of AI experts with varying opinions about all this. What you are saying reminds me of the writings of Gary Marcus, who's one of the experts who take the position that current LLMs are simply not up to the task of actual intelligence. LLMs simply don't have the ability to actually "reason" about things. So the question is what software architecture can implement what you are talking about.
You have to think about the vast number of neurons in the human brain and their interconnectivity. That interconnection inherently implies an inferential feedback architecture, stimulated by repeated observations of reality via the physical senses and by previous similar observations from recent or long-term memory.
Fair enough. But my point is that you started out by stating that we can't achieve that without new computer hardware different from the Von Neumann architecture that we're currently using. I'm making the point that I think you're looking at this at the wrong level: it's not a hardware problem, it's a software problem. The current Von Neumann architecture should be fine, but clearly we have a long way to go in figuring out how to write the software.
We're clearly not going to solve this here in this thread, but it's fun to think about. Have you taken any of the courses here yet about how Neural Networks work and how to build them?
@ai_is_cool, I just note that you're making a few rather bold claims based on your opinion rather than proof or analysis.
It makes for an interesting discussion.
Actually maybe there is a new computer architecture coming soon to a theater near you that may contribute to this space: quantum computing! But I have bad news for you: quantum computers also need software to drive them. Yes, the atomic operations may be richer and more expressive than dealing with 0s and 1s, but the atomic operations don't get you that far.
No one can provide proof of human consciousness, but no one doubts its existence, because we experience reality and ourselves. Others agree with our personal observations of reality, but not necessarily with our observations of ourselves, because of bias.
Yes, that's one aspect of the so-called "Hard Problem of Consciousness". It's totally "in your face" every second that you're awake, and arguably part of the time you're asleep as well. So there's no denying it exists, but it's hard to actually define it or to explain the biological mechanisms that give rise to it. You referred earlier to an example of the questions that naturally arise, e.g. is a dog conscious? Or how about a lizard or a mosquito?
If we wanted to create that in hardware and software, how would we do that?
But you didn't answer my other question: do you know how Deep Neural Networks work?
Not read yet, but I've heard this text mentioned (with direct implications for AI): https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind
When I took Neuroscience, now a billion years ago as an undergraduate, I focused my final paper on Daniel C. Dennett's "Consciousness Explained". I think the ideas there might look more out of date now.
@paulinpaloalto as you ask about DNNs -- personally, I don't think they do "backprop", at least in the more modern ways they are structured -- which is not entirely to say it doesn't happen at all, but it is a more "roundabout" back-prop.
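For anyone following the thread who hasn't seen it spelled out: "backprop" in a software DNN is just the chain rule used to push an error signal back onto the weights. A minimal one-weight sketch (illustrative only, not how any real framework is implemented): model y = w * x, squared-error loss, gradient descent on w.

```python
# Minimal "backprop" on a single linear neuron: y = w * x,
# loss = (y - target)^2, updated by gradient descent.
w = 0.0
x, target = 2.0, 6.0   # so the ideal weight is w = 3
lr = 0.05              # learning rate

for _ in range(200):
    y = w * x                      # forward pass
    grad = 2 * (y - target) * x    # dLoss/dw via the chain rule
    w -= lr * grad                 # gradient step

print(round(w, 3))  # converges to 3.0
```

Whether biological neurons implement anything like this gradient flow is, as noted above, an open question.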
I know there is also such a thing as graph neural nets, but they are not taught here and I haven't gotten around to them yet -- but just from either a realistic or a maths perspective, the layout of neurons à la Cajal certainly looks more like a "graph" than a traditional DNN layout.
Consciousness or self-awareness must be something that is different from the "mind" of the computer, because conclusions appear "in" consciousness.
How would you apply that idea to the human mind and human consciousness? Is consciousness also something different from our "mind"? A big part of the difficulty in discussing these subjects is getting to a clear definition of what you mean by the various terms.
This is not a place where we actually do philosophy or neuroscience. So maybe it's better to get back to talking about neural networks of the software variety.
@paulinpaloalto rewatching in my mind the first time I saw "Pi" screened at Hampshire College -- and, no, I was not stoned
I'm also studying Demystified Mindfulness from Leiden University on Coursera -- a great course for understanding and practicing consciousness.
What do you mean by the "mind" of a computer?
@ai_is_cool we are kind of wandering out onto the fringes here, but an analogy I might use is that every body of mass has a center toward which it itself gravitates.
Of course consciousness is much more complicated than that and has its fringes and effects, but at least at its heart there is a unification principle, as well as a particular observer's perspective that (generally) does not change.
"Cogito, ergo sum" (Descartes's reduction, though he had another point in mind).
Anyways, back to "reality": I am highly in agreement with you that a new ISA is needed… This will… probably not be your traditional Von Neumann (or even, with his paper tape, Turing) type stack machine.
Can it even be done with transistors still? (Consider that the neural networks we design now are just "simulations" running on top of billions of basically "on/off" switches.)
All good questions, I think, to which I do not know the answers. It is obviously not yet "native" at the hardware level, though.