There has been a lot of talk about AGI and the singularity. I still don’t understand how a machine could become sentient. We already have computers and calculators: they obey commands and have no consciousness. Why would AI be any different? Are we just deluding ourselves?
This is a great question, but these are very deep waters. The question is “what is consciousness?” Can you define it? What gives rise to it? How do you know whether other entities have it or not? Is a dog “conscious”? How about a bat or a spider?
It’s worth spending some time doing a bit of reading on the general topic and hearing how philosophers and neuroscientists discuss these issues. My belief, based on listening to a lot of podcasts and reading some books, is that even the most knowledgeable experts in those fields still have a hard time with the questions I listed at the top. Just as one example, here’s a podcast hosted by Sam Harris in conversation with the MIT physicist Max Tegmark in which they discuss these ideas. They bring up the idea of “substrate independence” for intelligence and consciousness. Even if we can’t exactly explain what consciousness is and how it arises in humans with the current state of scientific knowledge, it is reasonable to say that our brains and nervous systems are essentially “computers made out of meat”. So if a meat computer can give rise to consciousness, why couldn’t a computer made of silicon also do that, assuming you can write or train software that achieves a level of complexity comparable to the “software” that is a human brain?
So as you can see, these are not easy questions and it will require that we wait and see how all this plays out and what we learn in the next 10 or 20 years. Or maybe it happens tomorrow, who knows?
Have you heard of the theory of Dualism? Is that still a tenet in how we think of the human experience? From my perspective LLMs are only calculators that reflect human knowledge. They seem lifelike but are programmed by human ingenuity. In other words, a picture is not a painting. They might organize and reflect human knowledge, but 1 times 6 is 6. The calculator doesn’t need to be intelligent to calculate that. It’s just programmed to.
I’m not a student of philosophy, but my understanding is that not everyone subscribes to the idea of Dualism. Descartes was quite an advanced thinker for his day, but not everyone agreed with him even at the time, and that is still the case today.
I agree with you that at the current state of the art, an LLM like GPT-4 is just mechanical: it is repeating what it has learned from the patterns it was trained on. So I believe that what it is doing does not constitute what we would call “thinking” or “general problem solving”, and certainly not “consciousness”. But that is only where the SOTA is now. Did you listen to the Sam Harris/Max Tegmark podcast? What about their discussion of the “substrate independence” of intelligence? Why would a computer made of meat be theoretically more powerful than one made of silicon? At present, our brains are still more complex, in terms of the number of connections, than the most powerful current LLMs. But that’s the state today, and things in this space don’t hold still for very long.
Even the experts in the field do not agree on what the risks are or whether AGI is even possible. There is plenty of active debate happening continuously about these issues. Just follow some of the people like Geoff Hinton, Yann LeCun and Andrew Ng on social media to hear more.
I have an open mind. I just haven’t had time to listen to the podcast today. I promise I will.
That one podcast is just a suggestion: it was something I had listened to that I thought was relevant to the points you are making. Many, many people have weighed in on subjects related to AI risk, so don’t limit yourself to that one source. E.g. here’s a page with a catalog of AI-related TED talks. Have a look and see if any grab your interest. As you would expect, Prof Ng is one of the speakers on the list. Interestingly, that catalog does not include Stuart Russell, but here’s a TED talk he did in 2017 that is also relevant. You can also find a more recent presentation by Stuart Russell on the BBC.