AI: Is it real intelligence?

Dear Friends, I have taken two courses in AI from Dr. Andrew Ng, and a third one is in progress. They are very interesting and informative.

I often get these questions in my head: if we build a model with regression or a deep neural network, we are essentially providing data to the model (like if…else conditions). Can the model really think like humans? Moreover, LLMs like GPT are trained on much larger datasets; is that real intelligence? Can't we simply call it automation?
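To make that comparison concrete, here is a minimal Python sketch (the loan example, its data, and the threshold are entirely made up for illustration). In the first function, a human writes the rule; in the second, the "rule" is a set of weights fitted from examples:

```python
# A hand-written rule: the "knowledge" is encoded by the programmer.
def approve_loan_rule(income, debt):
    # Hypothetical human-chosen threshold (amounts in $1000s)
    return income - debt > 20

# A learned model: the same kind of decision, but the boundary is
# fitted from examples instead of written as an if...else condition.
from sklearn.linear_model import LogisticRegression

X = [[50, 10], [30, 25], [80, 5], [20, 18]]  # made-up (income, debt) pairs
y = [1, 0, 1, 0]                             # made-up outcomes: 1 = repaid

model = LogisticRegression().fit(X, y)
print(model.predict([[40, 12]]))  # decision comes from learned weights
```

Both produce a yes/no decision; the difference is where the decision boundary comes from.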

In fact, self-driving cars are also trained on data. Are they really thinking like humans?

My second question: if this is automation, I agree it will replace some entry-level jobs. But if it is intelligence, don't you think it might replace many more jobs, considering a study saying human beings use only 1% of their brain?

How do I explain this to my colleagues, being a senior software engineer myself? I want to be clear in my own mind. Or am I missing some larger research work that is going on?

Thanks in advance for the clarity.


Hi @BharathV1!

Well… the question of whether it is real intelligence or automation is a deep one, and I suspect one with no definitive answer, for the time being at least.

On the topic of the LLMs' intelligence, I really enjoyed a recent interview with Geoffrey Hinton, in which at some point he elaborates on the parallels he finds between the functioning of the human brain and the computations performed by neural networks.

I would say that, although I really admire his thinking and hypotheses, we have no way of knowing whether they will prove to be true :face_with_monocle:

With regards to AI surpassing humans in terms of intelligence, AI is very good at functioning according to “knowledge”/rules that have been properly encoded into the system, as well as at discovering new “knowledge” by examining a large number of combinations (of whatever can somehow be translated into meaningful numbers). As I see it, human intelligence could also be just this (on a larger scale, in some aspects), or maybe there is more to it… consciousness, senses, emotions, etc.
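Just to illustrate what I mean by “translated into meaningful numbers”, here is a minimal NumPy sketch of a single artificial neuron (the input values and weights are arbitrary). Everything the system “knows” lives in numbers like these:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Arbitrary example values: three input features and learned weights.
x = np.array([0.2, -1.0, 0.5])   # inputs, already encoded as numbers
w = np.array([0.8, 0.1, -0.4])   # weights found during training
b = 0.3                          # bias

# The neuron's entire "behavior": a weighted sum passed through a
# nonlinearity, a loose mathematical parallel to a biological neuron firing.
activation = sigmoid(np.dot(w, x) + b)
print(activation)
```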

I also have the impression that the 1% brain usage is sort of a myth, although as humans we could definitely do better in a lot of cases :smiley:

With regard to jobs, in a recent interview Andrew Ng stated that although many jobs will become obsolete, he thinks that many more will be created due to AI. I guess, for the time being at least, most jobs require some type of human intelligence, and some of them definitely need human interaction… Also, LLMs, although very impressive and creative, are not really reliable yet.

I hope this was useful to you :smiling_face:


Thanks for the insights!


@Anna_Kay nice response (and yes, the 1 - 20% figure is not true; if anything, our brains just don't work like that. It is not as if we are just ‘avoiding hitting the gas pedal’).

I mean, 60% of the human brain is fat (Essential fatty acids and human brain - PubMed). Does that mean we're dumb/empty? (Though, if you wanted to insult someone by calling them a ‘fat head’, you might be correct.)

This also reminds me: for quite a while now I've been wanting to write an article about LLMs in the context of Plato's Theaetetus. I'd better get around to doing that.


@BharathV1 and just to give you my perspective on the question ‘Are they intelligent?’, regarding at least the current state of LLMs:

My answer would be ‘no’.

I mean, if you look at the underlying structure of self-attention, multi-head attention, and the variants of the transformer architecture used here, there is simply nothing that accounts for developing or formulating any kind of reasoning at all.

Rather, it is just that these models are so huge that what you get in the end is what I like to think of as a ‘simulacrum’, or the ‘appearance’ of knowledge, but not its ‘understanding’ or ‘awareness’.
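To see why I say that, here is roughly what a single self-attention head boils down to, as a minimal NumPy sketch with toy dimensions (real models use learned projection matrices and run this at enormous scale). It is matrix multiplication plus a softmax; there is no step that looks like explicit reasoning:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the same input into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Similarity scores between every pair of tokens, scaled and normalized.
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    # Each output is just a weighted average of the value vectors.
    return scores @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))            # 4 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Stack enough of these layers with enough parameters and data, and you get the striking behavior we see; but the building block itself is just this.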

Now, strictly speaking, I don't think that would be impossible. I believe that, at least ‘essentially’, we are in some way machines too, of a sort.

And you yourself can probably think of a human example of this: a friend, or someone you met once, who seems to memorize all the details or facts about a problem, even in meticulous detail, but at the same time you kind of gather that maybe they don't ‘fully understand’ what they are talking about.

I’m not saying they are actually an LLM (or I hope not :grin:), but maybe you get my point.

And… though it doesn't directly address your question, for some reason this also makes me think of a book I read earlier this year. There is a bit of talk about how God and religion fall into all of this, but the author's perspective is basically agnostic at this point. I quite liked it (though, then again, I also used to study Philosophy and originally wanted to get into Neuroscience).


Interesting, @Nevermnd. Thanks for sharing the book.