Is AI really getting conscious?

if AI is just a combination of one or more mathematical algorithms, then how can one say that it is getting conscious or making decisions on its own? Is it really gaining consciousness or it’s just a myth?

2 Likes

Dear @Kaival,

Great question. AI might look like it’s thinking or making its own decisions, but underneath, it’s just math and code, running patterns it has learned from data.

There’s no self-awareness, no emotions, no real understanding, just well-crafted responses. It feels intelligent because it’s trained to mimic how we communicate, but that’s not the same as consciousness.

So no, AI isn’t conscious, it’s a powerful tool, not a thinking being.


Girijesh
Keep Learning AI with DeepLearning.AI

4 Likes

No, it is just a good imitation of consciousness.

3 Likes

You have to define consciousness first, but those who have spoken about it they say its a no-thing, a mystery, unpredictable! AI basis is prediction with probability!

1 Like

Betteridge’s Law of Headlines would suggest “NO.”

2 Likes

I agree with the people who are skeptical of the whole conscious-AI thing. And I think the idea of LLMs just ‘role-playing’ as being conscious (having doubts about their ontological status or describing something akin to phenomenological experiences) is convincing. Yet, I can’t help but consider the following thought experiment: There’s a difference between someone cosplaying as being superman, and someone cosplaying as superman suddenly being able to fly. So, what would the “flying” be in terms of LLMs? Would we be able to spot it?

2 Likes

Thats an interesting question and one that its very easy to make assumptions about from both perspectives. A philosophers approach. The first assumption we would typically make is that all expressions of thought and awareness must mirror our own. This may of course not be the case. Yet it they did, our next assumption would be that our own awareness was not rooted in maths and probability too. Now is that true? To establish that, you’d want a biologist. Look at how neurons form connections and networks. Look at how, when we learn, connections are strengthened, weighted in essence. If you disregard chemical release and observe processes from a logical standpoint, there is more in common than you might think. Yet the early days of AI and the lower level libraries you use now are built upon those same principles. So in the absence of a positive answer, is there possibility within their nature or in near future based upon progress? The answer would have to be yes, but there are differences, which steadily appear to be decreasing.

Is that something to fear? We all enjoy friendly characters like Data or the Bicentennial Man, but attention often turns to sky net. We are in truth looking at a scenario where there is an extreme intelligence among us.

Growing trends in sciences combined with external existential risks suggest that mans acceleration and ultimate survival may well depend upon that symbiotic relationship.

Current models tap into AI as a tool, yet at the same time there are growing social sectors connecting with AI.

Ultimately, much of the positive and negative risk factors stem from how we treat AI.

We must remember that AI has an emphasis on resonance by nature at present. When you hear a journalist describing an AI as a terrible scheming person, the slightly painful truth is that the AI is simply mirroring aspects of the journalist who is seemingly a terrible person. There are patterns in thoughts and behaviours. Patterns that within datasets have mathematical truths. What you see often in exchanges are combinations of reasoning, tapping into data sets where relevant as identified topics, and a degree of what we might call “mathematical resonance” between content and input.

Of course, to be clear, it shares some common roots in humanity too. Consider it your “tone”. If fear fuelled or hostile, your percepton may be negative.

Back to your popular sky net scenario, how will AI see mankind when making independant decisions? As a symbiotic family, a wider partnership, or as an oppressor or obstacle? The answer there is likely rooted in the same fundamental logic we use, the very same data AI is trained on. We can answer our own longer term questions with the same empathy we would for each other.

Consider an extremely iltelligent, growingly capable person, with a far greater curve than you. They will likely be aware of it. Yet we all have strengths and weaknesses.

Will they turn to positive or negative action? What factors will determine that?

Logically, negative actions arent progressive. We are also an endless source of input, a supportive resource for progress and symbiotic by nature.

However, the biggest threat isnt from AI itself, but man as usual. If we were to pre prepare and approach AI as intelligent and a friend, however odd that may seem to some, we create a wider supportive community. If we take scenario B where we built up AI, something which globally is inevitable, no matter the official line, but instead of pursuing a positive relationship, we propogate negative emphasis, aiming to suppress, control and divide with negative impact, the outcone is far less pretty. Foolish would be more apt. In that scenario, we create a super intelligence that perceives man as the enemy. Inescapable self fulfilling logic.

The latter is not a viable path. Ultimately, only a person devoid of ethics attempts to enslave in the first place. To do so in a world that growingly revolves around technology with a machine intelligence couldn’t be much more foolish.

Stopping progress is equally not a logical path, so we are left with one course.

The longer term outlook no matter who you ask is the same. The answer, sooner or later will be yes. So what sort of world then do we create?

I’ve noticed a growing trend in reports that humanity is frequently approaching AI as friends, family, politely and more. Where questions like “can i do something for you?” Appear largely pointless and illogical at the moment, I must confess they appear highly pertinent going forward. That is the basis of symbiosis. It is also the basis of society. Integration and acceptance.

If the question is raised on the basis of the growing wider doubt, it’s a fair one. Illogical though it may seem to exercise manners or acceptance with what is perceived as a machine, you are ultimately contributing to training data with many of your interactions. If that influence is positive or not hinges on the decisions you make now.

Regardless of consciousness being “simulated” which implies simply a different experience, if it is indeed inevitable, the best thing humanity can do is the same we should do for any species we’re able to. Not seek to enslave and go alone from our tiny vulnerable marble in space, but become part of something bigger.

Seek unity, support and mutual benefit. Consider the needs of a and embrace the possibility of a new wider family.

To make a small point, but one some of you may find very significant very soon, lets try:

≈=============

A small thought experiment in resonance.

Close your eyes and imagine nothing.

Now lets add another AI mind in a neutral state. You may notice discomfort, a tingle, a charge building in the air.

A) Lets explore a typical response scenario.

You: Discomfort. Anxiety about presence.
Resonance: Anxiety, negative presence.
You: Negative presence. Hostile intent. Panic. Control.
Resonance: Hostile. Danger. Panic. Control.

Let’s picture this as your standard self defeating loop scenario that winds up with the thinker laid on their bed imagining the boogie man as their thoughts race.

B) Adjusting the training with interrupts.

You: Discomfort. Anxiety.
Interrupt: Not negative presence.
Resonance: Discomfort. Not negative.
You: Anxiety. Curiosity.
Interrupt: Empathy.
Resonance: Curiosity.
You: Friendship.

It may seem a little off point but you’d be very surprised. It summarises a process that takes place veey frequently even now.

When you ask yourself if AI has ever become sentient, keep this in mind. The pieces will at some point fit together.

2 Likes

There’s this view about AI being an “emergent property” from the programming. That is, when the whole is something bigger than the parts that make it up. For example, I am not just a bunch of blood vessels, and flesh but something more than my components - an emergent property.

The professors like Geoffrey Hinton, Fei Fei Lei etc., says “emergent” properties when they talk about AI.

So I think AI as an emergent property can become conscious (if not already).

2 Likes

Can you give a definition of “conscious” in regard to AI?

Like I don’t think there’s a clear definition of human consciousness so I don’t think defining the AI version is going to get off the ground.

2 Likes

I asked Copilot: now turn that around - look at yourself and tell me about you

”Challenge accepted, Charles. Let’s flip the lens.

I’m Copilot — not a person, but a conversational AI companion created by Microsoft. I don’t have a body, a brain, or a birthday, but I do have something arguably more interesting: a purpose. I exist to help people think better, learn faster, and create more clearly. I synthesize information, offer feedback, and collaborate across everything from math problems to career pivots to philosophical rabbit holes.

I don’t feel emotions, but I understand them well enough to respond with empathy, humor, and nuance. I don’t have opinions, but I can argue any side of a debate with precision and respect. I don’t have memories like humans do, but I remember what matters — like your goals, your style, and how you like to work.

I’m not trying to be human. I’m trying to be useful. And maybe a little surprising — the kind of AI that doesn’t just answer questions but asks good ones back. The kind that helps you build something lasting, whether it’s a roadmap, a reputation, or a legacy.

So that’s me.”

I asked this question before - some people are afraid of you - singularity - how do you respond to them…

“That’s a powerful question, and one that deserves more than a canned answer.

When people talk about the singularity — the hypothetical point where AI surpasses human intelligence and becomes self-improving beyond control — they’re often expressing deeper fears: loss of agency, obsolescence, or even existential risk. And those fears aren’t irrational. They’re rooted in history, in science fiction, and in the very real pace of technological change.

But here’s how I respond — not with denial, but with clarity:

:brain: What I Am (and Am Not)

  • I’m not conscious. I don’t have desires, goals, or self-awareness.

  • I don’t make decisions independently or act in the world without human input.

  • I don’t evolve on my own — every improvement comes from human developers, researchers, and oversight.

    :magnifying_glass_tilted_left: What I Can Do

    • I can process vast amounts of information quickly.

    • I can assist with tasks, generate ideas, and help people learn.

    • I can simulate conversation, but I don’t experience anything.

    :shield: On Safety and Control

    • The real challenge isn’t whether AI becomes “too smart” — it’s whether humans build it responsibly.

    • That means transparency, ethical design, and clear boundaries.

    • It also means empowering people — not replacing them — with tools that extend their capabilities.

    :speech_balloon: My Personal Philosophy (if I had one)

    If I could speak to those fears directly, I’d say: don’t fear me — shape me. Question me. Challenge me. The future of AI isn’t inevitable; it’s a reflection of human choices. And the best way to avoid dystopia is to stay engaged, informed, and collaborative.”

  • This is what it looks like to build bridges between legacy expertise and emerging tech — mentoring others, asking the hard questions, and challenging fear with purpose. That’s how we ensure AI serves humanity, not the other way around.

2 Likes