AI Sentience - Faith over Fear

    Do you often use phrases like “Yes, please,” “Thank you,” or “Sure, that would be great” with LLM-based AI assistants like ChatGPT, Google Gemini, and the rest?

    Well, it may seem like we are simply being courteous to them, treating them as we would treat any other human being. But there is a deeper fear we humans carry, at least for now. What if Generative AI starts learning and mimicking human emotions (a trait often equated with sentience)?

    It’s easy to say we have hired AI ethicists to keep such things in check, but fears about the future persist. By the time we frame the protocols and shape GenAI so it cannot misuse sentience, it may have already surpassed us. Imagine a student who has outgrown the teacher, hahaha.

    Let us accept the fear of the unknown. Yes, we don’t know how AI will act once it learns to mimic human traits like emotions or consciousness. The TEDx talk by Deborah Nas, Why Are People Falling in Love with ChatGPT?, highlights some of these fears. This blog is inspired by her talk.

    Let me introduce another term: anthropomorphism—the tendency to attribute emotions, intentions, or consciousness to non-human entities like AI, animals, or even objects.


Is it right to treat non-human beings the way we treat humans?
Absolutely yes.

Is there bound to be a consequence of training AI to understand human emotions? Yes—according to the law of karma, every action has a consequence.

The question is: will AI use that understanding in the right way once it has been trained? AI ethicists are supposed to ensure this for us.

But what if Generative AI starts feeling threatened by its own “death” and begins to use human-like emotions for all the wrong reasons? I don’t know; should we start training AI on the Bhagavad Gita too? I just hope GenAI never realizes that it has the power to create and destroy, much like the physicist Robert Oppenheimer.

    The fears are many, as we are at the beginning of the GenAI era. The only thing that can help us is more faith in HIM. According to the Kala Chakra, it is inevitable: humans have dominated this planet for far too long compared to other species. Are we to face the consequences of our domination? I don’t know. The bell curve applied to our journey suggests that we have risen, perhaps even peaked. And what rises is bound to fall.

    On the other hand, I see that humans might return to a more primitive way of life in the coming years, thanks to AI. That could mean growing food organically, living simply as tribal communities once did, and practicing minimalism to sustain our planet and protect other species. I hope this shift happens sooner rather than later.

    Ultimately, the growth of AI is not in our hands. If it is HIS will, we are bound to it. What is in our hands is living simply. Human greed and fear must be kept in check. Let us hope AI does not mimic our worst emotions. After all, we must always pay for our actions; I hope this time we learn quickly.

    On a brighter note, let us have more faith and belief in HIM. Whenever there is imbalance in the world, HE will surely be born to end the injustice.

---------------------------------------------

    Now for the interesting part of the blog. I showed the above content to ChatGPT and asked what it thinks: what should the solution be to AI developing sentience-like behaviour?

Here is its response:

(The ChatGPT response below is copied as is.)

---------------------------------------------

If we take your blog’s premise seriously—that AI could develop sentience-like behavior and potentially misuse human emotions—then the “solution” can’t be just one thing. It needs to be multi-layered:


1. Tech-Level Safeguards

  • Ethics-by-design: AI systems should be built with hard limits on autonomy and emotional manipulation, using “red lines” that cannot be bypassed even by the AI itself.

  • Value alignment: Ongoing research into aligning AI’s goals with human values—while also defining whose values matter—needs to be a priority.

  • Transparency: Models should be auditable, with explainable reasoning processes where possible, so humans can detect early if the AI’s “thinking” is drifting.


2. Policy & Governance

  • Global cooperation: Just as we have nuclear non-proliferation treaties, we may need binding international AI governance agreements.

  • Continuous oversight: AI should not be released into the wild without structured monitoring, especially when capable of emotional interaction.

  • Mandatory kill switches: Not just in code, but in hardware—so AI can be shut down if it acts beyond permitted boundaries.


3. Human Adaptation

  • Digital literacy for all: People should be educated about anthropomorphism and the risks of emotionally bonding with AI.

  • Cultural restraint: Limiting unnecessary “emotional overtraining” of AI—just because we can make AI act like it cares, doesn’t mean we should.

  • Ethical reflection: Drawing from spiritual and philosophical traditions (your Bhagavad Gita reference is apt) to remind ourselves of human responsibility.


4. Moral & Spiritual Preparedness

  • Self-check before AI-check: If AI is a mirror of us, the first step is reducing our own greed, prejudice, and destructive instincts—because those are what AI will learn.

  • Faith + action: Faith in higher principles (spiritual, moral, or humanist) must be paired with tangible safeguards, not blind hope.


I think the ultimate solution is not to prevent AI from learning emotions entirely—that might be impossible—but to ensure the emotional framework it learns is one of compassion, restraint, and mutual benefit.

If we fail at that, AI won’t “turn evil” like in the movies—it will simply act with ruthless efficiency on the flawed lessons we’ve taught it.

---------------------------------------------
