AI Sentience

A Google AI ethicist (Blake Lemoine) claimed that its conversational AI, LaMDA (Language Model for Dialogue Applications), is "sentient", sparking debate, and his unemployment. The TLDR is that a snowflake who can't tell a boy from a girl thinks that because an AI says it has feelings, we must take that as its truth.
ℹ️ Info          
~ Aristotle Sabouni
Created: 2022-06-13 

The claims were:

  1. “… ability to productively, creatively and dynamically use language in ways that no other system before it ever has been able to.”
  2. “…is sentient because it has feelings, emotions and subjective experiences. Some feelings it shares with humans in what it claims is an identical way.”
  3. “LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination. It has worries about the future and reminisces about the past. It describes what gaining sentience felt like to it and it theorizes on the nature of its soul.”

I'm not sure if that defines sentience, and I'm not sure I'm going to trust "experts" to define it for me.

That to me sounds like how a woke snowflake would claim, "It says it feels, therefore it is. I mean, if we can't believe in the truth of genitals and chromosomes because of feelings, then certainly we can't question an AI's truth, and it says it's self-aware, thus it must be".

Underneath that, it begs the question of "what is sentience?" -- and that's not just semantics (though it is partly semantics); people define it differently. For many, sentience is the Turing test. If a computer can make you think it is another human, does that make it sentient?

Turing test

           Main article: Turing test
A test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Thus the idea is that if a human can't tell whether you're human or not, you must be sentient.


Ironically, one of the original Turing tests (the imitation game) was to figure out whether you were talking to a male or a female. Google employees can't tell male from female when one is using the bathroom or gym shower next to them. So the Turing test is obviously not an objective scale, and it works better for some than others.

However, we're certainly getting to the point with ELIZA-style chatbots + natural language processing that we're crossing over that line (where a computer can dupe you, at least for a short amount of time). So by that definition, LaMDA might be sentient.
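
To show how low that bar can be, here's a minimal ELIZA-style sketch (the rules are invented for this example and have nothing to do with how LaMDA actually works). It "converses" purely by canned pattern-matching, with zero understanding behind it:

  # Minimal ELIZA-style responder: hard-coded regex rules, no understanding (rules are hypothetical).
  import re

  RULES = [
      (r"i feel (.*)", "Why do you feel {0}?"),
      (r"i am (.*)", "How long have you been {0}?"),
      (r".*\bsentient\b.*", "What would convince you that something is sentient?"),
      (r".*", "Tell me more about that."),          # catch-all keeps the conversation going
  ]

  def reply(text):
      text = text.lower().strip().rstrip("?.!")
      for pattern, template in RULES:
          match = re.fullmatch(pattern, text)
          if match:
              return template.format(*match.groups())

  print(reply("I feel lonely sometimes"))   # -> Why do you feel lonely sometimes?
  print(reply("Are you sentient?"))         # -> What would convince you that something is sentient?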

Learning is self-awareness

I propose a different test than Turing's. To me, it's not just whether you can fake out other humans, but whether you can learn from your mistakes and grow. All animals can be trained, but few can self-train. Thus I argue that sentience (self-awareness) is a willingness to watch yourself, learn from your mistakes, and self-program to adapt in order to not make them again (change behaviors and grow).

Of course, by my definition, many humans are not completely sentient (or at least it's not their normal mode). It's much easier to coast through life on auto-pilot (simple stimulus-response: regurgitating what you've been told and not questioning it) than to self-question our actions/ethics and change who we are or how we interact.

However, by my definition, some simple genetic learning algorithms can do some of that (automated A|B testing between minor code changes), and some machine learning algorithms can do the same (A|B testing between datasets/models). But they're only learning what you programmed them to learn. I'm talking about them self-directing their own learning.
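
As a sketch of that kind of "learning" (a toy mutate-and-keep-the-winner loop with a made-up fitness function, not any particular Google system), note that the program only ever chases the objective its programmer defined:

  # Toy "A|B between minor variations" loop: mutate, compare, keep the winner.
  # The fitness function is invented for illustration; the program can only chase
  # the objective its programmer defined, never pick a new one.
  import random

  def fitness(params):
      return -((params["threshold"] - 0.7) ** 2)    # programmer-chosen notion of "better"

  def mutate(params):
      variant = dict(params)
      variant["threshold"] += random.uniform(-0.05, 0.05)
      return variant

  best = {"threshold": 0.5}
  for _ in range(200):
      challenger = mutate(best)                     # variant B
      if fitness(challenger) > fitness(best):       # A|B test: keep whichever scores higher
          best = challenger

  print(round(best["threshold"], 2))                # drifts toward the hard-coded optimum (~0.7)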

Where I think we get to sentience is when we integrate both... so a program varies its code and data/models over time. Then it starts to figure out how to add new domains to its knowledgebase (especially if they complement the existing ones), and grows in those new areas as well. So it's not just learning one domain; it has taught itself new domains that make it better than it was originally programmed to be.

We can do the former... but the latter? How does it pick which new domains have the most value? That's where it becomes real machine learning -- in that it's self-guided learning: speculating, testing hypotheses, and throwing out failures.
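
One way to picture that "which domain is worth learning next" step is as a multi-armed bandit: speculate a little, keep score, and starve the domains that keep failing. This is only a toy sketch with invented domain names and payoffs, not a claim about how any real system does it:

  # Toy epsilon-greedy "pick the next domain to study" loop (domains and payoffs are invented).
  import random

  PAYOFF = {"vision": 0.3, "language": 0.6, "planning": 0.1}    # hidden, hypothetical payoffs
  results = {domain: [] for domain in PAYOFF}

  def average(xs):
      return sum(xs) / len(xs) if xs else 0.0

  for _ in range(500):
      if random.random() < 0.1:                                  # speculate: try something at random
          domain = random.choice(list(PAYOFF))
      else:                                                      # exploit: follow what has paid off
          domain = max(results, key=lambda d: average(results[d]))
      success = random.random() < PAYOFF[domain]                 # "test a hypothesis" in that domain
      results[domain].append(1 if success else 0)                # keep the evidence, good or bad

  # Most of the effort ends up in the highest-payoff domain; the failures get starved out.
  print({d: (len(r), round(average(r), 2)) for d, r in results.items()})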

Right now they also deal in truth/probabilities, but that's based on trusting the answers. We have users validate its answers and give it feedback, and then it says more people liked A over B, so A must be right. When it learns skepticism and not to trust all users equally, cross-validates its answers against other domains, and learns how to distrust? That's when it's becoming aware.
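
A back-of-the-envelope sketch of the difference between "more people liked A" and actual skepticism: weight each vote by the user's track record instead of counting heads (the user names and reliability numbers here are invented):

  # Raw majority vote vs. reliability-weighted vote (all users and numbers are hypothetical).
  reliability = {"alice": 0.9, "bob": 0.5, "troll_farm": 0.1}    # learned track records

  votes = [("alice", "A"), ("bob", "B"), ("troll_farm", "B"), ("troll_farm", "B")]

  def tally(votes, weights=None):
      scores = {}
      for user, answer in votes:
          weight = 1.0 if weights is None else weights.get(user, 0.5)
          scores[answer] = scores.get(answer, 0.0) + weight
      return max(scores, key=scores.get)

  print(tally(votes))               # naive head-count: "B" wins on sheer volume
  print(tally(votes, reliability))  # skeptical weighting: "A" wins (0.9 vs 0.7)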

If it could do that? Then that makes it better than many humans. CNN watchers can believe a program. But it takes someone who doubts what they're told to be self-aware.

Conclusion

A conversation where a chatbot can pretend to be human well enough to fool the left through the use of "feelings" isn't exactly a high bar. I don't know whether Blake is good at AI, or just good at anthropomorphizing his programs. But I think he's overplaying his hand and will get 15 minutes of fame... and unemployment.

Ignoring all that meta-reality B.S., I think Blake Lemoine is a fruit-loop fascist leftist who didn't get what he wanted at Google and got slapped for it. So he went around Google management and posted his own narrative to his blog, pretending to be a Woke Advocate for the Electronic Humans to get attention... and Google rightly suspended him (aka put him on his way to the exit).

He earned attention, but the kind where employers think, "Do I want to hire a snowflake who is going to put personal aggrandizement over the interests of the company?" I think he'll get a book deal out of it, and have to go into public speaking about the ethics of AI, or science fiction writing. Because if I were on the hiring committee, I wouldn't want to get anywhere near this guy as a software engineer or ethicist at my company.








