Does AI sentience matter to the enterprise?

Can AI become sentient? That question has been burning through technology circles for several weeks now, ever since a lone Google engineer claimed the company's LaMDA model had achieved both self-awareness and a soul.

While this is an important question, it is not something that the enterprise needs to concern itself with just yet. Even if such an algorithm were to arise, would it be all that useful in a practical sense?

Artificial awareness

AI sentience has been a topic of debate for decades, but it got a kick-start last month when Google engineer Blake Lemoine posted conversations with a chatbot that he claimed proved it was sentient. Lemoine was quickly put on leave, but it's interesting to note that this was for violating company confidentiality policies, not because his claims of sentience had crossed some kind of line.

Critics were quick to claim that Lemoine had fallen prey to the "Eliza Effect," a kind of digital anthropomorphism first identified in the 1960s, when users of early chatbots like ELIZA came to believe the programs truly understood them. While this may seem like a harmless distraction, it has in fact become a serious problem in today's connected world.

Eugenia Kuyda, CEO of chatbot developer Replika, says the phenomenon is becoming so widespread that people are starting to build real relationships with their talking avatars, sometimes to the point that they report their Replikas are being abused by the company's engineers. Most likely, this is due to users posing leading questions to the bot, but it still points to a disturbing trend: users can create companions of their choosing in order to receive more rewarding companionship than they would get from a real person.

Philosophers have been debating the nature of sentience since ancient times, of course, and we are still arguing over how much sentience should be ascribed to non-human animals. But author and lecturer Melanie Mitchell says all AI fails the sentience test in several important ways. First, it has no connection to the real world, only the language it has been exposed to. Second, it has no memory: turn it off or stop interacting with it, and it does not remember the past, which is crucial to establishing a sense of self. Like any algorithm, AI crunches numbers and computes probabilities, even when it is determining the most natural, emotive way to conduct a dialogue.

Enterprise sentience

But if artificial sentience is not real yet, why are some tech vendors touting their ability to create the "sentient enterprise"? For one thing, ambiguity in terminology has long been a marketer's dream. Still, the fact remains that even though today's AI is not sentient, it provides a number of useful capabilities to enterprise data environments. EnterpriseAI's Alex Woodie says it does wonders for call centers, providing natural interactions with an unlimited number of users simultaneously at any time. It is also showing remarkable aptitude at everything from process automation to resource management and even security.

If anything, then, the danger in the debate over AI sentience is that it distracts the enterprise from more urgent issues surrounding the technology's development and implementation. VB's senior AI reporter, Sharon Goldman, has noted the emerging legislation and regulatory frameworks taking shape in the U.S. and Europe, which could limit where, when and how AI may be used in public and private settings. There is also ongoing debate over crucial aspects of AI performance, including bias, ethics and overall control.

If we ever do reach a point where we can define sentience in relation to an artificial intelligence, it's hard to see how it would benefit the enterprise. In fact, it would most likely pose a number of problems. Sentient beings have wills of their own, after all, and if one also happened to control the mechanisms that produce goods, make sales and manage finances, that would bring a whole new meaning to the term "problem employee."

Perhaps someday we will be able to implement AI in a fashion that resembles sentience, but for the time being it is probably best if artificial intelligence remains dumb and non-self-aware.
