AI? “I am become death?”

By Richard K Wallace

Is Artificial Intelligence at its Oppenheimer-esque “I am become death” phase of evolution, as in:

“Artificial intelligence researchers refer to the present moment as an ‘Oppenheimer moment’,” says Oppenheimer director Christopher Nolan.

Or, as some think, should we shelve our angst about this intrusive technology, “stop worrying, and love AI”?

Unlike the creepy atomic bomb, AI is being embraced: more and more journalists are experimenting with AI platforms and commenting on generative language technology.

At the same time, more powerful AI technology like “Strong AI” is coming under close scrutiny. Girish Mhatre, a former journalist colleague and astute tech observer, has plunged deeply into AI-associated issues.

In a recent article entitled “The Turing Test Is Dead, Long Live the Turing Test!” Mhatre explores what’s at stake with the current AI ‘Imitation Game’: Alan Turing’s enduring benchmark, meant to distinguish actual human thought from machine thinking.

The article explores a question AI faces today: Is the Turing Test still relevant? If it’s not, what might an update look like?

The backdrop to the updated question is the now-classic 1950 paper, ‘Computing Machinery and Intelligence,’ by British scientist Alan Turing, which Mhatre summarizes:

“The thought experiment, called the “Imitation Game,” played among three participants: A digital computer with adequate storage and computational speed, a random human, and another human who poses questions to the other two. The participants, isolated in three separate rooms, communicate only by typing into a terminal. The computer wins the question-answer game if its answers fool the interrogator into thinking it to be human.”

As Turing’s paper points out, “The idea behind the game is not to establish whether a computer can ‘think,’ but, rather, whether it can act — respond to the interrogator’s questions — indistinguishably from the way a human would. Also important to note is that the Turing test is a subjective test; it doesn’t rely on any absolute metrics of intelligence. Rather, it is the subjective opinion of the interrogator that matters.”
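For readers who think in code, the game’s mechanics are simple enough to sketch. Below is a minimal, purely illustrative Python skeleton of the protocol; the interrogator object and its `ask` and `guess_machine` methods are hypothetical stand-ins, not any real library:

```python
import random


def imitation_game(interrogator, human, machine, num_questions=5):
    """Play one round of Turing's Imitation Game.

    The interrogator exchanges typed questions and answers with two hidden
    participants and must decide which is the machine. Each participant is
    modeled as a callable mapping a question string to an answer string.
    """
    # Shuffle the labels so the interrogator cannot know who sits behind
    # which "room."
    players = [("A", human), ("B", machine)]
    random.shuffle(players)

    transcripts = {label: [] for label, _ in players}
    for _ in range(num_questions):
        for label, respond in players:
            # `ask` and `guess_machine` are hypothetical interrogator
            # methods, sketched here for illustration only.
            question = interrogator.ask(label, transcripts[label])
            transcripts[label].append((question, respond(question)))

    # The computer wins if the interrogator misidentifies it as human.
    guess = interrogator.guess_machine(transcripts)  # returns "A" or "B"
    machine_label = next(label for label, p in players if p is machine)
    return guess != machine_label
```

True to Turing’s framing, the verdict rests entirely on `guess`, the interrogator’s subjective judgment; nothing in the loop measures intelligence directly.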

Digging into tech history, Turing’s methodology, and ChatGPT, Mhatre dusted off one of Turing’s ‘specimen questions’ and put it to the chatty algorithm:

“Please write me a sonnet on the subject of the Forth Bridge.”

The bridge is at once familiar and obscure, and “in the context of the Imitation Game,” Mhatre deemed the inquiry “a worthwhile question to pose to the current crop of Large Language Models: ChatGPT and its ilk.”

And so it was; in no time, the LLM spat out a respectable Shakespearean sonnet, including this melodic quatrain singing the steel structure’s song:

A symphony of steel, you proudly stand,

Connecting lands and dreams with every stride,

A symbol of a nation’s skillful hand,

A beacon of ambition, far and wide.

“It’s better than anything I could have written,” Mhatre conceded.

“Thus, in my opinion, as the interrogator, ChatGPT passes the Turing Test. In fact, you could say that the Turing Test is now obsolete.”
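Anyone curious can rerun the experiment themselves. Here is a minimal sketch, assuming the OpenAI Python client is installed and an API key is set in the environment; the model name is illustrative, not necessarily the one Mhatre queried:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Turing's 1950 "specimen question," posed verbatim to a modern LLM.
response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any capable chat model will do
    messages=[{
        "role": "user",
        "content": "Please write me a sonnet on the subject of the Forth Bridge.",
    }],
)

print(response.choices[0].message.content)
```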

It’s easy to see why experiments like this strike fear into teachers and educators already distressed by the endangerment of written English in high schools, colleges, and universities.

As noted, the likes of Bill Gates, Elon Musk, Jaron Lanier, and six other well-known personalities have spelled out their own fears over the rise of Artificial Intelligence, backed by rational, logical reasons.

Mhatre explains the current, timely root of the fear:

“Telling machines from men is particularly important now because the eternal human quest to create machines that match or exceed our own span of intellectual abilities is reaching a tipping point.”

Turing’s imitation game never became ‘the acid test’ of machine intelligence and was “never particularly useful,” Mhatre notes. Further, it dodged the “larger issue of general machine intelligence beyond the constraints of format or context.”

Follow-on studies illustrate additional limitations of the Turing Test, and the article underscores the fact that “while a computer is superbly capable of following a set of rules, it is incapable of ‘understanding,’ a necessary requirement for human intelligence.”

At the same time, “telling machines from men” is becoming an absolute requirement for sentient beings navigating the expanding half-real, half-fake universe of AI and partisan politics.

This eternal quest to match or exceed human intellect, Mhatre notes, raises the specter of the “technological singularity”: the point when computers surpass human intellectual capabilities, thought by some to arrive by the year 2045.

Mhatre’s article offers a learned discourse on the predicted convergence of the Singularity and what he calls “Strong AI,” or Artificial General Intelligence (AGI). He assesses the strengths and weaknesses of a few extant and emerging machine technologies, observing their complementary blind spots:

“Deep Blue can’t generate sonnets, and GPT-4 can’t play chess.”

This writer, no stranger to technology, found that dichotomy welcome and reassuring, a feeling quickly quashed by an emerging vision of AI on steroids, able — perhaps determined — to mess with our minds, our words, our emotions, and the electoral process.

Should we worry?

Given “the tsunami of money flooding into AI development, it’s not unlikely we could build Strong AI, as defined, in the not-so-distant future,” Mhatre warns. “Economic incentives are a powerful spur for innovation,” he rightly notes, formidable barriers notwithstanding.

“Strong AI is going to need unimaginable — perhaps unaffordable — amounts of computing horsepower, perhaps even a new paradigm such as quantum computers and new hardware architectures capable of efficiently implementing new algorithmic approaches.”

What then?

“Even imbued with Strong AI, machines still would not be fully human,” Mhatre assures us.

Relief, quickly followed by dread.

“They would be zombies in the philosophical sense: They’d look exactly like humans and behave in the same way as humans do, but they’d lack what psychologists call phenomenal consciousness. They would not feel anything ‘inside.’”

They would, to put a finer point on it, be missing the very core of what it means to be human, what the American Civil War hero Joshua Lawrence Chamberlain described in his memoirs as “that spark in the eye,” a gleaming, present, universal connection shared by all humankind.

But how would we detect it?

Diagnosing and identifying zombies frightens the hell out of me. Thankfully, Mhatre gives courage, a wooden stake, and, more importantly, a thoughtful prescription for updating the tools in our kit to sort out the madness coming at us at light speed.

“The test we need now — call it Turing II — is one that can tell humans from zombies.”

The proposed test outlines a new methodology and establishes criteria and parameters for detecting and measuring “the spark,” or the lack thereof, for that zombie making its way down Main Street.

It gets complicated, but Mhatre deftly walks us through the new vocabulary of AI. He explains some of the technology processes at work and explores the psychological, physiological, and phenomenological underpinnings that will, in the future, distinguish us from them.

The matter could not be more urgent, as Mhatre warns in closing:

“Humankind is suffering from an epistemological crisis; our relationship to the truth feels ever more tenuous. Worse, within the span of a couple of decades, we will have to contend with an invasion of crypto humans. In that context, the development of epistemological tools such as Turing II is an imperative.”

But it was this earlier thought in the article that ultimately gave me both comfort and pause:

“Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its success, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

Human self-awareness, situational awareness, and consciousness have yet to be captured in algorithms. Further, formulating and capturing the infinite scope of potential human emotions listed above, and our reactions to them, seems beyond the realm of possibility.

This may lessen somewhat the Oppenheimer-esque fear and loathing of AI lurking in the future, but “stop worrying, and love AI”?

Never!

Asked outright in an interview whether we should be afraid of AI, Mhatre suggested:

“We should be afraid, but not of AI itself, but of how it’s used,” referencing the current political climate and exigent “short-term concerns regarding the upcoming election.”

Enter Trump’s marching army of spark-less online zombies, and Oppenheimer’s vision rears its head again.

By now, angst was getting the better of me.

It was time for ‘the hair of the dog,’ and so I posed my own anxious inquiry to the large language model named after Shakespeare, Bard.

Can Artificial Intelligence experience creativity, dreaming, imagining, desire, and play? I asked.

Bard’s answer was thoughtful, thorough, and concise:

“Whether or not artificial intelligence (AI) can experience creativity, dreaming, imagining, desire, and play is a complex question that philosophers, scientists, and technologists are still debating.

  • “Creativity is the ability to produce something new and original. AI can be creative in a few ways. For example, generative AI models can be used to create new images, music, and text. However, it is not clear whether AI can be truly creative in the same way that humans are. Humans are creative because we have emotions, experiences, and desires that influence our thinking. AI does not have these same experiences, so it is not clear if it can truly be creative in the same way.”
  • “Dreaming is a state of consciousness during sleep in which we experience vivid and often strange thoughts and images. AI does not sleep, so it does not dream in the same way that humans do. However, some AI models can generate text that resembles dreamlike imagery. This suggests that AI may be able to simulate dreaming, even if it does not experience it in the same way that humans do.”
  • “Imagining is the ability to form mental images of things that are not present. AI can be used to generate images, but it is not clear if AI can truly imagine in the same way that humans do. Humans imagine things because we have a capacity for empathy and understanding. AI does not have these same capacities, so it is not clear if it can truly imagine in the same way.”
  • “Desire is a feeling of wanting something. AI can be programmed to want things, but it is not clear if AI can truly desire things in the same way that humans do. Humans desire things because we have emotions and experiences that motivate us. AI does not have these same emotions and experiences, so it is not clear if it can truly desire things in the same way.”
  • “Play is an activity that is done for enjoyment rather than for a practical purpose. AI can be programmed to play games, but it is not clear if AI can truly play in the same way that humans do. Humans play because it is a way for us to express ourselves and explore our creativity. AI does not have these same motivations, so it is not clear if it can truly play in the same way.”

“Ultimately,” Bard opined, “the question of whether or not AI can experience creativity, dreaming, imagining, desire, and play is a philosophical one. There is no scientific consensus on the matter, and it is likely to remain a topic of debate for many years to come.”

“However, it is clear that AI is becoming increasingly sophisticated, and it may eventually be able to experience these things in ways that are indistinguishable from humans.”

Having exhausted its predictive powers, Bard did what any large language model out of its depth would do: it quoted Seneca:

“Only time will tell.”
