What is Artificial Intelligence?

And which so-called “AI” is just a sham?

Nate Nelson
Feb 17, 2022
(image by Lukas)

What is AI?

Two years ago, a reporter approached a Harvard professor with a very simple question: what is artificial intelligence?

The professor did not have a simple answer. Instead, he sympathized:

“You are right to be confused.”

AI is so familiar and yet so indistinct. We’ve seen the movies, the TED talks and the ads promoting “AI-powered _____.” Yet, if you ask yourself “What is AI?,” it’s difficult to formulate a clear definition. Is it robots? Is it machine learning? For that matter, what is machine learning?

To test yourself, consider the following:

  • Data mining. Ever since data became the new oil, companies have been building increasingly sophisticated algorithms designed to read, interpret and monetize our personal information. If Facebook automatically uses your scrolling habits to deliver advertisements you’re more likely to look at, does that count as artificial intelligence?
  • Internet of Things. When cameras embedded in traffic lights and on street corners beam to CPUs in cars and smartphones and servers at government HQs, it can seem like some kind of intelligent force is running the system. Is that the case? Is AI helping you drive to work faster?
  • Chatbots and virtual assistants. Virtual assistants are the epitome of AI. We give them human names, and talk or type to them as if they’re real. Indeed, the best VAs can infer meaning from complex sentences. Many of them, however–particularly those annoying bots you find on websites or customer service lines–are little more than cheap algorithms that scan for keywords and then spit out one of a small set of predetermined responses. Where do you draw the line between real and fake?

Misunderstanding AI is not a personal fault; it’s a cultural condition. When technological concepts become buzzwords, they take on new forms. They’re repackaged, misshapen, manipulated to sell you products. By the end of the wash cycle, all that remains is confusion–over what any of this actually is, and whether it’s going to help us or kill us.

In this article we’ll explore some common understandings of what constitutes AI, to separate the signal from the noise. We’ll begin with the most popular:

Are Humanlike Machines AI?

Sometimes, machines are made to mimic cognitive functions. They display what seem like human personalities, or perform characteristically human actions. We often refer to these machines as AI.

Most dictionary definitions follow this reasoning. The Encyclopedia Britannica describes AI as the ability of computers to “perform tasks commonly associated with intelligent beings.” Merriam-Webster defines AI machines by their “ability to seem like they have human intelligence,” or their power “to copy intelligent human behavior.”

These definitions are inadequate, even misleading. Consider the following case:

Centuries before the birth of Christ–let alone the transistor–there was King Solomon’s throne. It was “the most wonderful throne that any king ever sat upon,” adorned with precious stones, and golden statues of majestic beasts embedded with cutting edge artificial intelligence.

“As soon as [Solomon] stepped upon the first step,” wrote Rabbi Nissan Mindel:

[T]he golden ox and the golden lion each stretched out one foot to support him and help him rise to the next step. On each side, the animals helped the King up until he was comfortably seated upon the throne. No sooner was he seated than a golden eagle brought the great crown and held it just above King Solomon’s brow, so that it should not weigh heavily on his head.

Emperor Constantine VII used animatronics to inspire fear in subjects and adversaries alike. According to historian E. R. Truitt, the golden lions that flanked his throne swung their tails back and forth, and “gave a dreadful roar with open mouth and quivering tongue” as terrified subjects approached.

For millennia, humans have built machines that mimic intelligent or otherwise human characteristics. Through complex arrangements of gears, statues acted like real animals with personalities–roaring at strange guests, anointing rulers–and dolls performed human actions like speaking and writing. (Maillardet’s Automaton, for instance, is remarkable enough even today that it inspired the movie “Hugo”.) We call these “automata,” after the Greek “αὐτόματος,” meaning “self-acting, self-willed, self-moving.”

(image via My Modern Met)

Automata aren’t AI, though, right? We can see the little gears in their chests that bump and grind together to repeat specific, standardized movements. AI must be more complicated than that.

Dictionary definitions get around this problem by stating that AI must be computerized. And yet, some of the computer programs that mimic human intelligence work much like these old carnival tricks.

Are Talking Robots AI?

Nearly a decade ago, a group of academics chatted with a teenager from Odessa, Ukraine. Eugene Goostman was 13 years old. Over text chats with the researchers, he talked about things like his pet guinea pig, and how his dad was a gynecologist.

Scott Aaronson, a computer scientist, later logged his own conversation with Eugene. He began with some questions that might seem fun for a kid of Eugene’s age:

Scott: Which is bigger, a shoebox or Mount Everest?

Eugene: I can’t make a choice right now. I should think it out later. And I forgot to ask you where you are from…

Scott: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty — or, possibly, I’ve missed it?

Scott: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.

It wasn’t going anywhere. Aaronson pivoted.

Scott: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation;-)

Scott: Do you understand why I’m asking such basic questions? Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: …wait

By this time, Eugene was already famous. At a 2014 Turing test competition held at the Royal Society–where judges chat blindly with humans and machines, then guess which is which–he’d managed to convince a full third of the judges that he was a real boy.

These, mind you, were judges whose express purpose was to suss out AI. They were not caught by surprise. Goostman was simply that realistic.

Eugene Goostman was a major milestone for the field–a dent in the famous Turing test.

Except Eugene Goostman was not AI.

How Can You Tell An AI?

The first person to figure it out was the philosopher René Descartes. Back in 1637, he described the threshold that advanced machines–humanlike as they may one day be–must reach in order to be considered intelligent:

[W]e can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

The automata of Descartes’ time were clearly not intelligent. They could dance, write, even speak like humans, but they could not reply appropriately.

Eugene Goostman was capable of speaking, even replying as a person would, but he could not reply appropriately to everything that might be said. To paper over that flaw, his designers gave him the personality of a teenager–old enough to make sense, young enough to plausibly misunderstand many questions–and Ukrainian nationality, so that any inconsistencies in his speech could be chalked up to English being his second language. And whenever his answers fell short, he’d compensate by redirecting the conversation. These little tricks got him far in the competition but, open him up, and you’ll see that Eugene is little more than a modern Maillardet’s Automaton.

This is why AI is not simply a machine that mimics human intelligence. Rather, it must be able to do the following:

  1. Receive data
  2. Learn from that data
  3. Use what it learns to adapt to new data and achieve specific goals
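
To make the contrast with keyword bots concrete, here is a toy sketch of a program that meets all three criteria. It is an illustrative assumption–a bare-bones perceptron on invented “spam” data, nothing resembling a production system–but it genuinely receives data, learns weights from it, and applies what it learned to inputs it has never seen:

```python
# A toy learner satisfying the three criteria: it receives data, learns
# weights from it, and uses them to classify new, unseen inputs.
def train(examples):
    """examples: list of (features, label) pairs with label +1 or -1."""
    weights = {}
    for _ in range(10):                     # a few passes over the data
        for features, label in examples:
            score = sum(weights.get(f, 0.0) for f in features)
            if score * label <= 0:          # misclassified: adapt the weights
                for f in features:
                    weights[f] = weights.get(f, 0.0) + label
    return weights

def predict(weights, features):
    """Classify a new input using the learned weights."""
    return 1 if sum(weights.get(f, 0.0) for f in features) > 0 else -1

# 1. Receive data: a tiny labeled set (spam = +1, not spam = -1)
data = [
    (["win", "money", "now"], 1),
    (["meeting", "tomorrow"], -1),
    (["free", "money"], 1),
    (["lunch", "tomorrow"], -1),
]
w = train(data)                             # 2. Learn from that data
print(predict(w, ["win", "free"]))          # 3. Adapt to new data -> prints 1
```

Unlike a keyword bot, nothing in this program hard-codes which words mean “spam”; swap in different training data and the same code learns a different behavior.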

Thus the algorithms that mine your personal data on social media, and the traffic systems that feed your GPS app, may actually be more AI than the chatbots you talk with online and on customer service lines. It means that most of the robots in our world today are not AI–even when they have faces and names–but some of the software you hardly notice on your devices is.

Artificial intelligence is not faux human intelligence. It is a new kind of intelligence. A new category.
