On the 21st of September, the annual Loebner Prize competition took place. The event gives programmers and their machines the chance to trick judges into believing their computers are human. This sounds indulgent – an exercise in clever pretence – but the competition serves a deeper purpose.

By Phoebe Vowels-Webb

For around a century there has been growing interest in manufacturing sentience, and this competition is just one platform for testing our progress in the quest for AI. The entrants are assessed using the Turing test, which is based on the work of Alan Turing and built on his claim that “a computer would deserve to be called intelligent if it could deceive a human into believing that it was human”. Machines that trick at least 30% of the judges pass (though some believe the pass mark should be 50%).
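
To make the pass criterion concrete, here is a minimal sketch in Python – with entirely made-up judge verdicts – of how such a tally might be computed:

```python
# Minimal sketch of tallying a Turing-test result. The verdicts below are
# hypothetical: True means that judge was fooled into thinking the machine
# was human.
judge_verdicts = [True, False, True, False, False,
                  True, False, False, False, True]

PASS_THRESHOLD = 0.30  # the commonly cited 30% mark; some argue for 0.50

fooled = sum(judge_verdicts) / len(judge_verdicts)
verdict = "pass" if fooled >= PASS_THRESHOLD else "fail"
print(f"Fooled {fooled:.0%} of judges -> {verdict}")
```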

Unfortunately, or perhaps fortunately, none of the machines managed to fool the judges this time. But does this mean that we have been unsuccessful in developing true AI?

For despite the many derisive articles about how machines will never match the complexity, and therefore the intelligence, of humans, the fact remains that machine intelligence has come astoundingly far in the past few years. WolframAlpha – the answer engine designed to bring expert-level knowledge straight to users – is just one of the most recent developments. Apple’s Siri service perhaps goes even further in terms of intelligence: it can make (sometimes insulting) jokes and chat on a basic level.

The most recent champion of the Turing test is Eugene Goostman. Its creator, Vladimir Veselov, attributes its success to giving the machine the persona of a 13-year-old, guinea-pig-owning boy: like the machine, a teenager wouldn’t know everything or spell as well as an adult, so Eugene’s supposed background made judges more likely to let odd or uninformed answers slide. Eugene passed the Turing test at the University of Reading by fooling 33% of the judges. Other machines have passed too – PC Therapist and Cleverbot have both cleared the 50% mark.

But already there is an obvious problem. No matter how able these computers are, or how human-like they seem, are they not just an amalgamation of careful programming and software? Surely the abilities of the machine come down to the cleverness of its programmer rather than its own intelligence. Furthermore, a five-minute chat with a judge, however human the machine appears, is surely not sufficient to determine intelligence.

It may help to consider what the cognitive scientist Douglas Hofstadter had to say about Watson, the Jeopardy!-winning computer: ‘It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means’. The illusion may be enough to fool a judge, but it is not enough to be intelligent. It is one thing to collect and repeat data on demand, and another to engage with and understand that data. This is the difference between a masquerade and the genuine article. We don’t want to test acting.

Many now think the Turing test measures only a computer’s ability to follow an algorithm that camouflages it as human. That does not make it sentient, or a person. But what does? Is there a better way to test for artificial intelligence? To answer that, we need to define what it is to be sentient, and what it is to be human.

People often understand sentience as possessing intelligence, self-awareness and consciousness (perhaps because of Star Trek, but hopefully the definition seems intuitive). According to functionalism, a popular theory of mind, mental states are functional states. Since computers implement functions, mental states would be analogous to the software states of a computer, and it would seem possible that the right program could give a machine an actual mind – the machine could then be intelligent. Self-awareness is described by Hofstadter as the ability to monitor one’s own behaviour so as not to get stuck in ruts. And lastly, consciousness. Nagel said “fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism. We may call this the subjective character of experience.”
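
As a toy illustration of the functionalist idea – and nothing more than that – consider a sketch in which a “mental state” is defined purely by its causal role: which behaviour and which next state each input produces. The states and stimuli here are invented labels, not a model of a real mind:

```python
# Toy functionalist "mind": each state is defined only by how it maps
# inputs to outputs and successor states. All names are illustrative.
TRANSITIONS = {
    # (current_state, stimulus): (behaviour, next_state)
    ("calm", "insult"): ("frown", "annoyed"),
    ("calm", "praise"): ("smile", "calm"),
    ("annoyed", "insult"): ("glare", "annoyed"),
    ("annoyed", "praise"): ("smile", "calm"),
}

def step(state, stimulus):
    """Return the behaviour produced and the next state."""
    return TRANSITIONS[(state, stimulus)]

state = "calm"
for stimulus in ["insult", "insult", "praise"]:
    behaviour, state = step(state, stimulus)
    print(f"{stimulus!r} -> {behaviour!r} (now {state!r})")
```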

The Turing test already requires some level of functional competence, in order to make relevant contributions to a conversation. But self-awareness would be harder to test: essentially it would rely on the judge’s ability to trap the computer in a cycle of repetition – an argument, perhaps – and on how many ways the computer can escape once it has become stuck. Consciousness is perhaps too advanced to be considered when testing for artificial intelligence – for now.
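
Such a “repetition trap” could be sketched as follows; `chatbot_reply` is a hypothetical stand-in for whatever system is under test, not a real API:

```python
# Sketch of the repetition trap: ask the same question repeatedly and see
# whether the machine varies its answer or loops. `chatbot_reply` is a
# hypothetical placeholder for the system under test.
def chatbot_reply(prompt):
    return "I told you already: I live in Odessa."  # a bot stuck in a rut

def seems_stuck(prompt, rounds=3):
    """True if repeated prompting yields the identical answer every time."""
    replies = {chatbot_reply(prompt) for _ in range(rounds)}
    return len(replies) == 1

print(seems_stuck("Where do you live?"))  # True: no variation at all
```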

Following the development of algorithms that let computers identify faces in pictures almost as well as people can, experts such as Olga Russakovsky have argued that a computer shows intelligence when it can infer. If it were able, for example, to infer from a photograph what would happen in the next few moments, this would demonstrate analysis of partial information, and allow behaviour based on that analysis.
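
A crude sketch of that kind of inference – with made-up observations, and nothing like a real vision system – might extrapolate an object’s next position from the few positions already seen:

```python
# Toy inference from partial information: given a few observed
# (time, height) readings of a falling ball (hypothetical data),
# predict where it will be one step later by linear extrapolation.
observations = [(0.0, 4.0), (1.0, 3.5), (2.0, 2.8), (3.0, 1.9)]

(t1, y1), (t2, y2) = observations[-2], observations[-1]
velocity = (y2 - y1) / (t2 - t1)          # estimate from the last two points
predicted_height = y2 + velocity * 1.0    # one time-step ahead

print(f"Predicted height at t={t2 + 1.0}: {predicted_height:.2f}")
```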

Poker has become a popular way of testing for AI, as it takes more intelligence to guess and gamble than to carry on a conversation. It is more encompassing than the traditional Turing test, not only because it focuses on monitoring and changing behaviour as circumstances change, but because it tests real behaviour rather than a written conversation. Recently, a computer built by Tuomas Sandholm’s team played against poker professionals and lost by only a slim margin; other poker computers can already beat humans at simpler versions of the game.
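
To give a flavour of the judgement involved – a minimal sketch only, with an invented win probability standing in for the game-theoretic opponent modelling real poker bots use – a bot must at least weigh its chance of winning against the pot odds of a call:

```python
# Minimal sketch of a poker bot's core gamble: call only when the
# estimated chance of winning beats the pot odds. The probability passed
# in is a hypothetical placeholder for real opponent modelling.
def should_call(pot, to_call, win_probability):
    """Call when expected value is positive: p > to_call / (pot + to_call)."""
    pot_odds = to_call / (pot + to_call)
    return win_probability > pot_odds

# 90 chips in the pot, 10 more to call: any win chance above 10% justifies calling.
print(should_call(pot=90, to_call=10, win_probability=0.25))  # True
```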

“I like poker a lot as a test, because it’s not about trying to fake AI. You really have to be intelligent to beat humans.” – Tuomas Sandholm, Carnegie Mellon University (CMU), Pittsburgh, Pennsylvania

It would appear that we may start discovering the keys to artificial intelligence by gambling with our computers. We still have a long way to go before a computer can understand the context of a photo or identify objects from different angles as well as a human, but we are taking significant steps toward that goal. The computers mentioned in this article are just a few among many impressive pieces of technology – extraordinary in isolation, and more significant still within the broader quest to give machines sentience.

