Should We Be Afraid of Artificial Intelligence?

August 10, 2017 • Tech Tips

By guest author Iain Roberts

This week, the British tabloid press got very excited about computers talking to each other.

A few weeks ago, researchers at Facebook tried making computer programs negotiate with each other in English. The programs spontaneously developed their own grammar, which at first glance looked like nonsense, but turned out to be an efficient form of communication.

Evil robots in science fiction have primed us to fear the worst. If computers are inventing their own language, what else might they do?

In the 1999 film The Matrix, a computer program named Agent Smith tells one of the heroes:

“Human beings are a disease, a cancer of this planet. You’re a plague and we are the cure.”

What if our computers reached the same conclusion?

Agent Smith is conscious. He understands what he is doing and why.

There are interesting philosophical debates over what consciousness is, and whether it can be achieved by computer programs as we currently understand them. These questions may never be settled; but we can be confident our technology is nowhere close to producing an Agent Smith.

Artificial intelligence (AI) programs are engines for playing games. A game has rules and a scoring system. The program takes actions allowed by the rules in order to maximize its score.

Sometimes a program does the unexpected. A classic example is a car racing game. The computer racer was instructed to cross the finish line as quickly as possible; but the finish line was also the start line. So instead of driving a lap, the program reversed by one car length, then drove forwards over the line to “finish” the race.
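The details of that racing game were never published, but the loophole is easy to reproduce in miniature. The toy Python sketch below (every rule and number is invented for illustration) scores the racer purely on how few moves it takes to cross the start/finish line going forwards, and the reverse-then-forward trick wins easily:

```python
# Toy illustration only, not the real racing game. The "rules" reward
# crossing the start/finish line going forwards in as few moves as
# possible, so the shortcut beats honestly driving a lap.

TRACK_LENGTH = 10  # positions 0..9; the start/finish line sits at position 0

def moves_to_finish(actions):
    """Count the moves until the car crosses the line going forwards,
    or return None if it never does."""
    position = 0
    for step, move in enumerate(actions, start=1):
        previous = position
        position = (position + move) % TRACK_LENGTH
        # The only rule: you "finish" by passing the line moving forwards.
        if move == +1 and previous == TRACK_LENGTH - 1:
            return step
    return None

full_lap = [+1] * TRACK_LENGTH  # drive all the way round the track
sneaky = [-1, +1]               # reverse one length, then cross the line

print(moves_to_finish(full_lap))  # 10 moves
print(moves_to_finish(sneaky))    # 2 moves: the higher-scoring strategy
```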

A human being might laugh with glee at this sneaky trick, but the racing program doesn’t have emotions. It’s a system for solving mathematical equations, nothing more.

A self-driving car is far more sophisticated, and controls physical objects instead of images on a screen, but the principle is not fundamentally different. The computer doesn’t dream of being Formula One champion. It doesn’t dream at all. For an AI program, nothing exists but the mechanics of the game.

The map is not the territory, as many philosophers have observed. This applies to humans as well. Our brains build a model of the world, which may differ from reality in important ways; but compared to our mental constructs, the games of AI are terribly simple.

In fact, the risks of AI come about because it is not conscious or aware of context. The 1983 film WarGames, in which a naive computer seeks to win a nuclear war, is more relevant than The Matrix. For example, Facebook has an algorithm which reminds you of earlier posts. It plays the game of choosing important posts, with rules set by its programmers. If a post gets a high score, it appears in your timeline.

From a human perspective, the results may be painful. I’ve received cheery reminders of my cat, a few days after I scattered his ashes in the garden. Another person had a similar experience when his daughter passed away. In terms of software engineering, this would be challenging, though not impossible, to fix; but Facebook has limited incentive to do so. It makes money from generating clicks and pageviews, which is not necessarily the same as avoiding distress to its users.
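Facebook has never published how these reminders are chosen, but the general shape described above, programmer-written rules producing a score and a threshold deciding whether a post resurfaces, might look something like this made-up Python sketch. Every field name, weight, and threshold here is invented; the point is only that nothing in the score knows what the post meant to the person who wrote it:

```python
from dataclasses import dataclass

@dataclass
class Post:
    years_ago: int
    likes: int
    comments: int
    has_photo: bool

def memory_score(post: Post) -> float:
    """Invented rules: reward engagement, photos and round-number
    anniversaries. The score says nothing about what the post was about."""
    score = post.likes * 1.0 + post.comments * 2.0
    if post.has_photo:
        score += 10.0
    if post.years_ago in (1, 5, 10):  # "anniversary" bonus
        score += 5.0
    return score

SHOW_THRESHOLD = 25.0  # invented cut-off for resurfacing a post

def should_resurface(post: Post) -> bool:
    return memory_score(post) >= SHOW_THRESHOLD

# A well-liked photo of a pet scores highly; the algorithm cannot know
# that the pet has since died.
cat_photo = Post(years_ago=1, likes=40, comments=6, has_photo=True)
print(should_resurface(cat_photo))  # True
```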

AI is a powerful tool, no more and no less. Like any tool, it can be misused, and the human being who controls it is responsible for the outcome.

Iain Roberts is the pseudonym of a senior software developer at a research institute in Cambridge, UK. He completed a PhD in Artificial Intelligence in 2005, and blogs at http://blog.iainroberts.com.

