
Robots: Friend or Foe?

February 8, 2018 • Tech Tips

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”

Stephen Hawking

2001: A Space Odyssey. Star Trek. The Jetsons. All depictions of a future or alternate world that (in some cases, 17 years too late) we are finally seeing come to life. All because of Artificial Intelligence.

In the last few years, scientists have come closer than ever to replicating the processes of real human neurons, allowing machines to perform more complex functions. This means that some really cool innovations, like self-driving cars, are in our very near future. We have AI working on problems like climate change, your dating dry spell, and medical diagnostics. It is going to make our world more efficient and possibly even save us as a species.

But all of those pros don’t come without some cons (see: the entire series Black Mirror).

Besides the obvious worries about job losses, some experts are genuinely fearful that AI could mean disaster for humans. And this isn’t a worry for the next generation – many of these issues are with us now.

Perhaps you have read about face-swapping technology that makes it not only possible, but easy, to switch people’s faces. On Snapchat, it’s fun – look, I’m a dog and my dog’s a dude! – but when it comes to putting the face of a celebrity (almost always a female celebrity) on an adult actress’ body for porn, it is a little less “innocent-fun” and a little more “creepy-not-so-fun”.

One of the fundamental truths about AI is that humanity and society have not evolved simply because we all followed simple formulas and algorithms, but because of things that are more difficult to code, like complex moral judgments. Morality is a cultural construct, not a binary truth. Think of self-driving cars. There is a thought experiment ethicists use to discuss ideas like this, called the Trolley Problem. Essentially, if a runaway trolley is on a path to kill an entire group of people, do you pull a track switch to veer onto another path that kills only one person instead? Someone has to program the cars to make decisions like this, which means that we are giving a handful of people the ability to make moral decisions for every human who gets into that car.
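To make that concrete, here is a deliberately oversimplified, purely hypothetical sketch of what “programming the car” might reduce to. The function name and the “fewest casualties wins” rule are assumptions for illustration only, not how any real autonomous-vehicle system actually decides.

    # A purely hypothetical sketch (not anyone's real system): one way a
    # programmer could reduce the Trolley Problem to a single hard-coded rule.

    def choose_path(stay_casualties, divert_casualties):
        """Return 'divert' or 'stay', picking whichever path harms fewer people."""
        # The moral judgment lives in this one comparison: counting lives
        # is assumed to be the right way to weigh the choice.
        if divert_casualties < stay_casualties:
            return "divert"   # pull the switch, harm the few
        return "stay"         # stay the course, harm the many

    print(choose_path(stay_casualties=5, divert_casualties=1))  # prints "divert"

Whoever writes that one comparison has, in effect, made the moral call on behalf of every passenger – which is exactly the point.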

There are potentially even more sinister implications to AI. Elon Musk and Stephen Hawking have famously warned us about the dangers of AI, and they aren’t just speaking about theoretical trolleys. Sure, it is fun to joke about the human-robot wars, like an absurd Terminator dystopia, but AI puts an unprecedented amount of power and information in the hands of a few people. That kind of concentration of power has rarely worked out for the best; AI could very easily be used as a tool in our own subjugation if we don’t proactively prepare for such eventualities now. There are also really scary implications for automated weapons and drones in war. Does making it easier to kill people mean more people will be killed? And are we okay with that?

I think AI is incredibly cool. It has the ability to save us time, money and possibly even lives. There is so much wonderful potential, and humans’ ability to create is truly astounding. But I also think we have an astounding propensity for destruction. We need to seriously consider the cost of these innovations, because we aren’t just creating Rosie the Robot Maid; we are creating a world we might not be able to control.

What do you think?

