Sep 4, 2017


source: https://www.wsj.com/
Artificial intelligence is one of the hottest topics today, and why shouldn’t it be? With applications like autonomous cars, robots, chatbots, trading systems, facial recognition, virtual assistants and much more, AI is poised to bring us a lot of opportunities that will, without a doubt, change our lives. But, and this is a big but, we still have to consider the other repercussions and threats AI entails.

Artificial Intelligence Threats: the Risks We Face

There are two main artificial intelligence threats we’ll cover today: threats posed by autonomous machines and threats by intelligent machines.

Threats by Autonomous Machines

Threats from autonomous machines can come in many forms, from automation in the workplace to autonomous weapons that can cause mayhem and destruction if left unchecked. In its safer form, automation due to AI can displaces workers as machines perform tasks that previously required a human touch. As a matter of fact, we’ve been seeing this for decades now, as countless factory floors have exchanged workers in assembly lines with machines that work perfectly in sync.

In its more dangerous form, autonomous weapons that can select and attack targets without human input can be incredibly harmful if they’re badly programmed, trained or managed. And if you think we’re safe from autonomous weapons, you’ll be surprised to know that they’re already in existence. Most notably, Samsung’s SGR-A1 sentry gun, which can fire autonomously, perform surveillance, track and fire, is already being used to protect South Korea from its volatile neighbor in the North.

source: http://www.ubergizmo.com/

If you’re only batting an eye at this information, then know that Elon Musk, a man known by anyone interested in technology, has been sounding the warning bells against AI and automation for a while, calling it humanity’s ‘biggest risk’ and saying:

Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal...AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

Spurred on by the United Nation’s vote to begin formal discussions on autonomous weapons, such as drones, tanks and automated machine guns, Musk, along with 115 industry leaders from all over the world (a total of 116), including Google’s Mustafa Suleyman (co-creator of DeepMind), have signed an open letter, urging the UN to ban such weapons, calling them a Pandora’s box. They further state that:

"Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

Whether their fears are founded or not remains unknown, but our current technological progression makes it clear that a day of reckoning is dawning upon us.

Threats by Intelligent Machines


The trouble with intelligent machines, and by this I’m referring to actual intelligent machines that combine multiple computational tasks such as natural language processing, predictive analytics, pattern recognition, computer vision, etc., to produce self-awareness and consciousness, is that they can act on their own and make their own decisions.

The thing is, humans are the dominant species on Earth because of our intelligence. There are bigger and badder creatures out there, but our intelligence has aided us in our climb to the top. Now consider machines. Whereas human intelligence is limited to our brain, neuron count, speed of signals, etc., machines face no such limitations. Theoretically speaking, if we were to create an intelligent machine with more ‘brain power’ than us, it can very easily surpass us as the dominant species. We already have trouble understanding our own intelligence, now imagine a machine with more intelligence than us. It’ll be like lighting a match and running away — we would have no control after a certain point.

Another fear is that AI will entail a sort of ‘genie and three wishes’ effect, in which the programming has to consider every possible scenario the machine will face. For example, when wishing for world peace, a genie might easily make every human disappear, effectively achieving peace, albeit not the one the wisher had in mind. Similarly, if one were to program a machine to achieve world peace, the same outcome will happen if safeguards aren’t placed to counter every possible instance in which the machine will perform a harmful action.

What Can We Do About These Threats?

Many like Elon Musk believe that the best counter to artificial intelligence threats is to proactively regulate it before we reach a point of no return. Others like Irving John Good, a British mathematician and cryptologist, said that “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” Therefore, he and others like him believe that we would have to work with the machines to curb their threat.

In effect, we would want them to hold our own human values — the positive ones. We can use our own behavior as a guide, but then again, should we? We’re a very destructive species that constantly subjugates and exploits those lower than us. What if intelligent machines take note and do the same to us?

Final Thoughts

It may not be a happy ending, but we’re still in the dark with the many possibilities that AI can bring. As such, apart from taking heed of Elon Musk and proactively regulating it, our only recourse is to wait and see what’s to come.

0 comments:

Post a Comment

Thanks For Comment Please Share this Post to G+!