Risks of AI and its role in society

October 9, 2017    technology philosophy

This article is part of a series of essays I’ve written for an Introduction to the Philosophy of Science course at KTH in the fall of 2017.

AI is a world-eating technology

Much like what Marc Andreessen once said[1] about software in general, Artificial Intelligence seems to be slowly but surely eating the world.

Its applications are becoming nearly ubiquitous: from self-driving cars to “intelligent” judiciary systems, or even the voice recognition software in our phones.

One wouldn’t have too hard a time finding reasons why AI will take over every little aspect of our lives. After all, it gives more power to software companies by leveraging the precious data they collect on their users.

AI will also be capable of creating near-sentient systems where users feel like they’re using computers (or phones) with abilities on par with a real human being.

It won’t be long before digital assistants are capable of planning your schedule or sending emails on your behalf.

While these smart features make us stand in awe of what we as humans are capable of achieving, they also raise questions about the risks of having these near-sentient systems in our lives.

Summoning the demon

Elon Musk, the founder of a self-driving car company, confessed a few months ago that he has regular nightmares about awful scenarios in which humanity has managed to “summon the demon”.[2]

While we may be tempted to think that the billionaire founder has started to delve into the dark arts on his weekends, the reality is much more sobering.

Elon, whose opinion mirrors that of many other experts in the field, thinks that Artificial Intelligence could very easily go “rogue”. If we don’t control the rate at which we develop its capabilities, it could dramatically (and exponentially) improve the way it reasons and thinks, and surpass human intelligence.

After that, one can only speculate what kind of future this scenario would hold for our species. Interacting with such an artificial intelligence would be the equivalent of ants interacting with humans: we could be easily manipulated by it, and its ulterior motives would never quite make sense to us.

This existential threat looming over us calls for strong action to prevent a future of this sort from ever happening.

Defining limits

A first reasonable step would be to define hard limits on what we should let Artificial Intelligence decide about our lives.

Should we leave judiciary decisions, credit scoring, or even housing assignments to AIs? Doesn’t this free us from ever having to explain the reasons behind each of these kinds of decisions?

And even worse: doesn’t this prevent us from ever reasoning about the bias that was already present in our past decisions?

The field of Artificial Intelligence still has quite a way to go before producing a sentient super-intelligence as capable as the human brain.

Nonetheless, our priority should be first and foremost to make sure that, even in its infancy, we stay free of the arbitrary nightmare it might impose on us.

References

[1] Why Software Is Eating The World - https://www.wsj.com/articles/SB10001424053111903480904576512250915629460

[2] Elon Musk: ‘With artificial intelligence we are summoning the demon.’ - https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/?utm_term=.e2695d3c18be
