Benefits and Risks of Artificial Intelligence

What Is AI?

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly referred to as narrow AI (or weak AI), because it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Why Research AI Safety?

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. It may be little more than a minor nuisance if your laptop crashes or gets hacked, but it becomes far more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system, or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

AI might be the biggest event in human history.

Some doubt that strong AI will ever be achieved, while others insist that the creation of superintelligent AI is guaranteed to be beneficial.

How can AI be dangerous?

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent.

Is AI a Risk?

The AI is programmed to do something devastating: autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could lose control of such a situation.

The AI is programmed to do something beneficial: achieving this requires fully aligning the AI’s goals with our own, which is difficult. For example, if a super-intelligent system is tasked with an ambitious geo-engineering project, it might wreak havoc on the ecosystem as a side effect, and view human attempts to stop it as a threat to be met.

A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.
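The misalignment problem above can be illustrated with a deliberately simple sketch (all names here are hypothetical, invented for this illustration): an optimizer maximizes only the objective it is given, so any side effect the objective does not mention is simply ignored.

```python
def optimize(objective):
    """Pick the action level in 0..100 that maximizes the given objective."""
    best = 0
    for candidate in range(101):
        if objective(candidate) > objective(best):
            best = candidate
    return best

def output(level):
    # What we asked for: more output is better.
    return level

def ecosystem_damage(level):
    # A side effect we care about but never encoded in the objective.
    return level ** 2

# Misaligned: the objective rewards only output, so damage is ignored
# and the optimizer pushes the action level to its maximum.
misaligned = optimize(lambda a: output(a))

# Aligned: the cost of the side effect is written into the objective,
# so the optimizer balances output against damage.
aligned = optimize(lambda a: output(a) - 0.05 * ecosystem_damage(a))

print(misaligned)  # 100 -- maximal output, maximal damage
print(aligned)     # 10  -- output traded off against damage
```

The point of the sketch is that nothing in `optimize` is malicious; the misaligned outcome follows entirely from an objective that omits something we care about.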

AI has the potential to become more intelligent than any human.

The Top Myth About Advanced AI

AI’s future impact on the job market deserves close attention, as does the question of whether human-level AI will ever be developed, and whether it would lead to an intelligence explosion.

About the Author: Team Techiversy
