There has been much discussion of the potential dangers of artificial intelligence (AI), and much concern about the rapid pace of advances in AI research.
In the not-too-distant future, strong AI may become a reality. Like all technologies, AI could be used for good or for evil; or perhaps it could take matters into its own hands (metaphorically speaking) and make decisions outside the ethical bounds within which society normally operates.
A suitable analogy for the worldwide disruption AI could cause is the proliferation of nuclear weapons after World War II. History bears witness to the destructive power of thermonuclear devices: they have been used, they caused enormous damage, and atomic bombs have since proliferated widely. Various treaties, such as SALT I and II (the Strategic Arms Limitation Talks), codified bilateral agreements on nuclear weapons between the United States and the Soviet Union.
These treaties, along with a policy of "mutually assured destruction," have helped keep the nuclear peace for decades. No such treaties exist for AI, and there is no practical way to limit access to the software that powers it.
With AI, no exotic materials are required: no plutonium, no enriched uranium, no special centrifuges, no ballistic missile delivery vehicles. All that is needed is a copy of the source code and a computer network. From there, AI could spiral out of control exponentially faster than nuclear weapons ever spread. A new SALT, a "Strategic AI Limitation Treaty," would involve more than just the Cold War superpowers; it should include, at a minimum, the G20 nations. Controlling the spread of AI will be considerably harder than controlling the spread of nuclear weapons, but it is something we need to start thinking about now.
Fifty years ago, in the midst of the Cold War, nations began signing an international treaty to stop the spread of nuclear weapons. Today, as artificial intelligence and machine learning reshape every aspect of our lives, the world confronts a challenge of similar magnitude, and it demands a similar response.
There is a danger in pushing the parallel between nuclear weapons and AI too far. But the greater risk lies in ignoring the consequences of unleashing technologies whose goals are neither predictable nor aligned with human values.
Advances in AI and machine learning are moving so fast that today seems like yesterday, which makes the challenge urgent.
AI has provided us with amazingly beneficial tools, yet serious concerns have been raised about its use in lethal autonomous weapons.
Machines also increase our vulnerability to threats against civil liberties, democracy, and economic equality.
The ultimate solution is a non-proliferation treaty whereby nations agree to share the beneficial uses of artificial intelligence and accept universal safeguards against the misuse of these powerful technologies.