Can We Stop or Significantly Delay an AI Takeover?
Just as governments have worked collectively through the United Nations to address global warming, we could conceivably see collective action to prevent an AI cataclysm. Countries could act together to put rules and safeguards in place, but if the UN's efforts on global warming are any indication, we're likely to see uneven compliance across countries and a classic game-theory dynamic: malevolent countries free-riding on complying countries, which do the right thing at their own competitive disadvantage.
Thus, given the tremendous benefits AI technology can provide a nation, combined with the presence of rogue and authoritarian states that tend to refuse to be bound by international standards, it's not at all certain we could develop any reasonably enforceable worldwide limitations on AI applications. And as with every new strain of COVID, bad things are rarely contained by borders.
Most nations on earth have signed on to the OECD's nonbinding Principles on Artificial Intelligence, which focus on the collective good of AI while acknowledging that the related risks should be continually explored. At least it's a start.
Similarly, a number of professional organizations exist for AI experts. But these, too, appear more engaged in sharing best practices than in setting enforceable guidelines and professional standards. Hopefully, leaders in these organizations will coalesce around a code of conduct that places appropriate guardrails against the exploitation of AI. But all it takes is one bad actor, a criminal or a scientist compelled by fame, curiosity, or their government, to upset the apple cart in spite of the best intentions of 99.99% of AI computer scientists.
How Might It All End?
It’s not easy for a futurist to speculate about the end of the future.
Yes, AI will eventually break down, and yes, we may retain the ability to cut the power and disconnect communication lines, but the problem is far bigger than those last-ditch efforts.
The tipping point will come when AI is no longer a tool used for research and specific, limited process improvement. It will happen when AI evolves into a general-purpose, life-enhancing strategy that's given carte blanche to solve a pressing global problem, manage a wide swath of our lives, or improve an area of national priority: national security, climate change, natural resource allocation, wealth distribution, or just about any major vexing problem or challenge we face.
It will happen as soon as we say once too often, and in the wrong context, that “a machine can do this better than a human.”
That is what will open an AI Pandora’s Box that we won’t be able to close.