According to a July 21, 2023, White House statement, seven major AI labs (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) have voluntarily committed to safeguards for frontier AI models (e.g., a future GPT-5), including security testing before release. In the words of the White House, “These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI. As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.”
At first, this sounds like a welcome move, but I’m still skeptical. If these labs are serious about it, what happens if other labs, especially Chinese ones, don’t follow suit? Does it really make sense then? Although US companies are among the leading players in the tech market, the geopolitical dimension plays a prominent role and should not be ignored.
Many people compare the development of AI to the atomic bomb (perhaps also fueled by the Oppenheimer biopic). There is a telling quote from J. Robert Oppenheimer on “technological sweetness”: “When you see something that is technically sweet, you go ahead and do it, and argue about what to do about it only after you’ve had your technical success. That is the way it was with the atomic bomb.” And this kind of thinking is the reason why I remain skeptical, because developing new AI models is technically sweet. If there is a bitter aftertaste, it’ll come later.
In a recent interview with the Stanford Report, Scott Sagan, political scientist and co-director of Stanford University’s Center for International Security and Cooperation (CISAC), discussed what he thought after watching the Oppenheimer movie: the politics of nuclear proliferation, Oppenheimer’s attempts after World War II to constrain the new military technology, and the frightening role nuclear weapons play today, including the threats Russia has made in the ongoing war in Ukraine. “I hope the film really gets people interested in thinking through better ways of managing nuclear technology, in addition to other dangerous technologies,” said Sagan.
To that list of dangerous technologies I would add, for example, the combination of AI and weaponry: lethal autonomous weapon systems that decide for themselves which targets to kill.
Ultimately, however, humanity will have no viable choice but to seek global regulation to limit dangerous technologies, as Oppenheimer suggested as early as 1948, even though the efforts of the International Atomic Energy Agency (IAEA) to prevent the uncontrolled proliferation of nuclear weapons technology have not always been successful. Today, nine states possess nuclear weapons (the United States, Russia, Britain, France, China, India, Pakistan, North Korea, and Israel), and several others would like to have them. But how many states, or even private actors (!), would have nuclear weapons today if the IAEA did not exist?
Today more than ever, technology policy and regulatory decisions must also take geopolitical aspects into account.
Photo credits: Mikemacmarketing, Link