MY THINKING

On Technological Sweetness, the nuclear moment and AI

At first, this sounds like a welcome move, but I'm still skeptical. If these labs are serious about it, what happens if other labs, especially the Chinese ones, don't stop? Does it really make sense? Although US companies are among the leading players in the tech market, the geopolitical dimension plays a prominent role and should not be ignored.

Many people compare the development of AI to the atomic bomb (perhaps also fueled by the Oppenheimer biopic). There is a telling quote from J. Robert Oppenheimer on "technological sweetness": "When you see something that is technically sweet, you go ahead and do it, and argue about what to do about it only after you've had your technical success. That is the way it was with the atomic bomb." This kind of thinking is exactly why I remain skeptical: developing new AI models is technically sweet. If there is a bitter aftertaste, it'll come later.

A case in point is the combination of AI with weaponry, aimed at building lethal autonomous weapon systems that decide for themselves which targets to kill.

Ultimately, however, mankind will have no viable choice but to seek global regulation of dangerous technologies, as Oppenheimer suggested as early as 1948. The efforts of the International Atomic Energy Agency (IAEA) to prevent the uncontrolled proliferation of nuclear weapons technology have not always been successful: today, nine states possess nuclear weapons (the United States, Russia, Britain, France, China, India, Pakistan, North Korea, and Israel), and several others would like to have them. But how many states, or even private actors (!), would have nuclear weapons today if the IAEA did not exist?

Today more than ever, technology policy and regulatory decisions must also take geopolitical aspects into account.