How a Cold War plan to stop nuclear proliferation could protect the world from an AI arms race

As Mark Twain reputedly said, “History doesn’t often repeat itself, but it does rhyme.” We are fortunate that it does, because it means we can use lessons from the past as at least a rough guide to the present. Those lessons can be literally lifesaving when we apply them to existential risks, of which the three biggest in our time are climate change, weapons of mass destruction, and artificial intelligence.

Of these three, AI stands out for the disproportionate amount of attention it has received recently. Triggered by breakthrough developments in generative AI, especially large language models, interest has skyrocketed from both the private sector and governments; the global AI market attracted 143 billion dollars of funding in 2023, a significant uptick from 90 billion dollars in 2022.

Much of this breathless interest comes from the promise of AI to revolutionize almost every aspect of modern life. From analyzing medical tests to crafting legal briefs to helping writers and artists create new works of art, no human activity seems immune to the potential benefits of AI.

The new AI arms race

But along with the promise come the perils. Because of AI’s potential to transform modern life, an arms race has broken out between countries, especially the United States and China, over who can harness the power of AI fastest and most efficiently. More specifically, both countries have been going to great lengths to strengthen and control the two pillars on which AI technology depends: the large amounts of data needed to train the models and the large amounts of computing power (including the chips) needed to crunch the calculations.

This new arms race arises in significant part from the military potential of AI. The same AI that can be used in smart labs to analyze medical tests or synthesize drugs can be used on battlefields to better distinguish enemy soldiers from friendly forces and to improve the efficiency of targeting enemy assets. The country that can most efficiently use AI in both civilian and military applications will have a significant edge in dictating the world order.

But perhaps the biggest dangers of AI come from what it could transform into in the future. Artificial General Intelligence (AGI) describes a hypothetical stage at which AI systems become self-aware and make their own decisions, acting in ways that go beyond, or even run counter to, the goals their human creators have programmed into them. While an AI with human-level intelligence is still a distant prospect, the striking and largely unexpected progress made by large language models and related systems in the last few years makes concerns about sudden jumps in AI capabilities far from unreasonable.

Given the potential for AGI and the race between countries to harness it for both constructive and destructive purposes, the question of AI safety and regulation has become as important as the development of the technology itself. Because of both the speed at which these developments are taking place and the fundamental unpredictability of the technology, we feel we are in uncharted territory. And yet, as the Twain quote above suggests, we are not. A proposal crafted almost eighty years ago could be our guiding star in navigating this brave new world.

The Acheson-Lilienthal Report

A few days after the atomic bombs were dropped on Hiroshima and Nagasaki, Robert Oppenheimer (the grandfather of one of the authors) wrote in a letter to his old teacher, Herbert Smith, that “the future, which has so many elements of high promise, is yet only a stone’s throw from despair.” The promise Oppenheimer was talking about was that the horror of nuclear weapons might abolish war. The despair was that mankind might not be wise enough to handle this millennial source of power without destroying itself.

In 1946, Under Secretary of State Dean Acheson asked David Lilienthal, the chairman of the Tennessee Valley Authority and soon-to-be chairman of the Atomic Energy Commission, to compose a report analyzing the threat of nuclear weapons and proposing a plan of action for the United States to present to the newly created United Nations. For scientific and industrial counsel, Lilienthal appointed Oppenheimer and a small team of consultants to advise him and craft the report.

Unofficially led by Oppenheimer, who knew the subject best, the committee came up with a proposal that was at once radical and logical. The Acheson-Lilienthal Report was presented to the president and secretary of state in March 1946. Written mainly by Oppenheimer, it contained three key conclusions:

  1. Nuclear energy is intrinsically a double-edged sword; the same reactors that can produce electricity can also produce weapons-grade uranium or plutonium. Because it would be very easy to cheat and very hard to impose an extensive system of inspection, no system of nuclear development that depended purely on policing and inspection would work.
  2. Thus, the only workable solution would be to put all nuclear development, from cradle to grave, from the mining of uranium to its reprocessing, in the hands of an international authority. 
  3. Because states would not trust each other to keep their promise to utilize the technology for only peaceful purposes, the international authority would deliberately spread the means of production equally among all countries. Thus, what was available to one country would be available to all others.

The final suggestion of the Acheson-Lilienthal Report was not just startling—it essentially advocated a form of peaceful proliferation—but also revolutionary. Making all parts of nuclear technology equally available would be like placing a gun with all parts disassembled within equal reach of every state. This would accomplish two crucial goals: One would be transparency. The other, counterintuitively, would be safety: Countries would not be tempted to cheat or build weapons because other countries would also have the same capability to cheat and build weapons.

The Acheson-Lilienthal plan did not come to pass. President Truman naively believed that the Soviets would never get the bomb and that it was Americans’ duty to keep atomic power in a “sacred trust for all mankind,” and Bernard Baruch, who was appointed to negotiate the plan with the USSR, inserted conditions he knew the Soviets would reject.

Instead, the belief that it was possible to monopolize scientific progress with secrecy caused the arms race the scientists had feared. Fission was no secret, and the illusion was quickly dispelled in 1949 when the Soviets tested their first bomb. The reality of fission was a reality about science itself, eloquently expressed by Oppenheimer when he said: “It is a profound and necessary truth that the deep things in science are not found because they are useful; they are found because it was possible to find them.”

What an Acheson-Lilienthal plan could mean for AI

Even though it failed when it was presented in 1946, a reincarnated form of the Acheson-Lilienthal Report provides critical guidance on how we can deal with AI. Let’s assume that AGI poses a threat as large as nuclear weapons. The main goals for AI, just like the main goals for nuclear weapons, are to ensure that the technology does not get into the hands of dangerous state and non-state actors and that it is put to good rather than harmful use.

But doing so requires acknowledging the truly radical idea hidden inside the already radical Acheson-Lilienthal Report: the plan can work only if it abandons secrecy and relies instead on verification and open discussion. The report made it clear that the only way the world could become a less dangerous place was if there was no arms race.

That is an idea we must embrace if we are to prevent a similar arms race between the United States and China over AI. The naive response would be to believe that perfect security is possible only with perfect secrecy, repeating the very mistake that set off the nuclear arms race. Secrecy cannot be a viable long-term strategy.

In fact, the problem is worse with AI than it was with nuclear weapons. Secrecy can work to some extent with nuclear weapons because nuclear proliferation is fundamentally limited by the availability of uranium or plutonium. But preventing AI proliferation would entail regulating snippets of code that float freely across the internet and are disseminated in college courses around the world.

The problem is again eerily similar to the problem with nuclear weapons after World War II when the politicians and generals thought that they could keep weapons technology secret. What they did not realize, and what scientists like Oppenheimer, Hans Bethe, and Leo Szilard kept trying to tell them, was that the fact that a nuclear weapon was possible was the only secret. Once that was out, the rest was a matter of time and talented scientists and engineers, and the Soviet Union had both. Similarly, the only real secret to AI is that the models can be built.

The key question regarding AI security then becomes not whether you should protect model weights or algorithmic secrets—you should—but whether resorting to extreme secrecy will make the world safer for and from AI in the long term. In fact, as with nuclear proliferation, any attempt by the United States to resort to secrecy will trigger bigger attempts by China to protect what it considers its own secrets.

The result will be an arms race in which secrecy and closed models incentivize attacks by both state and non-state actors, prevent troubleshooting when something goes wrong, and keep citizens at large from participating in the crucial debates over AI safety and regulation.

The path ahead

So if complete secrecy is not the answer, what should we do to make AI safe for the world? As the Acheson-Lilienthal Report suggests, we should adopt the opposite, radical approach. We should let the ship leak, albeit deliberately and in a controlled manner. What we need to do is open-source AI: the equivalent of having the report’s international authority make all parts of nuclear technology available to everyone.

If we are worried about China stealing our models, let’s make the basic algorithm and code—but not the model weights or critical algorithmic innovations—available to them so that both of us are working from the same common foundation; the barriers to cheating and misusing the technology would then be much higher.

The key role that scientists and engineers have to play in educating both statesmen and the public about these risks cannot be overstated. One of the biggest mistakes politicians and generals made after World War II was to let their scientific ignorance drive political decisions, thinking that the facts of science had nothing to do with policy. While scientists are not experts in statecraft or diplomacy, they possess facts that can inform and even force certain political decisions. The discovery of fission forced the world to realize that countries faced destruction if they did not cooperate, and the rise of powerful AI models should force a similar understanding on the world.

Fortunately, at least a few critical players have realized that making AI open is to the benefit of all. The first was OpenAI, which was created as an antidote to potential AI threats pointed out by figures such as Sam Altman, Bill Gates, Elon Musk, and Andrej Karpathy. And despite changes in tactics and business structure, and incentives for profit seeking, openness is increasingly recognized by technology leaders as the antidote to exponential technology risk.

The most recent example is Meta founder and CEO Mark Zuckerberg, who announced on July 23 that Meta’s Llama 3.1 AI model would be open source. This model, which has 405 billion parameters, can in turn be used to fine-tune models with fewer parameters. Zuckerberg’s clear-headed post distinguishes between intentional and unintentional harm caused by AI and proposes that open-sourcing the model would help mitigate both kinds.
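To make concrete what such openness means in practice, here is a minimal sketch, not taken from Zuckerberg’s post, of how anyone who accepts the model license can download an openly released Llama 3.1 checkpoint and run it locally. The Hugging Face transformers library, the checkpoint name, and the prompt are illustrative assumptions, and the small 8-billion-parameter variant stands in for the 405-billion-parameter model, which requires far more hardware.

```python
# A sketch of running an open-weights model locally, assuming the Hugging Face
# `transformers` library (plus `accelerate` for device_map="auto") and a
# license-gated Llama 3.1 checkpoint. The checkpoint name below is an assumed
# identifier used purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed checkpoint; requires accepting Meta's license

# Download the tokenizer and the weights; once downloaded, both live on the
# user's own machine and can be inspected, run, or fine-tuned offline.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Run a single local generation as a smoke test.
prompt = "Summarize the Acheson-Lilienthal Report in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the same weights are available to every researcher, company, and government that accepts the license, problems found in the model can be examined and fixed in the open rather than behind closed doors.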

Zuckerberg recognizes that closed models would not only benefit a small number of players but also enable that small group to threaten the entire ecosystem if they chose to. If state actors decide to misuse the models, other state actors can check their actions since they work off the same foundation. If non-state actors decide to misuse the models, states can rein them in, again because they have access to the same codebase. If the models themselves go rogue, as AGI pessimists fear, the entire world can come together and troubleshoot the problem based on common knowledge. It is a striking parallel to the Acheson-Lilienthal Report’s vision of nuclear technology being equally accessible to all.

In an open letter to the United Nations, the Danish physicist Niels Bohr, whose thoughts Oppenheimer implicitly enshrined in the Acheson-Lilienthal Report, imagined that the “goal to put above everything else” was “an open world where every country can assert itself only by the extent to which it can contribute to the common culture and help each other with experience and resources.” Secrecy cannot be fully compatible with AI’s promise to bring about huge dividends that could lead to a golden age of human well-being, creativity, and prosperity. The key to achieving that golden age would be Bohr’s open world.
