The Saga of Nuclear Energy

Alain Bécoulet, author of "Star Power: ITER and the International Quest for Fusion Energy," on the history of nuclear power.
By: Alain Bécoulet

In a sense, nuclear energy got off to a bad start. The remarkable revolution in science and technology that started toward the end of the 19th century and expanded in the early 20th century assumed terrifying dimensions in the destructive rage of the First, and then the Second, World War. The atomic bomb has come to symbolize divine punishment for human beings and their thirst for knowledge, which has given them the means to kill themselves off once and for all.

This modern myth of Prometheus still shapes the way people think. Anything one says about nuclear energy requires that one distinguish between military and civilian applications. By contrast, chemistry is not asked to account for the assorted explosives that have been used in wars and terrorist acts for hundreds of years, and no one expects biology or medicine to justify the existence of bacteriological weapons.

The tumultuous debut of nuclear energy can be precisely dated: August 6, 1945, eight o’clock, sixteen minutes, two seconds, Japanese time, when the bomb christened “Little Boy” exploded directly above Shima Hospital in Hiroshima. In a fraction of a second, Little Boy unleashed the equivalent of 15,000 tons of TNT, instantly killing tens of thousands of civilians, leveling everything within a two-kilometer radius, and causing, for years to come, incalculable damage and illness from the blast itself, radiation, and, to a lesser extent, the ensuing contamination. In the most brutal way, the world at war discovered the unbelievable might locked away in the core of the atom.


Even the bomb’s inventors were surprised — and, in some cases, traumatized for life. Leó Szilárd, the Hungarian physicist who first conceived of the nuclear chain reaction and pressed the Americans to develop it as a weapon, would later declare: “Suppose Germany had developed two bombs before we had any bombs. [ . . . ] Can anyone doubt that we would then have defined the dropping of atomic bombs on cities as a war crime, and that we would have sentenced the Germans who were guilty of this crime to death at Nuremberg and hanged them?” This nuclear explosion, then, was the culmination of a mad race between the United States and Germany, which lasted for more than three years, to master and wield extraordinary power — a new clash of the Titans, or battle between Good and Evil.

To gauge the “madness” of this race, we should note that the very first test of a plutonium bomb took place in the New Mexico desert on July 16, 1945, a mere 21 days before Hiroshima. Just five days later, on July 21, President Truman officially gave the green light for the operation. Detonating Little Boy over Japan was the result of secret research the U.S. military had conducted from 1939 on, dubbed the “Manhattan Project”; in 1942, the government granted it almost unlimited support. Indeed, at the start of the Second World War, Leó Szilárd and Eugene Wigner had informed President Franklin D. Roosevelt that a new understanding of the uranium nucleus made it possible to develop weapons infinitely more powerful than conventional ones — and, moreover, that Nazi Germany was actively pursuing such arms. The side that managed to make them first would be in a position to crush its adversary. This is precisely what happened in August 1945, except that the bomb fell on Japan; Germany had capitulated three months earlier, without the Allies detonating an atomic weapon over Berlin.


Instead of entering into details about major and minor developments in the Second World War or the Manhattan Project, we should retrace the important stages of research and science that made the new weapon possible. From the time of ancient Greek civilization up to the end of the 19th century, physicists viewed matter as being constituted by elementary particles, atoms. The word’s etymology (a-tom, “uncuttable”) points to the idea of wholeness. Only as the 20th century approached did the discovery of radioactivity cast doubt on this certainty. Matter, it turned out, is capable of spontaneously emitting particles and, in this manner, of changing itself. The new generation of physicists included Henri Becquerel, Pierre and Marie Curie, and Ernest Rutherford — to mention only a few pioneers. It was not until the period between the two World Wars that researchers concluded that the atom holds a tiny nucleus with a positive charge, surrounded by a cloud of negatively charged electrons, so that the whole forms a neutral structure.


This nucleus, research revealed, is constituted by two (and only two) types of elementary particles, or nucleons: the proton, which carries a positive electrical charge, and the neutron, which, as its name indicates, bears no charge. The simplest nucleus is that of the hydrogen atom, which comprises a single proton. Then come the nuclei on Mendeleev’s periodic table: The helium nucleus is made up of two protons and two neutrons, lithium of three protons and three or four neutrons, and so on. The helium nucleus soon came to be called the alpha particle, since it appears in phenomena of natural radiation (for example, what Marie Curie observed in samples of uranium ore). In the case of lithium, the third element on the periodic table, there are two isotopes (that is, two nuclei with the same number of protons but a different number of neutrons), lithium-6 and lithium-7, both of which are stable. In fact, almost all the atoms on the periodic table can occur in the form of one or more isotopes displaying varying levels of stability.

The force that binds nucleons, positive or neutral, to each other is very different from the force that makes electrons orbit the nucleus. It is called the strong interaction, in reference to the high cohesion exhibited by stable nuclei and their ability to overcome the repulsive force that protons exert on each other. The cohesion of nuclei, scientists determined, depends on the balance between the number of protons and the number of neutrons. This observation led to the coining of the phrase “valley of stability.” For small nuclei, the valley of stability corresponds to a similar number of protons and neutrons; for larger nuclei, it involves a much higher number of neutrons than protons. Uranium-238, the most stable isotope of uranium, contains 92 protons and 146 neutrons. If a nucleus possesses a number of neutrons and protons too far away from the valley of stability, it will disintegrate by breaking up into pieces; in the process, it frees up energy in the form of particles that are ejected at great speed. This is the phenomenon of natural radioactivity. Experimentation has confirmed three major types of radiation, each of which corresponds to the emission of specific nuclear particles: alpha radiation, whereby the nucleus emits a helium nucleus as it breaks apart; beta radiation, where one or more electrons are ejected from the nuclear structure as it undergoes modification; and gamma radiation, where one or more photons are released.
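For readers who like to see the bookkeeping, the three types of radiation can be put in schematic form (a generic sketch using standard notation rather than anything in the text: X is the parent nucleus, Y the daughter, A the number of nucleons, Z the number of protons):

\[
\alpha:\ {}^{A}_{Z}\mathrm{X} \rightarrow {}^{A-4}_{Z-2}\mathrm{Y} + {}^{4}_{2}\mathrm{He}
\qquad
\beta^{-}:\ {}^{A}_{Z}\mathrm{X} \rightarrow {}^{A}_{Z+1}\mathrm{Y} + e^{-} + \bar{\nu}_{e}
\qquad
\gamma:\ {}^{A}_{Z}\mathrm{X}^{*} \rightarrow {}^{A}_{Z}\mathrm{X} + \gamma
\]

In alpha decay the nucleus sheds a helium nucleus; in beta decay a neutron inside the nucleus converts into a proton while an electron (and an antineutrino) is ejected; in gamma decay an excited nucleus releases its surplus energy as a photon.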

The real revolution, for our purposes here, dates to a 1934 experiment performed by Irène and Frédéric Joliot-Curie in France. By bombarding stable atoms with alpha particles derived from natural radiation, they induced a new kind of radioactivity that continued even when the initial alpha radiation had ceased. The nuclei “irradiated” in this manner transformed into new nuclei, which were radioactive themselves. Induced radioactivity had been achieved.


Word of induced radioactivity spread very rapidly, and the phenomenon was widely documented by other researchers. Just a few years later — on the eve of the Second World War — scientists recognized that it was possible to generate reactions of induced radioactivity in a “chain.” If the right material is chosen, it will, when irradiated by neutrons, disintegrate and emit neutrons of its own. When such a reaction brings forth more neutrons than it consumes, the pieces are in place for a nuclear reaction that will sustain itself — or even “run away” — provided that a sufficient amount of fissile material (so-called critical mass) is available and that more than one neutron is generated by each reaction in the series. As we have seen, it would only take a few years before the Manhattan Project reached its goal. Many researchers contributed to advances in the field, among others the German physicist Otto Hahn, Niels Bohr from Denmark, and the Austrian-born physicist Lise Meitner, who, because she was Jewish, had to flee Germany for Sweden in 1938. Of course, scientists were also hard at work in the United States; their ranks included Enrico Fermi and the aforementioned Leó Szilárd and Eugene Wigner. French researchers including Frédéric and Irène Joliot-Curie played a major role, as well.
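The condition described here can be stated in one line. Using the standard shorthand of a neutron multiplication factor (notation introduced here for illustration, not taken from the text), let k be the average number of neutrons from each fission that go on to cause another fission; after t generations, an initial population of n_0 neutrons has grown to roughly

\[
n_{t} \approx n_{0}\,k^{t}.
\]

The chain dies out for k < 1, holds steady for k = 1, and grows exponentially for k > 1, which is why a bomb requires both a critical mass of fissile material and more than one useful neutron per reaction, while a reactor aims to hold k at exactly 1.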

The rush to domesticate fission in order to produce electricity started in the United States, the Soviet Union, and France immediately after the Second World War. The Manhattan Project had already represented an initial effort, to a certain extent. From 1942 on, work continued at the University of Chicago, where Leó Szilárd and Enrico Fermi assembled fissile products for the first time: uranium, in the form of metal and oxides, placed in layers on a “neutron moderator” composed of graphite. In this arrangement, which soon came to be known as an atomic pile, the moderator serves to slow the neutrons produced by the nuclear reaction, while a portion of them is captured in order to prevent the reaction from running away in uncontrolled fashion, as occurs in a bomb. The design limits how many of the neutrons produced by each fission go on to induce further reactions and therefore promotes a steady process.

The fundamental principle at work in the very first reactors was to use naturally occurring uranium extracted from ore; in addition to the most plentiful isotope, with 238 nucleons, it contains a small portion of uranium-235, which has three fewer neutrons per nucleus. When uranium-235 absorbs a neutron, it momentarily transforms into uranium-236. Uranium-236 is unstable and undergoes fission in different ways. In this process, it can generate a nucleus of krypton-93 and a nucleus of barium-140, for instance, or a nucleus of strontium-94 and a nucleus of xenon-140. In the first case, three neutrons are set free, and in the second, two. Needless to say, these freed neutrons open the possibility of chain reactions. (That said, uranium-235 makes up less than 1 percent of natural uranium. To be used in a chain reaction, the uranium needs to be enriched to a level between 3 percent and 5 percent.)
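Written out as nuclear equations, the two example channels balance exactly, with 236 nucleons and 92 protons on each side (a sketch of the bookkeeping rather than a complete list of fission products):

\[
n + {}^{235}\mathrm{U} \rightarrow {}^{236}\mathrm{U} \rightarrow {}^{93}\mathrm{Kr} + {}^{140}\mathrm{Ba} + 3n
\]
\[
n + {}^{235}\mathrm{U} \rightarrow {}^{236}\mathrm{U} \rightarrow {}^{94}\mathrm{Sr} + {}^{140}\mathrm{Xe} + 2n
\]

The two or three liberated neutrons are what make a chain reaction possible, provided enough uranium-235 nuclei are nearby to absorb them.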

France had no intention of standing at the sidelines. Recognizing the country’s strategic interest in nuclear power for both military and civilian applications, General Charles de Gaulle had the foresight to create the Atomic Energy Commission (Commissariat à l’énergie atomique; CEA) in 1945. The first atomic pile in France, baptized “Zoé,” was launched on December 15, 1948, at the Fort de Châtillon, near Paris. Frédéric Joliot-Curie, now the first high commissioner for atomic energy in history, supervised operations, and President Vincent Auriol was the project’s sponsor. Zoé used heavy water — water in which hydrogen atoms have been replaced by one of hydrogen’s isotopes, deuterium — as a moderator.

At the same time, hundreds of laboratories and commercial enterprises started to look for the best way to design fission reactors in order to achieve an optimal combination of performance, power, safety, and reliability. The giants of the worldwide nuclear industry now came into being. Above all the rest towered Westinghouse. This American firm gained the upper hand in the market with a licensed design that lies at the origin of most plants built not just in the United States but also in France and China. In equal measure, efforts were launched across the globe to develop methods of enrichment. As a rule, such technology relied on gaseous diffusion or gas centrifuges, which offered particular economic advantages. Other procedures, based on uranium chemistry or on lasers, were also investigated.

Today, the world is home to many nuclear reactor networks, which vary in keeping with the kinds of fuel, moderator, and coolant employed. They fall into two major families, according to the speed of the neutrons that sustain the chain reaction. Thermal, or moderated, reactors follow the principle of slowing the neutrons released but not absorbing them, which promotes the fission of uranium-235 or plutonium-239. Fast reactors do not slow neutrons; instead, they use them to split heavy atoms present in the fuel (such as uranium-238 or thorium-232), which neutrons of lower energy cannot break apart. In the latter case, scientists speak of “fertile material.” Using uranium-238 directly offers the further advantage of not requiring fuel to be enriched beforehand. Fast reactors, which are also known as breeder reactors, generate fissile material from fertile material in excess of the amounts naturally present at the start of the cycle. As such, they open the prospect of much greater durability: the resources required would remain available for thousands of years. Likewise, they have the potential to “burn” radioactive material generated by other kinds of nuclear reactors and can therefore “process” by-products (for instance, plutonium-239).
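The breeding process mentioned here can be sketched for uranium-238 (a standard textbook chain, not tied to any particular reactor design):

\[
n + {}^{238}\mathrm{U} \rightarrow {}^{239}\mathrm{U} \xrightarrow{\ \beta^{-}\ } {}^{239}\mathrm{Np} \xrightarrow{\ \beta^{-}\ } {}^{239}\mathrm{Pu}
\]

A neutron captured by fertile uranium-238 yields uranium-239, which decays in two beta steps (with half-lives of roughly 23 minutes and 2.4 days) into plutonium-239, itself fissile and usable as fuel; an analogous chain turns thorium-232 into fissile uranium-233.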

Some 450 nuclear reactors are in operation across the globe at the moment. Almost all of them are thermal reactors. For the most part, fast-neutron reactors are prototypes in the service of research and development.

For some time now, it has been standard practice to classify fission reactors by generation. Doing so enables us to distinguish between different stages of technological evolution and levels of safety. The first generation of reactors extends from the years following the Second World War up to about the 1970s. Examples include Calder Hall, a facility at the Sellafield site in England in use from 1956 to 2003, and Chooz A, built in France by Framatome under license from Westinghouse, which was in operation from 1967 to 1991. The model of reactor known as “UNGG” (Uranium Naturel Graphite Gaz) — or, alternatively, “graphite-gas” — also belongs to the first generation. UNGG reactors were the first examples of French design; inspired by Zoé (the atomic pile), they were built at Marcoule, Chinon, Saint-Laurent-des-Eaux, and Bugey; operations ceased between 1968 and 1994.

Reactors built between 1970 and the end of the 20th century represent the second generation. In France, the nuclear industry flourished during the oil crises of the 1970s; even now, plants dating to this period form the majority of reactors in operation. The same holds for most nuclear power stations across the globe. Second-generation facilities include pressurized water reactors for the most part, but also boiling water reactors and so-called advanced gas-cooled reactors.

The Chernobyl disaster in 1986 occurred on a graphite-moderated boiling water reactor of Soviet design, the RBMK-1000 (реактор большой мощности канальный, or “high-power channel-type reactor”). The event revealed flaws associated with second-generation technology, as well as safety problems attendant on the use of the equipment: missing provisions for confining radioactive material in the case of accident, manual aspects of operation that posed safety risks, inadequate oversight, and, to be sure, suboptimal management of crisis situations. These shortcomings concerned not just the Chernobyl plant in particular but the nuclear industry as a whole — which entered the 21st century with a shaky bill of health. The more recent accident at Fukushima revealed further risks associated with second-generation reactors, and at multiple levels of operation.

Stronger safety regulations, heightened monitoring of production levels, and improved international communications are in place for third-generation facilities. In addition, technical solutions have been developed to limit — if not eliminate — potential causes of accidents. Reactors designed from the 1990s on, in the wake of the Chernobyl disaster, have been scheduled to enter operation from the early 2010s onward. They include, in particular, the so-called EPR reactors now being built in Finland, France, and the United Kingdom; the first plant of this kind has just started up in China. The chief goals are increased safety and a higher rate of economic return.

In concluding this rapid — and necessarily incomplete — overview, we should note that the research community is developing a fourth generation of designs. For the most part, they are fast-neutron reactors, based on a conception and mode of operation rather different from their predecessors (although, obviously, they will benefit from safety improvements made in the third generation). The goal is to create reactors that will consume fertile materials and, in so doing, reduce the amount of waste that has been generated until now. The undertaking is not entirely new. A fast-neutron reactor has been up and running in Russia since the 1980s. In France, a prototype called Phénix operated for more than 35 years; its industrial extension, Super Phénix, began operation in the 1980s, too, although it was shut down at the end of the 1990s for political reasons.

In 2001, the Generation IV International Forum was launched to promote and coordinate work in the field. In this context, some half-dozen projects have emerged, bringing together more or less all researchers working on fast reactors from across the globe. France is in charge of one of these initiatives, which has been baptized “ASTRID” (Advanced Sodium Technological Reactor for Industrial Demonstration). ASTRID will take up, and improve on, key aspects of the Phénix reactor, in particular, the use of molten sodium for cooling.

At this juncture, it’s worth pausing for a moment to discuss nuclear energy’s public acceptability — or lack thereof — over the first 70 years of its history. Questions fraught with paradox haunt nuclear plans even now.


Although the aftershocks of the United States’ use of two nuclear weapons against Japan were felt for some time, the period following the Second World War was synonymous with Allied victory and a strong initiative to rebuild. From here on, conquest of the atom would symbolize power and growth — which, of course, also meant the struggle for supremacy. Only the five countries that possessed atomic technology obtained permanent seats on the extremely exclusive United Nations Security Council: the United States, the Union of Soviet Socialist Republics, the United Kingdom, France, and China. This body promptly barred access to nuclear weapons not just for defeated Japan and Germany but de facto for everyone else, too. Efforts to prevent proliferation had begun. Although it proved difficult to check the exchange of knowledge in an academic milieu, information was rigorously classified to stop technical know-how about weaponry from trading hands; likewise, by civil and military means (including covert measures), access to fertile and fissile material was blocked, as were efforts to enrich uranium-235.

Needless to say, the same period witnessed mounting rivalry between the major winners of the Second World War. The United States and the Soviet Union quickly began to steer a political and economic course in line with the new arms race. The number of weapons constructed on both sides soon warranted the title of “escalation,” with “nuclear umbrellas” extending over the territories of the superpowers’ respective allies. The “Cold War” of deterrence between two giants, waged by means of both truths and falsehoods, lasted until 1991, when the Soviet bloc finally broke apart. Up to this point, any number of incidents occurred — sometimes with the prospect of direct nuclear deployment, as in the Cuban Missile Crisis (1962). The first 15 years of nuclear energy, then, centered on the power of weaponry, and the international public had little chance to become aware of peaceful uses for the atom (or, for that matter, the drawbacks that might be involved).


Blocking access to nuclear energy barely lasted for a decade. At its 1958 conference in Geneva, the International Atomic Energy Agency, founded by the United Nations, launched a worldwide program called Atoms for Peace. Proper monitoring would provide a more realistic plan in the world now being rebuilt than wholesale bans. A large portion of nuclear research was declassified, giving rise to international cooperation that was robust, structured, and overseen by the International Atomic Energy Agency itself. Information made available concerned areas and technologies that would ensure peaceful uses of nuclear power; at the same time, strict measures of confidentiality were intended to prevent unauthorized parties from developing atomic weaponry. For the first time, the general public was made aware of the difference between military and civilian applications.

This brings us to the youngest but brightest member in the “atomic family”: thermonuclear fusion, the same process that makes the stars shine. Whereas the first atomic bombs — generically known as “A-bombs” — relied on fission of uranium or plutonium, the escalation that ensued soon prompted governments and militaries to seek a weapon even more powerful. The new kind of bomb relied on fusing two hydrogen isotopes, deuterium and tritium, to produce significantly more energy per unit of mass than fission (which uses elements with large nuclei). The process here is not a chain reaction. Fusion requires the collision, at immense speed, of a deuterium nucleus and a tritium nucleus, which yields a highly energetic nucleus of helium-4 along with a free neutron. Nothing in this reaction can give rise to further amplification. What’s more, it can only occur when nuclei are traveling at high speed and collide frequently. The combination of deuterium and tritium must be brought to a state of intensive thermal agitation and prove sufficiently dense — that is, reach an extremely high pressure level (pressure being the product of the density of the medium and its temperature).
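The reaction in question, with its well-established energy release of 17.6 MeV shared between the two products, reads:

\[
\mathrm{D} + \mathrm{T} \rightarrow {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV}),
\]

and the pressure condition given in parentheses corresponds to the familiar ideal-gas relation \( p = n\,k_{\mathrm{B}}T \), with \( n \) the particle density, \( T \) the temperature, and \( k_{\mathrm{B}} \) Boltzmann’s constant (quoted here for orientation; the text itself simply calls pressure the product of density and temperature).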

The temperatures in question equal those found at the core of stars, ranging from tens to hundreds of millions of degrees. It’s quite the challenge. The quickest and most direct way to obtain such temperatures and pressures is to place the mixture of deuterium and tritium at the center of an A-bomb; the bomb’s explosion will compress the mixture, yielding the conditions necessary to set exothermic fusion into motion. This is the principle behind the H-bomb, which the United States set about developing — in order to stay one step ahead of the competition — after the Russians detonated their first atomic bomb on August 29, 1949. The first American hydrogen bomb exploded on November 1, 1952. Before long, its Soviet counterpart followed.

In 1958, then, when the conference Atoms for Peace was held, the public did not yet recognize the difference between atomic and hydrogen bombs. But researchers were already working to use nuclear fission to peaceful ends and to harness the energy produced by nuclear fusion. When information pertaining to these two branches of nuclear physics was declassified, the gates opened for one of the greatest adventures of the human spirit and mind: controlled thermonuclear fusion.


Alain Bécoulet is Head of Engineering for ITER, an international nuclear fusion research and engineering demonstration project in France. Previously, he was Director of the French Magnetic Fusion Research Institute. This article is excerpted from his book “Star Power: ITER and the International Quest for Fusion Energy.”
