The Breakthrough Problem, or Why the Drugs Don’t Work Like They Used To

From Pasteur to Pfizer, the unparalleled arc of medical and public health advances in the last two centuries has taught us to expect miracles. Quietly, however, these have been getting more challenging.
By: Michael Bhaskar

Louis Pasteur, aged 57 and already the most feted scientist of his age, was on the cusp of a new breakthrough. Pasteur had been studying chicken cholera. While preparing the bacillus, he accidentally left the cultures in his laboratory over the summer. Returning in the autumn, Pasteur stumbled across his old experiment. Picking up the research, he injected a group of chickens with the old bacillus. Unexpectedly, they didn’t become severely ill but actually recovered. Pasteur assumed the cultures had somehow gone off and tried again, injecting those same chickens and a new set with a fresh batch of the disease.

Then something interesting happened. The new chickens died; but the chickens previously injected with the old culture once again survived. Why would those chickens — against all reasonable expectations — live on? Upon hearing the news, Pasteur had an epiphany: “Don’t you see, these birds have been vaccinated!” he reportedly exclaimed.

Vaccination had been known since at least the late 18th century, when Edward Jenner realized that cowpox could create immunity to smallpox, a devastating killer. And inoculation, the principle behind vaccination, had been practiced for far longer. But until Pasteur, no one had generalized from there to a foundational medical principle. He saw the link between his spoilt culture, cowpox, and immunity. Though vaccination was common knowledge, it was only he, at this moment, who made the decisive breakthrough. “Fortune favors the prepared mind” is one of Pasteur’s most famous quotes. Few if any minds were as prepared as his.

This article is adapted from Michael Bhaskar’s book “Human Frontiers: The Future of Big Ideas in an Age of Small Thinking.”

Vaccination had first been noticed by chance. But Pasteur, with mounting excitement that gave him sleepless nights, saw the possibility of directing that process. Experimenting with anthrax, he and his team realized that weakened versions of the bacteria produced subsequent generations that were also weakened. In February 1881, he announced his results at the Académie des sciences: Anthrax, a terrible livestock disease, one of the biblical plagues of Egypt, was controllable. Defeatable.

From here Pasteur went on to develop an anti-rabies vaccine, a project of personal importance: He remembered a rabid wolf ravaging his childhood hometown, causing the deaths of eight people. To do this he worked with pathogens that were slower-acting and far harder to find: viruses. Even here Pasteur found a way to build immunity, despite being unable to directly observe the rabies virus. In the first trial he vaccinated a young Alsatian boy, Joseph Meister, who had been bitten by a rabid dog. The use of a human test subject was far ahead of schedule, but events overtook Pasteur as the clock ticked down on the boy’s life.


It was still an awful risk. But Pasteur knew that doing nothing was a death sentence. He had surmised that the vaccine could take effect within the month-long incubation period; for it to do so, he would need to administer it before any symptoms appeared. With a feeling of dread, Pasteur began administering small doses of rabies in the full knowledge he could be making the situation worse. But, after 12 rounds and weeks of sleepless nights, success.

Had Pasteur developed vaccination alone, we would still remember him as a giant of medical science. But this was just the culmination of a series of breakthroughs without which the modern world would be inconceivable.

The germ theory of disease, the technique of pasteurization, an understanding of sepsis and clinical cleanliness, the technique of vaccination, applied to rabies and anthrax, the whole universe of microorganisms and their myriad interplays: It adds up to a legacy of breakthrough after breakthrough, formed in spartan conditions with rudimentary equipment as their progenitor shuffled back and forth between practical problems and high science. Yet these big ideas transformed the human frontiers of knowledge, medicine, health, and even morality.

Pasteur exemplifies a model we have come to take for granted: Great progress improves both our knowledge and our technology, initiating a virtuous circle. But knowledge and technology are always in an arms race against the problems they face. Pasteur gave us a decisive lead.

And yet, how many Pasteurs are working today? That is to say, not how many people are working on medical research or microbiology, but what work therein has or could have equivalent impact? One view suggests that Pasteur stands at the beginning of a generalized increase in the production of ideas. But another argues something else: Yes, we have an increase, but within it there are fewer ideas with the significance of Pasteur’s. Thinking may have become easier, but thinking big is as challenging as ever.


Between the late 19th century and the present, human life expectancy underwent a revolution, underpinned by a series of astonishing advances in medical science and public health. The first real pharmaceutical product, the drug Salvarsan, based on a compound synthesized in 1907, offered a cure for that old scourge syphilis. Three and a half decades later came an even bigger breakthrough: penicillin, discovered in 1928 but only mass-produced in the early 1940s, opened the age of antibiotics and mass medicine.

For the first time, step changes in medical capability were regular, even expected. We entered a “golden age” of medicine. In the words of author and doctor Seamus O’Mahony, “Medicine, which for most of its history had very limited powers, was quite suddenly marvelously, miraculously effective. There was a golden age of about fifty years, from the mid-1930s to the mid-1980s, when almost anything seemed possible.” Thanks to the discoveries of this period we can kill bacteria and conduct open-heart surgery, transplant organs and produce babies in vitro, regulate pregnancy with a pill and keep people alive on the brink of death in intensive care. And we can eliminate — or at least control — diseases from polio to smallpox.

At the same time, life expectancy, which had remained roughly stagnant for most of human history, improved. Medical progress played a role, but another big idea — public health policy — also came to the fore. Mass public health improvements were key, particularly the establishment of urban sanitation infrastructure. Private indoor toilets became more widely available. The transition to cars took horses, and their dung, off the streets. Hospitals multiplied in number and grew cleaner. Doctors became more knowledgeable, new drugs entered the market, and regulated, longer-lasting products like canned food altered patterns of consumption. Sanitation, better housing, nutrition, cleaner cities and hospitals, better healthcare, safer streets: It was an extraordinary change.

Improvements in life expectancy continued throughout the latter half of the century, if at a markedly slower rate. Whereas previous gains had been concentrated in saving the very young, profoundly and happily changing family life, as the century wore on improvements shifted to the elderly. By 2000, the rate of progress had roughly halved, but progress there was. Until now.

In the UK, the U.S., France, Germany, and elsewhere, we are seeing the first signs that, for complex reasons, life expectancy is no longer improving. Indeed, the U.S. saw consistent falls between 2015 and 2020, the biggest since 1915–1918, the years of the First World War and the Spanish flu pandemic. In Britain a marked slowdown started in 2011, with no progress made since 2015. At best, Britons are seeing the slowest improvements since the Second World War. The impact of the coronavirus pandemic will only push these numbers down further. At the frontier, something is going wrong with Pasteur-style breakthroughs. The drugs don’t work; at least, not like they used to.


The discovery of drugs appears to obey a rule christened Eroom’s Law. In a nutshell, the number of drugs approved for every billion dollars’ worth of research and development (R&D) halves every nine years. This pattern has remained largely consistent for over 70 years. Since 1950, the cost of developing a new drug has risen at least 80-fold. A Tufts University study suggests that the cost of developing a drug approved by the U.S. Food and Drug Administration (FDA) rose at least 13-fold between 1975 and 2009. By the mid-2000s it was $1.3 billion per drug. In the 1960s, by contrast, the cost per drug was around $5 million. Development timelines, at least pre-Covid, have likewise lengthened. Eroom’s Law means that it takes ever more effort and money to develop new drugs: Achieving a pharmaceutical breakthrough keeps getting harder.

Eroom is not a person: “Eroom” simply reverses the name Moore, as in Moore’s Law (the observation that the number of transistors on a chip doubles roughly every two years, driving an exponential increase in computational power). If anything epitomizes technological optimism, it is Moore’s Law. Eroom’s Law, the deep pattern of pharma, works the other way around: Advances don’t compound and get easier; the challenges do.
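To get a feel for how quickly a nine-year halving compounds, consider a back-of-the-envelope sketch (a minimal illustration, not from the article; the 60-year window from 1950 to 2010 is an assumption chosen to match the figures above):

```python
# Back-of-the-envelope sketch of Eroom's Law: inflation-adjusted drugs
# approved per billion R&D dollars halve roughly every nine years.

HALF_LIFE_YEARS = 9  # the halving period cited above

def efficiency_remaining(years: float) -> float:
    """Fraction of the starting R&D efficiency left after `years` years."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# Compound the decline over the six decades from 1950 to 2010:
remaining = efficiency_remaining(60)   # ~0.0098, i.e. about 1 percent left
cost_multiplier = 1 / remaining        # ~102x more spending per approved drug
print(f"Implied cost increase per drug: ~{cost_multiplier:.0f}x")
```

A roughly hundredfold implied increase sits comfortably alongside the “at least 80-fold” rise in development costs cited above.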


Even in the 1980s there was a scarcity of new drugs. There was a sense, which has only intensified since, that the golden years had ended, that we were, to quote Seamus O’Mahony again, in “the age of unmet and unrealistic expectations, the age of disappointment.” Drug discovery is now concentrated in two areas, rare diseases and chronic conditions like high blood pressure, both of which offer steady, predictable returns. Serious but common diseases have languished, while the challenge presented by something like the common cold remains unanswered. At the same time, pharmaceutical research consistently trends toward losses, which doesn’t bode well for the future.

This is all deeply strange. It bucks the basic wisdom that a massive escalation in R&D should produce massively escalating returns. And the trend persists despite numerous advances in the underlying scientific and technological toolbox. Over the 1980s and 1990s, combinatorial chemistry delivered an 800-fold boost in the number of drug-like molecules that could be synthesized per chemist. Molecule libraries, the basic building blocks of pharmaceutical research, grew vastly. DNA sequencing improved more than a billion-fold from its beginnings in the 1970s. Such advances are bolstered by powerful new fields like computational drug design. Health-related research now consumes 25 percent of all R&D spending, up from 7 percent in the 1960s. Science, technology, and economics all, on the face of it, imply that drug discovery should be speeding up and getting cheaper.

Eroom’s Law bucks the pattern that began with Pasteur. It suggests a steepening challenge that connects to the slowdown in life expectancy improvements. Every year it takes more money, researchers, time, and effort to achieve breakthroughs. Each and every one of us is affected — our families, our friends, our basic quality of life. When it’s our turn, or the turn of our loved ones, to lie on the hospital bed, these questions feel all too real. Understanding why progress is so uneven has never been more important.

Nowhere is that truer than in the struggle to defeat cancer. In developed countries, 50 percent of people will be diagnosed with cancer in their lifetimes; worldwide, over 17 million patients are diagnosed each year, and this figure is expected to rise to 27.5 million by 2040. Nonetheless, until recently oncology had only three main treatments — surgery, radiation therapy, and chemotherapy: cut, burn, and poison. Many expensive drugs have a bad track record. A study published in the Annals of Oncology concluded that of 47 drugs funded out of a special NHS funding pool, only 18 increased survival rates, and even then by just three months; the rest did essentially nothing but came with a host of side effects.

But the news here is hopeful. We have, perhaps, a textbook big idea in the form of immunotherapy: treatments promising to revolutionize the attritional “war on cancer.” Some researchers even compare it to the discovery of penicillin: a turning point that will forever transform the field and change countless lives.

Immunotherapy is based on a sophisticated understanding of the immune system’s molecular biology, homing in on T-cells, a kind of white blood cell. Over the last 30 years, researchers have realized that cancer plays tricks with the T-cells, using the immune system’s own safety checks against it. Cancer essentially fools the body into not attacking it. If scientists could negate cancer’s deceptions, the T-cells (and others) could march into battle unimpeded. Another technique samples someone’s T-cells, re-engineers them to attack their specific, personal cancer and then introduces them back into the patient — these cells are called CAR-Ts (chimeric antigen receptor T-cells). They too hold great promise.

When the 2018 Nobel Prize in Physiology or Medicine went to Jim Allison and Tasuku Honjo, two pioneers of immunotherapy, it echoed the 2015 announcement that former U.S. President Jimmy Carter had received experimental immunotherapy for cancer and beaten the disease. The arrival of immunotherapies suggests that we are moving up the problem ladder, finally addressing more causally complex, biologically protean conditions, having already “solved” simpler ones.

There is a “but.” To the outsider it seems like a wonderful breakthrough. In fact, the story is much longer and more difficult than that. Immunotherapy’s long and troubled gestation and continuing struggles indicate the challenge of big ideas today. We may be getting there; but the road has been longer and rougher than anyone hoped.

For decades cancer immunotherapy was considered a dead end. First mooted in late 19th-century New York, it has a history of missed opportunities and leads not taken. Most scientists considered it absurd that the immune system could fight cancer; they didn’t believe cancerous cells would ever be recognized as foreign invaders.

Nonetheless, work went forward. False starts were common. One resulted in a 1980 Time headline that heralded a still unproven immunotherapy as “penicillin for cancer.” When it failed to live up to the hype, faith in the underlying principles was shaken. Despite some stunning data, trial outcomes were uneven. Funders wanted unambiguous results. Even true believers began to wonder.

Meanwhile cancer research ballooned, consuming eye-watering amounts of money. Over the last 50 years, probably no single research endeavor can match it for funds spent. In 1971 Richard Nixon launched a “war on cancer” with the National Cancer Act. At the start of his “crusade,” a cure was thought easily achievable: another cycle in a deep pattern of progress, following naturally from successful treatments for childhood leukemia. Some researchers even believed it might be accomplished by 1976, just in time for America’s 200th anniversary.

Yet although there have been improvements in care, the kind of wholesale leaps in progress found in the medical golden age have not occurred. This is not in any way to diminish the extraordinary work of researchers and their institutions; on the contrary, it highlights the colossal challenge they face.

Getting to the point of a breakthrough required major advances in the understanding of cancer and immunity, and billions of dollars of National Institutes of Health (NIH) funding. The first immunotherapy was approved by the FDA in 1992, but even then it remained a fringe treatment. Until the fundamental mechanisms were understood, no pharmaceutical company would take a meaningful risk. Immunotherapy’s poor record and the risk aversion of big pharma meant that getting trials approved was an immense challenge. While the NIH and others continued funding immunotherapy at the margins, other avenues were prioritized.


The point is that immunotherapy is no sudden breakthrough. Like other success stories such as mRNA vaccines, it has taken decades upon decades of blind alleys, missed opportunities, failed careers, and cranks grinding away at the margins of science, not to mention, in total, truly monumental amounts of research funding and effort. Compare this with the breakthroughs of Pasteur, who worked in a basic lab with a couple of assistants. Fleming, Florey, and Chain needed a university department and a research hospital; cancer has required tens of thousands of researchers spread across the world’s cutting-edge biomedical research centers.

And we’re still not there. Talk to those close to the research and they will tell you that clinical trial results are patchy: Immunotherapy seems to work for some cancers and patients but not others. Doctors on the front line are often less excited than the companies developing the drugs. And although over 2,000 immunotherapies were in trials or the preclinical phase as of 2019, this proliferation creates a new problem: There won’t be room for all those therapies on the market, and the investment boom could once again turn to bust. Immunotherapy prices, moreover, are astronomical: The best-known examples usually cost hundreds of thousands of dollars, and Novartis’ CAR-T therapy costs $475,000 per patient. In the short to medium term, it is debatable how widespread a cure it can become. Yes, immunotherapy is hugely significant, an attack on cancer at the medical frontier. But to pretend there aren’t problems, to ignore its attritional gestation, is to misunderstand how medical breakthroughs happen today.

The advent of cancer immunotherapy is truly welcome and inspiring. But it doesn’t buck the pattern. It describes the pattern. It isn’t an exception to the breakthrough problem; it is part of it.

From Pasteur to Pfizer, the unparalleled arc of medical and public health advances in the last two centuries has taught us to expect miracles. Quietly, however, these have been getting more challenging. This is not to denigrate figures like Pasteur or the extreme difficulties they faced. After all, in the face of ignorance, scarce resources, poor tools, and little theory, he arguably went further and faster than anyone before or since. That’s the point. Somewhere out there is another Pasteur; probably many, many Pasteurs. But it’s inconceivable that they alone could have the equivalent impact, despite having much better conditions, bigger teams, more knowledge, and vastly improved tools.

Eroom’s Law is far from the only example. We face a world where the remaining problems — and the new ones — are of a higher order. At a certain point, endeavors hit a breakthrough problem, where despite the improved capacity for making big new ideas happen, they don’t.

There is nothing inevitable, it seems, about a future rich with big ideas.


Michael Bhaskar is a writer, researcher, and cofounder of Canelo Digital Publishing. He is the author of “Human Frontiers: The Future of Big Ideas in an Age of Small Thinking,” from which this article is adapted.
