How Secure Is Our Data, Really?

Economist Michael Kende on how applying the economics of cybersecurity can prevent data breaches and increase digital trust.
By: Michael Kende

There are only two kinds of companies — those that have been hacked, and those that will be.
—Robert Mueller, FBI director, 2012

In 2016, for a museum display, Ford fused the left-hand side of a 1965 Ford Mustang with the right-hand side of a 2015 Ford Mustang. The display was meant to demonstrate how much cars had changed over the 50-year span.

Getting into any car in the 1960s was a leap of faith. There were no safety standards or tests. Whereas, for instance, the 1965 Ford Mustang debuted a light in the glove box, the 2015 version has an airbag in the glove box door to protect the passenger’s knees, not to mention options for crash avoidance, blind spot detection, and lane-departure warning systems. These improvements in safety came about partly due to regulation and partly due to competition to meet consumers’ increased demands and expectations. The resulting increase in safety is striking: Measured per vehicle mile traveled, there were almost five times as many fatalities in 1965 as in 2015.

This article is adapted from Michael Kende’s book “The Flip Side of Free: Understanding the Economics of the Internet.”

Today, putting our personal information into a website is also a bit of a leap of faith. Like the 1965 Mustang, the internet was not originally designed with security in mind. It was designed as a distributed system, connecting multiple networks, with no central core in which to place security. Instead, the key was seen to be trusting those using it, which was easy in the early days when it was used to share resources among academics and researchers who knew one another. According to one of the early pioneers, MIT scientist David D. Clark, “It’s not that we didn’t think about security. We knew that there were untrustworthy people out there, and we thought we could exclude them.” That option is clearly not available today, but, unfortunately, many services still have not prioritized security, sacrificing it for speed to market and cost savings.

Take, for example, Ashley Madison, a website with the express purpose of enabling married people to cheat with one another. Its tagline is, “Life is short. Have an affair.” To be clear, I was never a client — which I can easily prove, as it turns out, because the website was hacked in 2015 and all of the 37 million users’ names were put online, where they can be searched. I use this example because it illustrates many of the problems with today’s security levels and the corresponding risks.

The website recognized that married people would want to keep their affairs quiet, marketing the ability to have discreet relationships. In the end, it did not really deliver on either part of its marketing slogan. The data revealed that there may not have been that many relationships started through Ashley Madison (many of the women’s profiles were apparently fake, there to attract men to the site); and the breach itself revealed that the company could not keep anything discreet, either. Of course, the company cannot guarantee a relationship, but it at least should have paid more attention to security.

The aftermath of the breach revealed a number of shoddy and fraudulent practices at the company, in addition to the use of female bots to attract male customers. The company advertised its security with a number of security seals and awards displayed on its website, but all of them were made up. Users were offered the option to pay $19 to delete their information, to be able to fully hide their tracks — but Ashley Madison did not actually delete the data, which was released in the hack. Indeed, this deceit may have actually motivated the hackers to release the data out of an odd sense of moral outrage.

The aftermath also inflicted untold costs on those involved. First and foremost, the users with a desire for, if not practice of, discreet relationships were outed. Some were blackmailed, some divorced, and some tragically died by suicide. The CEO, who was forced out, had his own extramarital affair revealed through the release of stolen emails. The company, predictably, was sued in a class-action suit for $578 million and faced government sanctions. But unlike many of the relationships harmed in its wake, the company is still in business and in fact claims an increase in business.

There is, of course, a morality tale to be told in dividing the blame among the users and their usage of the site, the site itself, and the hackers. For our purposes, the interesting point is that a company whose main selling point was discretion was not able to protect its data, and users could not protect themselves from the breach. And the released information indicated a number of mistakes by the company that led to the breach, some of which it knew about and ignored.

Stepping back, a 2019 study showed that 95 percent of such data breaches could have been prevented. There are two main causes of preventable breaches.

First, many breaches attack known vulnerabilities in online systems. We are all used to updating the operating system on our computer or phone. One of the reasons is to patch a defect that could allow a breach. But not all of us install every patch all of the time, and that leaves us exposed. Organizations operating hundreds or thousands of devices with different systems connecting them may not devote enough resources to security or may worry about the compatibility of upgrades, and this leaves them exposed to hackers searching for systems that have not been updated. These challenges were exacerbated when employees worked from home during pandemic restrictions, often on their own devices and on less protected networks.

Second is the phenomenon known as social engineering, in which an employee is tricked into providing their password. We have all received phishing emails asking us to log into a familiar site to address an urgent matter. Doing so allows the hacker to capture the user’s email address or user name and the associated password. The hacker can then use those credentials directly to enter the real version of the website, or find out where else the user has accounts and hope the same login details were reused — which, human nature being what it is, is quite common. These phishing attacks highlight the asymmetric advantage held by the hackers: They can send out millions of emails and need just one person to click on the wrong link to start their attack.

Of course, if 95 percent of breaches are preventable, that means 5 percent are not. For instance, though many breaches exploit known vulnerabilities in systems, every vulnerability is by definition unknown before it is discovered. Such a vulnerability, known as a zero-day vulnerability, is valuable to hackers because it cannot be defended against, and zero-days are often hoarded or sold, sometimes back to the company responsible so that it can create a patch.

In a zero-day attack, although a breach cannot be prevented, the impact can be mitigated (as is the case for any breach, regardless of the cause). The easiest way, of course, is not to store data whose exposure would be costly. For instance, the Ashley Madison breach was made worse by the release of the details of users who had paid to be deleted. But ultimately, data is essential to the operation of an online service, and some must be stored. The data that is stored does not have to be usable by an attacker, however. Encryption — that is, scrambling the data with a key so that it can only be unscrambled by someone holding that key — is virtually impossible to reverse if done correctly. Yet in one analysis of breaches, only 1 percent of the organizations breached reported that their data had been encrypted, rendering it of no use to the hackers.
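To make that concrete, here is a minimal sketch, in Python, of encrypting a record before storing it. It uses the Fernet recipe from the widely used cryptography package; the record and field names are hypothetical, and a real system would also need careful key management, which is where encryption most often goes wrong in practice.

```python
# A minimal sketch of encrypting data at rest: a stolen copy of the stored
# ciphertext is useless without the key. The record shown is hypothetical.
# Requires the third-party "cryptography" package (pip install cryptography).
import json

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a key-management system, separate from the data
fernet = Fernet(key)

record = {"user": "user@example.com", "paid_for_deletion": True}
ciphertext = fernet.encrypt(json.dumps(record).encode("utf-8"))

print(ciphertext)  # this scrambled blob is what would sit in the database

# Only someone holding the key can reverse the process.
print(json.loads(fernet.decrypt(ciphertext)))
```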

This, then, is the economic paradox at the heart of cybersecurity. The victims are not abstract or distant: They are the companies’ own customers. The economic costs of a breach can include harmed corporate reputation, lost customers and sales, lower stock price, lost jobs for executives, significant costs to repair the damage, and lawsuits. Yet the number of preventable breaches keeps increasing, along with the amount of data breached, and executives and their boards have not all been fully shaken out of their complacency yet. What can explain this?

Typically, when there is an economic paradox such as this, when one cannot understand the marketplace outcomes, one looks for a market failure. A market failure is a glitch in a market that its participants cannot, or will not, sort out on their own, such as pollution. A market failure can only be addressed by a third party, typically, but not always, the government. This brings us to the economics of cybersecurity, in which there are three potential market failures, and third-party solutions are needed.

Public Goods

The very strength of the internet model masks an underlying weakness. Internet protocols are open standards and often rely on open-source software, which anyone can use without payment. This has all the features of an economic public good.

Take the example of public broadcast television as a public good. Once the signal is transmitted, anyone with a television can watch the channels. Further, my watching does not take away the ability of anyone else to watch. In other words, it is free to watch the channel, and there is no impact on anyone else by doing so. Public goods such as this have many great qualities, with many social benefits. However, they are not just public because anyone can use them; in a sense, they are also public because they are typically facilitated or financed by government, even if provided by private companies. This is because of the free rider problem with public goods.

Think about what would happen if a for-profit company decided to offer a public broadcast channel, with educational and cultural programming and no advertising. It starts broadcasting and asks people to pay. People would quickly realize that they could receive the channel even if they did not pay, so long as others paid, and some would begin to free ride: watch without paying. This limits the incentive of for-profit companies to offer public goods even when audiences value them, and that is a market failure. As a result, public broadcasters in many countries, such as the BBC in Britain, charge an obligatory license fee to every household with a TV to finance the cost of the broadcasts.

The development of open-source software is also a public good. Once it is done by someone, it is available to all, and this can lead to free riding. This is not to say that the development of open source is not an incredible achievement of researchers, engineers, and companies volunteering together to build software for all; in many cases it has fewer defects than proprietary software. However, it is possible for pieces to slip through the cracks.

A particular downside is a lack of resources to invest in improving the software, including for security purposes. For example, in 2014, the “Heartbleed bug” was discovered in OpenSSL, an open-source software library for securing online transactions that is used by many large websites, including Google, and companies making servers, including Cisco. The bug, it turned out, had made users vulnerable to hackers for the previous two years. It was viewed as potentially catastrophic, estimated to impact up to 20 percent of secure web servers, and the cost of identifying the risks and addressing them was estimated at $500 million.

In the aftermath, it was quickly determined that the initiative developing OpenSSL had been receiving only $2,000 a year in donations and was maintained by just one full-time employee and a few volunteers. The Core Infrastructure Initiative was quickly set up with funding from many of the major software companies to fund OpenSSL and other similarly critical open-source initiatives. Although the outcome could have been much worse had the bug been more widely exploited, the episode shows the mismatch between the importance of the software and the resources available to maintain it. It also highlights many of the positive aspects of open source, which should not be dismissed: the willingness of volunteers to work on the software, the responsibility of the community that found and reported the bug, and the quick reaction once the underlying lack of resources was identified.

Information Asymmetry

The OpenSSL story highlights another market failure in cybersecurity: As consumers, we have very little way of knowing how securely our software, devices, and systems are created. This is known generally in economics as asymmetric information, a market failure that comes up often in our lives. It comes up whenever one side of a potential transaction has more information about the transaction than the other side.

When you buy a used car, the seller knows more about its condition than you ever could; when you buy car insurance, you know more about your driving habits than the insurer does; and at a restaurant, the chef knows more about the quality of the food and the kitchen’s hygiene than the diner does. By the time the truth is revealed, it might be too late.

This market failure impacts the willingness to buy or sell a good or service. Think about the price of car insurance in a competitive market. Companies have to set the yearly premium and the deductible that the owner pays in case of an accident. What happens if a company offers one plan, with a premium and deductible aimed at the average driver? Bad drivers will happily take that plan because it is a good deal given their driving history. Good drivers, on the other hand, will find it too expensive and go elsewhere. So the insurance company will be serving more bad drivers than good and will have to keep raising the premium and/or the deductible until it is left with only the riskiest drivers. This type of situation is sometimes known as a death spiral.

That insurance company would prefer to elicit a credible signal that separates the good drivers from the bad. A signal is credible if only the party wanting to share positive information — the good driver — can afford to send it. For instance, a car insurance company can offer two plans: one with a high premium and a low deductible, and another with a low premium and a high deductible. Someone who knows they are a bad driver is unlikely to want to pay a high deductible after every accident, but a good driver can afford to: They save money on the low annual premium and only pay the high deductible in the rare case that they cause an accident.
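To see why the two plans separate the two types of driver, here is a minimal sketch in Python. All of the numbers (premiums, deductibles, and accident probabilities) are illustrative assumptions, not industry figures.

```python
# Hypothetical insurance plans and driver types; all numbers are illustrative
# assumptions chosen only to show the self-selection mechanism.
PLANS = {
    "high_premium_low_deductible": {"premium": 1200, "deductible": 250},
    "low_premium_high_deductible": {"premium": 600, "deductible": 2000},
}

DRIVERS = {"good": 0.05, "bad": 0.40}  # assumed annual probability of causing an accident

def expected_annual_cost(premium: float, deductible: float, p_accident: float) -> float:
    """Premium is paid every year; the deductible is paid only if an accident occurs."""
    return premium + p_accident * deductible

for driver, p_accident in DRIVERS.items():
    costs = {
        name: expected_annual_cost(**plan, p_accident=p_accident)
        for name, plan in PLANS.items()
    }
    for name, cost in costs.items():
        print(f"{driver} driver, {name}: expected annual cost ${cost:,.0f}")
    print(f"-> {driver} driver self-selects into: {min(costs, key=costs.get)}\n")
```

Running it shows that the good driver’s expected cost is lowest under the low-premium, high-deductible plan, while the bad driver is better off paying the high premium to avoid frequent deductibles, which is exactly the self-selection described above.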

But sometimes there is no way to make a signal credible. Your toothpaste may say it has fluoride in it to help fight cavities, but how do you know for sure? How do you know if your airbag is going to work in a crash? You can test drive a car, but you can’t test the airbags. Will your new hair dryer really shut down if it falls in your bathtub? And restaurant reviewers cannot know how hygienic the kitchen is when they are sitting in the dining room.

In some cases, private organizations will do the testing on behalf of consumers: Think of Consumer Reports for many products, the European New Car Assessment Program for car safety, or UL (formerly Underwriters Laboratories) for electrical appliances. But often, the solution in these cases of market failure is government. The government can set standards; it can provide consumer protection for false claims; it can test products itself; and it can impose liability in case of failure (as discussed further ahead).

This brings us to cybersecurity. The OpenSSL case in one way is not about asymmetric information per se; it was an honest mistake that even the hard-working volunteer developers did not know about. On the other hand, until it was investigated, few realized how much trust they were putting into software supported with so few resources. The Ashley Madison case was willful: Users in general relied on false claims of security and in particular could not know that the records they had paid to have deleted were not, in fact, deleted. The company knew it, but the users did not.

The bigger issue is that even companies that have put significant resources into cybersecurity have trouble providing a credible signal that they have done so. As users about to choose a critical service such as an online bank, how can we determine which providers have really put resources into protection and which are simply claiming to have done so? Prior to the announcement that all three billion Yahoo! user accounts had been hacked, how could the average user have known that they were at greater risk using Yahoo! than Gmail?

One source of cybersecurity ratings relates to insurance. Cybersecurity insurance is potentially a significant market, given companies’ exposure to hacks. However, insurers have difficulty pricing cyber-risk policies because of the lack of information about attacks, exposure, and risk. Companies are emerging to help the insurance industry by rating the exposure to cyberattacks of organizations seeking coverage. For example, one initiative by the insurance industry helps its customers identify products and services that lower cybersecurity risks. Such a joint insurance initiative is relatively rare, but it is interestingly similar to one that the industry undertook in the 1950s to increase road safety.

At the end of the day, if organizations cannot make credible claims — certified by third parties — of their cybersecurity levels, and know that none of their competitors can either, why should a company fully invest in cybersecurity? Users will not be able to test which services have the best security in any case, so why bother? That is the ultimate market failure with asymmetric information in this situation: There is no guaranteed upside of investing more in cybersecurity, so the investment will not be sufficient.

It is made worse by the fact that there is not enough downside to underinvesting, as we see next.

Negative Externalities

An externality is another example of market failure. It comes up when an economic activity has an impact on others — negative or positive — that is not reflected in its cost. The result is inefficient because too much, or too little, of a good or service is produced when the full social impact is not considered.

If you are trying to sell your house, the state of your neighbor’s property can have an impact. Your neighbor mowing the lawn, trimming the hedges, painting the house, or throwing away junk from the backyard can all make your house more attractive. On any given day, though, your neighbor will not factor your house’s value into his or her decision to clean up. That is the impact of an externality. Externalities also arise more generally with pollution: Dumping waste into a river imposes costs on those downstream, not on whoever does the dumping.

Typically, if moral suasion does not work — either with your neighbor or with a chemical plant — third-party action is required. For neighbors, that could be a homeowners’ association that can set and enforce standards. Often, though, government action is needed to remedy externalities. The government can set minimum standards on pollution or, in cases where any amount is too much, such as leaded gasoline, ban something outright.

In economic terms, it can also impose a tax at least equal to the cost imposed by the activity — for instance, taxing a fuel that creates pollution or taxing the pollution itself. This forces the producer to internalize the externality: By accounting for the social cost on top of its private cost, it produces less of whatever is causing the externality.
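As a rough numeric illustration of how such a tax internalizes the externality, here is a minimal sketch in Python. The price, cost curve, and external cost are invented for the example, not taken from any real market.

```python
# A toy Pigouvian-tax example with invented numbers. The producer keeps
# expanding output as long as the price covers its marginal cost plus any tax.
PRICE = 10.0                  # assumed market price per unit
MARGINAL_EXTERNAL_COST = 4.0  # assumed damage imposed on others per unit produced

def marginal_private_cost(quantity: int) -> float:
    # Assumed rising marginal cost: each extra unit costs the producer a bit more.
    return float(quantity)

def chosen_output(per_unit_tax: float = 0.0) -> int:
    q = 0
    while PRICE >= marginal_private_cost(q + 1) + per_unit_tax:
        q += 1
    return q

def socially_optimal_output() -> int:
    # The social optimum also counts the damage each unit imposes on others.
    return chosen_output(per_unit_tax=MARGINAL_EXTERNAL_COST)

print("Output with no tax:              ", chosen_output())                        # 10
print("Output with tax = external cost: ", chosen_output(MARGINAL_EXTERNAL_COST))  # 6
print("Socially optimal output:         ", socially_optimal_output())              # 6
```

The tax makes the producer’s own decision coincide with the socially optimal one; the analogous idea for data breaches is liability that makes the breached organization bear the costs that currently fall on its users.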

Data breaches can cause significant negative externalities because typically the organization that was breached does not bear the full cost of the breach. Ashley Madison faced a massive class-action lawsuit, but in the end it was settled for just $11.2 million (before legal fees), with each exposed user eligible for up to $3,500 upon submitting valid claims. For instance, those who paid $19 to have their accounts deleted were eligible to have that amount refunded because their accounts were not actually deleted, but they received nothing more for the impact on their personal lives. In this light, perhaps it is not so surprising that the website is still in business.

Of course, if you do not expect to bear all the costs of a breach, then you may not make every effort to prevent it, leading to a market failure of preventable data breaches whose significant costs are borne by innocent parties. Users in particular are usually left out of the picture. For instance, even if there is no immediate cost, sometimes a data breach can lead to identity theft in the long run, with little to no compensation. Even if users are able to recover money, the default is that they have to sue and show specific harm, and it is hard to link identity theft to a particular data breach.

Much of this situation arises from the fact that software vendors are not liable for damages caused by bugs or vulnerabilities. Take the example of a password manager, a program that creates a unique and complicated password for each site a user visits and then automatically fills it in when the user logs into that site. This is a good way for users to reduce risk, because password reuse is common and allows hackers who have stolen one password to enter multiple sites. By using a password manager, however, users are putting all their eggs in one basket: If the password manager is breached successfully — and there has been at least one breach scare — then the hacker can potentially obtain a user’s master password, which unlocks all the rest of his or her passwords.
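To illustrate the basic idea (not any particular product’s implementation), here is a minimal sketch in Python that generates a strong, unique password per site using the standard library’s secrets module; the site names and vault are hypothetical, and a real manager would encrypt the vault with a key derived from the master password.

```python
# A simplified sketch of the core idea behind a password manager: one strong,
# randomly generated password per site, so none is ever reused. The "vault"
# below is hypothetical; a real manager would encrypt it with a key derived
# from the user's master password -- the single point of failure noted above.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Generate a hard-to-guess password using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

vault = {site: generate_password() for site in ("bank.example", "mail.example", "shop.example")}

for site, password in vault.items():
    print(site, password)
```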

The cost to the user could be enormous, exposing them to theft, blackmail, and more. The cost of a breach to the password manager? Potentially not much at all. A review of the terms and conditions of a number of these products shows that they warn users — in all capital letters — that they may receive, at most, a refund of the cost of the software.

To reduce these externalities, the impact of a data breach must be internalized, and government action is the best, if not the only, way to achieve that. Laws that shift protection toward users and third parties harmed by a breach would help, particularly in cases of fraud or negligence. This is clearly what happened with safety in the auto industry.

One might argue that this will raise the cost of providing services, particularly free online services, but there has to be a balance. A password manager, health website, and financial account all house important, sensitive data that deserve to be protected. And though that protection will cost money, the cost will not just protect the company and its users from a breach, but also deliver broader social benefits by increasing online trust — a nice positive externality. This is particularly important in cases in which the users cannot assess the cyber-risks themselves.

Conclusion

The car industry has come a long way in the past 50 years with respect to safety. While a number of features such as airbags had to be mandated due to industry resistance, today there is competition over safety features. Cars now feature not just front airbags but side airbags, overhead airbags, airbags to protect passengers’ knees, and even an external airbag to cushion pedestrians who strike the windscreen. In addition to protecting passengers if a crash occurs, there are features to help avoid crashes in the first place.

To help us find out more about safety, there are not just mandates, but tests. We can learn the ratings of cars we are about to buy, and car manufacturers unhappy about their safety ratings can improve them. There is also liability. In the wake of defects that resulted in a number of deaths, one airbag manufacturer recently went bankrupt after paying for recalls and settlements to victims’ families. This provides an incentive, even after the tests, to ensure that quality is maintained and defects are promptly reported and repaired.

This is the shift that must take place for cybersecurity.

First, there are tools that can increase security. Companies providing technology can adapt security features to human behavior, rather than hoping that humans adapt themselves to security features. That can include prompting better passwords, nudging users to update their software or making updates automatic, and automatically encrypting data on devices and in transit. Much of this is already starting to occur; it should be encouraged and continued.

Many of these features will come with commercial software; at the same time, developers of the open standards at the heart of the internet will have to continue addressing security issues, and support for critical open-source efforts will need to be supplemented as needed. This will help address the public-goods aspect of these standards and software.

Third parties can play a significant role in developing standards, conducting tests, and providing safety ratings to guide us in choosing the services and devices that we use online. We see this starting with ratings that help insurance companies assess risk, which need to extend to ratings that assist users. Recently, the Swiss Digital Initiative introduced its Digital Trust Label to provide users with more information about the security, data protection, and other elements of an online service. This will help make information about security less asymmetric.

And finally, there is a role for governments, which can pass laws on data protection, provide mandates, ensure that data breaches are responsibly disclosed, and, where needed, explore when and how to impose liability on critical software so that organizations internalize the costs of security breaches to a greater extent.

These efforts will clearly cost time and money to implement and do not absolve all of us from learning — and implementing — safe online practices. However, the costs of not doing this are high as well — not just on the organizations and users directly impacted by a breach, but more broadly on digital trust, which is critical as more of our lives migrate online.


Michael Kende is a Senior Fellow and Visiting Lecturer at the Graduate Institute of International and Development Studies, Geneva, a Senior Adviser at Analysys Mason, a Digital Development Specialist at the World Bank Group, and former Chief Economist of the Internet Society. He has worked as an academic economist at INSEAD and as a U.S. regulator at the Federal Communications Commission. He is the author of “The Flip Side of Free,” from which this article is adapted.
