2020: The Year of Robot Rights

A once-unthinkable concept is gaining traction and deserves our attention.
At what point might a robot be held accountable for the decisions it makes or the actions it initiates?
By David Gunkel

Several years ago, in an effort to initiate a dialogue about the moral and legal status of technological artifacts, I posted to Twitter a photograph of myself holding a sign that read “Robot Rights Now.” Responses to the image were, as one might imagine, polarized, with advocates and critics lining up on opposite sides of the issue. What I didn’t fully appreciate at the time was just how divisive an issue it is.

For many researchers and developers slaving away at real-world applications and problems, the very notion of “robot rights” produces something of an allergic reaction. Over a decade ago, roboticist Noel Sharkey famously dismissed the idea as “a bit of a fairy tale.” More recently, AI expert Joanna Bryson argued that granting rights to robots is a ridiculous notion and an utter waste of time, while philosopher Luciano Floridi downplayed the debate, calling it “distracting and irresponsible, given the pressing issues we have at hand.”

And yet, judging by the slew of recent articles on the subject, 2020 is shaping up to be the year the concept captures the public’s interest and receives the attention I believe it deserves.


The questions at hand are straightforward: At what point might a robot, algorithm, or other autonomous system be held accountable for the decisions it makes or the actions it initiates? When, if ever, would it make sense to say, “It’s the robot’s fault”? Conversely, when might a robot, an intelligent artifact, or other socially interactive mechanism be due some level of social standing or respect?

When, in other words, would it no longer be considered a waste of time to ask the question: “Can and should robots have rights?”

Before we can even think about answering this question, we should define rights, a concept more slippery than one might expect. Although we use the word in both moral and legal contexts, few of us can say precisely what a right entails, and this lack of precision creates problems. One hundred years ago, the American jurist Wesley Hohfeld observed that even experienced legal professionals tend to misunderstand rights, often relying on contradictory or insufficient formulations in the course of a decision or even a single sentence. He therefore created a typology that breaks rights down into four related aspects, or what he called “incidents”: claims, powers, privileges, and immunities.

His point was simple: A right, such as the right one has to a piece of property like a toaster or a computer, can be defined and characterized by one or more of these incidents. It can, for instance, be formulated as a claim that the owner has over and against another individual. Or it can be formulated as an exclusive privilege for use and possession that is granted to the owner. Or it can be a combination of the two.

Basically, rights are not one kind of thing; they are manifold and complex. And while Hohfeld’s typology describes the forms a right can take, it doesn’t explain who has a particular right or why. For that, we have to rely on two competing legal theories: Will Theory and Interest Theory. Will Theory sets the bar for moral and legal inclusion rather high, requiring that the subject of a right be capable of claiming it on their own behalf. Interest Theory sets a lower bar, stipulating that rights may be extended to others irrespective of whether the entity in question can demand them.


Although each side has its advocates and critics, the debate between these two theories is widely considered irresolvable. What is important, therefore, is not to select the correct theory of rights but to recognize how and why these two competing ways of thinking about rights frame different problems, modes of inquiry, and possible outcomes. A petition to grant a writ of habeas corpus to an elephant, for instance, will look very different, and will be debated and decided in different ways, depending on which theoretical perspective is mobilized.

We must also remember that the set of all possible robot rights is not identical to the set of human rights. A common mistake results from conflating the two: the assumption that “robot rights” must mean “human rights.” We see this all over the popular press, in the academic literature, and even in policy discussions and debates.


This is a slippery slope. The question concerning rights is immediately assumed to entail the full complement of human rights, without recognizing that the rights accorded to one category of entity, like an animal or a machine, are not necessarily equivalent to those enjoyed by another category of entity, like a human being. It is possible, as technologist and legal scholar Kate Darling has argued, to entertain the question of robot rights without this meaning all human rights. One could, for instance, advance the proposal, introduced by the French legal team of Alain and Jérémy Bensoussan, that domestic social robots, like Alexa, have a right to privacy for the purposes of protecting the family’s personal data. But granting this one right, the claim to privacy or the immunity from disclosure, does not and should not mean that we also need to give such devices the vote.

Ultimately, the question of the moral and legal status of a robot or an AI comes down to whether one believes a computer is capable of being a legally recognized person (we already live in a world where artificial entities, such as corporations, are persons) or remains nothing more than an instrument, tool, or piece of property.

This difference and its importance can be seen in recent proposals regarding the legal status of robots. On one side, you have the European Parliament’s resolution on Civil Law Rules on Robotics, which advised extending certain aspects of legal personality to robots for the purposes of social inclusion and legal integration. On the other side, you have the more than 250 scientists, engineers, and AI professionals who signed an open letter opposing the proposal, asserting that robots and AI, no matter how autonomous or intelligent they might appear to be, are nothing more than tools. What is important in this debate is not what makes one side different from the other, but what both sides already share and must hold in common in order to have the debate in the first place: the conviction that there are two exclusive ontological categories that divide up the world, persons and property. This way of organizing things is arguably arbitrary, culturally specific, and often inattentive to significant differences.


Robots and AI are not just more entities to be accommodated within existing moral and legal categories. What we see in the face, or the faceplate, of the robot is a fundamental challenge to existing ways of deciding questions of social status. Consequently, the right(s) question requires not only that we consider the rights of others but also that we learn how to ask the right questions about rights, critically challenging the way we have typically decided these important matters.

Does this mean that robots or even one particular robot can or should have rights? I honestly can’t answer that question. What I do know is that we need to engage this matter directly, because how we think about this previously unthinkable question will have lasting consequences for us, for others, and for our moral and legal systems.


David Gunkel is Distinguished Teaching Professor of Communication Technology at Northern Illinois University and the author of, among other books, “The Machine Question: Critical Perspectives on AI, Robots, and Ethics,” “Of Remixology: Ethics and Aesthetics after Remix,” and “Robot Rights.”
