I Bought a Robot Cat for My Rabbit — and Fell Into the Weird World of Animal-Robot Research

What began as a TikTok experiment with my rabbit led me into a strange world of cyborg cockroaches, imposter fish, and the ethics of care.
By: Ericka Johnson

For a while, as part of a research project on care robots in Sweden, I had Paro the seal in my house. I had borrowed Paro from a roboticist colleague, and the seal had a longer-than-planned layover with me because of the COVID-19 pandemic.

Neighborhood children were often around and showed fleeting interest in the robot. Our free-range rabbit in the house, Topsey, was wholly unimpressed (“Topsey is sad because we are paying too much attention to the robot,” said one of the more empathetic children). I don’t think Topsey felt bad about the very small amounts of attention the robot was getting. I suspect that at times, Topsey was glad something else was attracting the kids’ attention.

This article is adapted from the book “How That Robot Made Me Feel,” edited by Ericka Johnson.

Curious about how other children might respond, I brought the seal over to a friend’s house. Her 11-year-old petted Paro as instructed, to trigger the various sensors integrated into its whiskers and body. She tried to be polite and make some positive comments and even asked a few questions about how it worked. The six-year-old watched his sister interact with the robot, and, like their cat, quickly lost interest.

About a year later, after having returned the seal to my roboticist colleague, my research project bought an orange tabby cat robot. This “Joy for All” companion robot, created by the company Ageless Innovation, is supposed to help people living with dementia feel calmer, offer a sense of companionship, and provide opportunities to give care by interacting with a lifelike “cat.” Our research team was going to use it for a pop-up exhibition on care robots at a local museum, but before that event, I used the tabby cat to create a TikTok page called Robot_meets_pet.

Over the course of a morning, I filmed while various pets met the robot. I started with the pandemic rabbit we already had at home. It was utterly uninterested, and I ended up with heaps of boring footage. But TikTok is a platform that rewards very short video clips, so I eventually had enough material to put up a few segments that made it look like the rabbit and the cat were interacting.

Since this had been so successful, I decided to expand my experiment to other pets and see how they interacted with the robot. I had two inclusion criteria: 1) Who do I know that lives close by and has a pet? 2) Do I know them well enough that I can ask them to let me film their pet with a robot for TikTok? These criteria led to a TikTok feed that included films of robot–animal interactions with two rabbits, three cats, and two dogs. I almost got permission to film a hamster, but the mother of the hamster’s family was trying to keep her nine-year-old off TikTok and thought it would be pedagogically difficult to argue that the hamster could be on it but not the child. I had to agree. And to be honest, by the time the hamster appeared as a possibility, I was getting a little bored with filming pet–robot interactions to upload on TikTok, addictive as the platform is claimed to be.

Now, you may legitimately be wondering what all this has to do with a scientific study of care robots. Bear with me…

For one thing, an obvious observation is that the baby seal and the cat robot are designed to be readable and intelligible to human ways of sensing and perceiving the world, and that these might be very different for animals. Not that the robots were particularly interesting to the children — or most of the grown-ups who came in contact with them, if I’m honest. But they were at least intelligible as a robotic seal and cat. No one questioned which animal the robot was trying to look like.

But I’m going to go out on a limb here and say that the pets didn’t even recognize it as an animal. With each pet, I filmed a lot of noninteractions to get the few seconds of “interaction” that I would then cut, put to music, and upload. (Full disclosure: Sometimes I had to hide pet food behind the robot’s ears or under its paws to get those clips, giving weight to the adage, “Don’t believe everything you see on social media.”)


I decided to try to make sense of the robot–pet interactions in my head and not just on TikTok, so I turned to Animal–Robot Interaction (ARI) literature. It struck me that some of the research questions and methods I encountered there were quite similar to the research I had encountered in much of the Human–Robot Interaction (HRI) literature about emotions and our relationship to robots.

If I were to make a sweeping generalization about the field, it would be that intelligibility is a primary concept in ARI, just as it is in HRI. For animal–robot interactions to work, robots must be able to read what the animal is thinking, feeling, and doing, and the animal must be able to understand cues presented to it by the robot.

This is, of course, not the same as the animal believing the robot is an animal, just as humans do not need to believe a humanoid robot is a human to interact successfully with it. Belief is assumed to be suspended in much HRI research. But in ARI, the cues from both the animal and the robot are meant to be intelligible. In a way, this is not much different from foundational understandings in human–robot interactions, strengthening the claim in many humanities and environmental fields that humans should analytically be considered an animal, and that the distinction we draw between humans and animals is an outdated (and dangerous) remnant of the Enlightenment, modernism, or colonialism.

Robotic imaginaries — the ways we envision what robots are and what they might do — also appear in ARI. One study constructed robot cockroaches that, acting as “agents provocateurs,” were able to modulate group decisions from within. The thought of cockroaches near me being controlled by a robot posing as one of their own, on behalf of a human, truly terrifies me.

Another robotic imaginary in ARI appears in research that implants robotic devices in large groups of animals to try to manipulate the animals en masse. Here, for example, one finds work to regulate and direct bee behavior (individually and at the colony level) with a robotic device that modulates the hive environment through vibrations and heat. This reminded me that Paro and the cat robot both vibrated, in mechanical ways and not necessarily intentionally. But they did not give off heat. Would heat have helped make them intelligible to my neighborhood pets? Do the humans who interact with Paro and the cat robot wish for this?

I also discovered work with fish using movement cues from robots — a fishlike robot implanted into a group of fish (often zebra fish) — to try to get the fish to alter their schooling movements according to inputs from the robot fish; a robot that interacted with domestic chickens, both observing what the chickens were doing through visual and audio cues, and modulating chicken behavior, including producing imprinting; and research that showed how real rats could be trained to follow robot rats. The result of many of these experiments is an observable behavioral response in the animal, but the goal is successful integration, called “bioacceptance.” This distinction finds parallels in HRI, as when a person who is being directed by a humanoid robot does what the robot tells them to do versus accepting the robot as a companion.

Why would we want to study (and manipulate) the animal–robot relationship? Well, there are a wide range of reasons, from the use of robots to control animals in military and “security” contexts (my affective discomfort with that cockroach imaginary recurs as I write this), to the use of robots to control (and improve the lives of?) animals in industrial-scale agricultural practices, to the idea that robots could help protect endangered wildlife (presumably by monitoring poaching). As with the reasons given for studying and developing the human–robot relation, the social implications and predicted impacts cited for experimenting with the animal–robot interaction vary widely.

Many of these possible scenarios are in the very distant future, so it makes sense to think of them, too, as robotic imaginaries. Sometimes this imaginary would suggest that robots might make it easier to design controlled experiments of animal behavior, especially the social behavior of animals. With the robot, the researcher can control at least one element of a relation, both in the “controlled variable” sense of control and as in actually controlling the animal.

Controlling the animals is a subfield of ARI that uses cyborg versions of animals. While most of the robots used in ARI studies are regular, stand-alone robots (or ones that swim attached to a robotic arm), some are actually robotic components incorporated into an animal to control the animal’s nervous system, muscles, or antennae with electronics, producing so-called cyborg or biohybrid organisms from cockroaches, beetles, lizards, rats, pigeons, and so on.

For example, researchers have incorporated a speech-translator module into the brain of a rat: it translates human speech commands into electric stimuli in the rat’s brain and thereby controls the rat’s motion. The same thing has, in a simpler form, also been performed on carp. In addition to using these types of robots for animal behavior experiments, these biohybrid organisms can (in the imaginary, at least) be used for other human ends, such as forming a mobile sensor network for tasks like environmental monitoring or locating survivors. Military uses of these cyborgs are also apparent in some of this research.

In many ways, the reasons given for ARI research may not be too far from some of the imaginaries we have with care robots for humans. We are told that they can help protect (often older) people by ensuring they are safe in their homes or care environment, or by governing their intake of medicine and food, or by engaging them in beneficial exercise or being social companions. And in ARI as in HRI, studying how people interact with the robots is different from knowing how they really feel as a result of that interaction.

Apart from biohybrid or cyborg robots, though, stand-alone robots are often used to try to communicate with or send signals to animals. This reflection, I thought, might help me find potential answers to why the robots were so uninteresting to my neighborhood pets.

In contrast to the interactions I observed between the cat robot and my neighborhood pets, it is possible to produce animal–robot interactions that engage the animal. Robot movement can teach finches to mimic the foraging motions of a robot finch, displaying what could be interpreted as mirroring, something many people do with their human conversation partners. Temperature can be used to induce chicks to imprint and bond with a robot. Is this a type of affinity? Love, perhaps? And ducks can be herded in particular directions with a robot, a response we might anthropomorphize as fear? Other studies have looked at how dogs interact with robots when the robot is imitating a human in speech or movement (speech seemed to work better).


In a lot of the animal–robot interaction work I read, robot movement seemed to be a prioritized area of research. Sometimes these cues were also, experimentally, paired with audio/vocal cues, which made me wonder if the cut between a movement cue relying on visual sensing (an expanding and contracting vocal sac) and an audio cue (the sound from/associated with this) was more of a distinction made by us as human researchers than by the amphibian being sent the signals.

Robot movement is also a subarea of human–robot interaction research, which focuses on how robots are perceived by humans. Here one finds similar questions and goals, like increasing the usefulness and acceptance of pet companion robots by improving the robots’ touch engagement in different ways, especially in situations that are designed to build trust and empathy. Thus, parallels exist between how we imagine robots communicate with us and how they communicate with nonhuman others. Research on how humans respond to robotic arm movements, for example, tests the importance of minor changes in movement styles for acceptance. By extension, perhaps if the seal or the cat moved more like a seal or a cat, they would have appealed to the real pets in a different way.

In one of the recent overviews of the field, however, it is noted that just because rats can learn to follow a robot rat doesn’t mean they think of it as a rat. We can observe animal behavior, but explaining what understandings and emotions are behind that behavior is a completely different matter. How an animal really feels when it is receiving and interpreting signals is impossible to say — only to impose.

Arguably, it would be a very big step for me to claim to know the emotional life of the pets I filmed (a step best left to children with no theoretical qualms about imposing anthropomorphic or epistemological leaps, like the six-year-old and his cat).

But one thing I can note is that the films on my TikTok did seem to generate a lot of emotion from the human viewers. An oddly large number of people emoted with a heart (the TikTok version of a “like”) to the videos I posted. I quickly gathered over a thousand followers, and within an hour of an upload, hundreds of people had viewed my videos, dozens of whom felt compelled to give them a heart. I felt like a TikTok star! Until I realized how many followers real TikTok stars have. Then I felt like a middle-aged academic with an odd TikTok account.

Yet, somehow, in my unreasonable expectations of what I could generate with these TikTok videos, I had hoped that I would be able to start a conversation about the robot — maybe even about our robotic imaginaries and the relations we see (or don’t see) being made between the pets and the robot. I thought maybe the flood of responses would be a valuable, rich body of text about how we felt about that relation, which I could mine to say something novel and intelligent about the robot–human–nonhuman–more-than-human relation.

I was wrong.

The short videos I posted (some of which even asked questions like: “What is the rabbit thinking?” or “Is this love?”) generated a grand total of (at the time of writing) six comments, of which five were a series of happy emoji and one was a question about ownership of one of the cats. Of course, this says much more about the TikTok platform (and the subset of users directed toward my postings) than about the robot–pet relation. But, nonetheless, it did not generate a body of empirical material I could use to say anything about what we or pets think about the robot–pet relation.

One reflection I had when reading studies about animals and robots is that the very idea of a robot interacting with an animal (and pets, specifically) produced feelings in me. Reading the results of studies about the robot–animal relation triggered emotions that were mildly uncomfortable (sometimes not even mildly). The articles that discussed the use of robots to manipulate large groups of animals (fish, birds, insects) — with the obvious potential to be weaponized, or even just to enable easier industrial fishing practices — left me with unease and sadness. And discussions about implanting electronics in the brains of rats and fish made me deeply uncomfortable about the use of animals in this research.

But perhaps I was most unsettled by the articles that described robot experiments with dogs — ones that tested whether dogs responded better to robots calling them or to a human in the loop. I felt repulsion, sadness, and unease as I read about these studies. The dogs had been taught and trained to respond to us, as companions. Trying to get the dogs to behave in a similar way with a robot felt … sad? Disloyal? Unethical? Abusive? Something the fox would have advised the Little Prince against doing?

And I wondered if I would feel this way about the idea of leaving my aging mother to a robot companion, even one that she recognized as a soft, purring cat.

(Yes. I think I would.)


Ericka Johnson is a Professor of Gender and Society at Linköping University, Sweden; Director of the National Graduate School of the Wallenberg AI, Autonomous Systems and Software Program—Humanities and Society (WASP-HS); and a Member of the Royal Swedish Academy of Sciences. She is the author of many books, including “A Cultural Biography of the Prostate” (MIT Press) and the editor of the book “How That Robot Made Me Feel,” from which this article is adapted.

This chapter builds upon research supported by the Wallenberg AI, Autonomous Systems and Software Program—Humanities and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation. DNR MMW 2019.0151.
