
A Treatise of Robotics

The question “Can and should robots have rights?” consists of two separate queries:

  • ‘Can robots have rights?’ as a question about the capability of a particular entity.
  • ‘Should robots have rights?’ as a question that inquires about our obligations in the face of this entity.

These two questions invoke and operationalize a rather famous conceptual distinction in philosophy known as the is/ought problem, or Hume’s Guillotine. In A Treatise of Human Nature (first published in 1739–40), David Hume differentiated between two kinds of statements: descriptive statements of fact and normative statements of value. For Hume, the problem was that philosophers, especially moral philosophers, often fail to distinguish between these two kinds of statements:

“For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. […] that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason” (Hume 1980, p. 469).

As Heidegger observes in “The Question Concerning Technology”: “Everyone knows the two statements that answer our question. One says: Technology is a means to an end. The other says: Technology is a human activity. The two definitions of technology belong together. For to posit ends and procure and utilize the means to them is a human activity. The manufacture and utilization of equipment, tools, and machines, the manufactured and used things themselves, and the needs and ends that they serve, all belong to what technology is.” According to Heidegger, the presumed role and function of any kind of technology, whether a simple hand tool or a robot, is that it is a means employed by human users for specific ends. Heidegger terms this particular characterization of technology “the instrumental definition” and indicates that it forms what is considered to be the “correct” understanding of any kind of technological contrivance.

The instrumentalist theory offers the most widely accepted view of technology. It is based on the common-sense idea that technologies are ‘tools’ standing ready to serve the purposes of their users. Consequently, technology is only a means to an end; it is not and does not have an end in its own right.

The instrumental theory not only sounds reasonable, it is obviously useful. It is, one might say, instrumental for making sense of things in an age of increasingly complex technological systems and devices. Computer systems are produced, distributed, and used by people engaged in social practices and meaningful pursuits. This is as true of current computer systems as it will be of future computer systems. No matter how independent, automatic, and interactive the computer systems of the future become, they will be the products (direct or indirect) of human behavior, human social institutions, and human decisions. According to this way of thinking, technologies, no matter how sophisticated, interactive, or seemingly social they appear to be, are just tools, nothing more. They are not — not now, not ever — capable of becoming moral subjects in their own right, and we should not treat them as such.

Although the instrumental theory sounds intuitively correct and incontrovertible, it has at least two problems. First, it is a rather blunt instrument, reducing all technology, irrespective of design, construction, or operation, to a tool or instrument. “Tool,” however, does not necessarily encompass everything technological and does not, therefore, exhaust all possibilities. There are also machines. As Marx (1977, p. 495) succinctly described it, picking up on this line of thinking, “the machine is a mechanism that, after being set in motion, performs with its tools the same operations as the worker formerly did with similar tools.”

Second (and following from this), the instrumental theory, for all its success handling different kinds of technology, appears to be unable to contend with recent developments in social robotics. At first glance, as Darling (2016, p. 216) writes, “it seems hard to justify differentiating between a social robot, such as a Pleo dinosaur toy, and a household appliance, such as a toaster. Both are man-made objects that can be purchased on Amazon and used as we please. Yet there is a difference in how we perceive these two artifacts. While toasters are designed to make toast, social robots are designed to act as our companions.”

In support of this claim, Darling offers the work of Sherry Turkle and the experiences of US soldiers in Iraq and Afghanistan. Turkle, who has pursued a combination of observational field research and interviews in clinical studies, identifies a potentially troubling development she calls “the robotic moment”: “We don’t seem to care what their artificial intelligences ‘know’ or ‘understand’ of the human moments we might ‘share’ with them…the performance of connection seems connection enough” (Turkle 2012, p. 9). In the face of sociable robots, Turkle argues, we seem to be willing, all too willing, to consider these machines to be much more than a tool or instrument.

We appear to be able to do this with just about any mechanism, even the decidedly industrial-looking Packbots deployed on the battlefield. Soldiers form surprisingly close personal bonds with their units’ Packbots, giving them names, awarding them battlefield promotions, risking their own lives to protect the robots, and even mourning their loss.

As soon as AIs (artificial intelligences) begin to possess consciousness and projects of their own, it seems as though they deserve some sort of moral standing. If a robot were designed to have human-like capacities that might incidentally give rise to consciousness, we would have good reason to think that it really was conscious.

A term like “consciousness” means many different things to many different people. In fact, if there is any general agreement among philosophers, psychologists, cognitive scientists, neurobiologists, AI researchers, and robotics engineers regarding consciousness, it is that there is little or no agreement when it comes to defining and characterizing the concept. To make matters worse, the difficulty is not just the lack of a basic definition; the term itself may be the problem. Perhaps the trouble lies not so much in the ill definition of the question as in the fact that what passes under the term consciousness as an all too familiar, single, unified notion may be a tangled amalgam of several different concepts, each afflicted with its own separate problems.

How can one know whether a particular robot has actually achieved what is considered necessary for something to have rights, especially since most, if not all, of the qualifying capabilities or properties are internal states of mind? This is, of course, connected to what philosophers call the other minds problem: the fact that, as Haraway (2008, p. 226) cleverly describes it, we cannot climb into the heads of others “to get the full story from the inside.” As Kurzweil (2005, p. 380) candidly admits, “we assume other humans are conscious, but even that is an assumption,” because “we cannot resolve issues of consciousness entirely through objective measurement and analysis (science).”

According to another argument, robots can be seen as property. No matter how capable they are, appear to be, or may become, we are obligated not to be obligated by them. It may be technically possible to create AI that would meet contemporary requirements for agency or patiency. Yet, even if it is possible, neither of these statements makes it either necessary or desirable that we should do so. In other words, it may be entirely possible to create robots that can have rights, but we should not do so.

No matter how interactive, intelligent, or animated our AIs and robots become, on this view they should be, now and forever, considered instruments or slaves in our service, nothing more. “We design, manufacture, own and operate robots,” Bryson (2010, p. 65) writes. “They are entirely our responsibility. We determine their goals and behaviour, either directly or indirectly through specifying their intelligence, or even more indirectly by specifying how they acquire their own intelligence.” For designers, “thou shalt not create robots to be companions.” For users, no matter how interactive or capable a robot is (or can become), “thou shalt not treat your robot as yourself.” The validity and feasibility of these prohibitions, however, are challenged by actual data — not just anecdotal evidence gathered from the rather exceptional experiences of soldiers working with Packbots on the battlefield, but numerous empirical studies of human–robot interaction that verify the media equation. In two recent studies (Rosenthal-von der Pütten et al. 2013; Suzuki et al. 2015), for instance, researchers found that human users empathized with what appeared to be robot suffering even when they had prior experience with the device and knew that it was “just a machine.” To put it in a rather crude vernacular form: even when our head tells us it’s just a robot, our heart cannot help but feel for it.

The problem here is not what one might think, namely, how the robot-slave might feel about its subjugation. The problem is with us and the effect this kind of institutionalized slavery could have on human individuals and communities. As de Tocqueville (2004) observed, slavery was not just a problem for the slave; it also had deleterious effects on the master and his social institutions.

The question “Can and should social robots have rights?”, formulated in terms of the is/ought problem, may generate different responses. Asked otherwise, however, the question “Can and should robots have rights?” requires a reformulation of moral patiency in the first place. This outcome is consistent with the two aspects of robophilosophy: to apply ethical thinking to the unique challenges and opportunities of social robots, and to permit the challenge confronted in the face of social robots to question and reconfigure ethical thought itself.


About the author

Ayse Kok

Ayse has over 8 years of experience in the field of social, mobile, and digital technologies, from both a practitioner and a researcher perspective. She has participated in various projects in partnership with international organizations such as the UN, NATO, and the EU. Ayse has also served as an adjunct faculty member in her home country, Turkey. She has spoken at various international conferences and published several articles in both peer-reviewed journals and academic books. She completed her master’s and doctoral degrees at the University of Oxford and the University of Cambridge in the UK.
