In October 2017, Saudi Arabia became the first country in the world to give a robot citizenship. Taking to the stage to announce “her” new status, Sophia said she was “very honored and proud for this unique distinction…It is historic to be the first robot in the world to be recognized with citizenship.”

Robots are often described as autonomous systems. The term ‘autonomy’ refers to the capacity to legislate for oneself: to formulate, think through and choose the norms, rules and laws one follows. It encompasses the right to set one’s own standards and to choose one’s own goals and purposes in life. The cognitive processes that support and facilitate this typically entail self-awareness, self-consciousness and self-authorship according to reasons and values. Autonomy in the ethically relevant sense of the word can therefore be attributed only to human beings.

If Sophia is named a citizen, it naturally follows that she is afforded certain rights that must be respected. Moral responsibility, in turn, refers to several aspects of human agency: causality, accountability (the obligation to provide an account), reactive attitudes such as praise and blame (the appropriateness of a range of moral emotions), and the duties associated with social roles. Human beings ought to be able to determine which values are served by technology, what is morally relevant, and which final goals and conceptions of the good are worth pursuing. This cannot be left to machines, no matter how powerful they are.

With this in mind, Sophia has a right to self-determination, a right to be free from slavery, and many others. What would we do if Sophia committed a crime, wanted to get married, or somehow applied for asylum in another country? Given her digital nature, it is unclear whether Sophia could ever truly meet the residency requirements many aspirant citizens must meet.

Such debates on moral responsibility call for more systematic thinking and research about the ethical, legal and governance aspects of high-tech systems that can act upon the world without the direct control of human users, to human benefit or detriment. This is a matter of great urgency.

(a) Human dignity

The principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by ‘autonomous’ technologies.

A relational conception of human dignity requires that we are aware of whether and when we are interacting with a machine or with another human being, and that we reserve the right to assign certain tasks to either the human or the machine.

(b) Autonomy

The principle of autonomy implies the freedom of the human being. This translates into human responsibility for, and thus control over and knowledge about, ‘autonomous’ systems: they must not impair the freedom of human beings to set their own standards and norms and to live according to them.

(c) Responsibility

The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes.

(d) Justice, equity, and solidarity

AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and ‘autonomous’ systems can bring. Discriminatory biases in data sets used to train and run AI systems should be prevented or detected, reported and neutralized at the earliest stage possible.

(e) Public Engagement

Key decisions on the regulation of AI development and application should be the result of democratic debate and public engagement. A spirit of global cooperation and public dialogue on the issue will ensure that they are taken in an inclusive, informed, and farsighted manner.

(f) Rule of law and accountability

Rule of law, access to justice, and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and any potential AI-specific regulations. This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as rights to safety and privacy.

(g) Security, safety, and integrity

Safety and security of ‘autonomous’ systems materializes in three forms: (1) external safety for their environment and users, (2) reliability and internal robustness, e.g. against hacking, and (3) emotional safety with respect to human-machine interaction. All dimensions of safety must be taken into account by AI developers.

(h) Data protection and privacy

Both physical AI robots, as part of the Internet of Things, and AI softbots that operate via the World Wide Web must comply with data protection regulations. They must not collect and spread data, or be run on sets of data, for whose use and dissemination no informed consent has been given.

In light of concerns with regard to the implications of ‘autonomous’ systems on private life and privacy, consideration may be given to the ongoing debate about the introduction of two new rights: the right to meaningful human contact and the right to not be profiled, measured, analyzed, coached or nudged.

(i) Sustainability

AI technology must be in line with the human responsibility to ensure the basic preconditions for life on our planet, the continued flourishing of humankind and the preservation of a good environment for future generations.

Artificial intelligence, robotics and ‘autonomous’ systems can bring prosperity, contribute to well-being and help to achieve higher moral ideals and socio-economic goals if designed and deployed wisely. Ethical considerations and shared moral values can be used to shape the world of tomorrow, and should be construed as stimuli and opportunities for innovation, not as impediments or barriers.

Sophia’s citizenship represents something more sinister. Nobody treats Sophia as a real citizen, and that is precisely where the harm lies. If we can switch off our compassion and concern for a fellow citizen, as we do for Sophia, we may get into the habit of doing it to other humans.

There may be a time in the future when technology has advanced so greatly that we need to consider whether robots and AIs ought to be granted citizenship, but today is clearly not that day. If we start insisting that robots have the same rights as people, it becomes that little bit easier to justify the inhumanity we commit against our fellow humans.