
The Clash of Automatons

As artificial intelligence (AI) continues to progress and businesses across the globe benefit from its capabilities, it is important to ensure that the technology is harnessed for good, to create a better, fairer society. AI systems already outperform humans at certain tasks, such as image recognition, data analysis, and problem solving. These advances raise a wealth of ethical questions concerning biases that could appear in the data, security issues, and the potential consequences if systems are hacked or used irresponsibly. Several guidelines for the ethical practice of AI already exist, covering matters such as how data is handled and the processes developers should follow when creating a product, but grey areas remain, and they are a cause for concern.

Exclusions are an inescapable part of any moral/legal system. One of the enduring concerns of ethics is distinguishing between the "who" that is deserving of moral/legal consideration and the "what" that is not. What is most needed right now are philosophers who are not afraid to get their hands dirty by writing code and making things, and developers who are able to formulate and pursue the important and difficult philosophical questions. These professionals should try to occupy the liminal zone between what C. P. Snow called the "two cultures."

Whether we recognize it as such or not, we are in the midst of a robot invasion. The machines are now everywhere and doing virtually everything. We chat with them online, we play with them in digital games, we collaborate with them at work, and we rely on their capabilities to manage many aspects of our increasingly complex, data-driven lives. Consequently, the "robot invasion" is not something that will transpire in the future as we have imagined it in science fiction.

Although considerable effort has already been expended on the question of AI, robots, and responsibility, the other question, that of rights and legal status, remains conspicuously absent or at least marginalized. In fact, for most people, the notion of robots having rights is unthinkable.

All moral/legal systems need to define who counts as a legitimate subject and what does not. Initially, "who counted" typically meant other (white) men. The practice of ethics has, however, developed in such a way that it continually challenges its own restrictions and comes to encompass what had previously been marginalized or left out: women, children, foreigners, animals, and even the environment. We now stand on the verge of another fundamental challenge to moral thinking. This challenge comes from the autonomous, intelligent machines of our own making, and it calls into question many deep-seated assumptions about who or what constitutes a moral/legal subject. The way we address and respond to this challenge will have a profound effect on how we understand ourselves, our place in the world, and our responsibilities to the other entities we encounter here.

The robot invasion, then, is an already occurring event, with machines of various configurations and capabilities coming to take up positions in our world through a slow but steady incursion. As these various mechanisms take up increasingly influential positions in contemporary culture (positions where they are not necessarily just tools or instruments of human action but a kind of interactive social entity in their own right), we will need to ask ourselves some rather interesting but difficult questions:

  • At what point might a robot, algorithm, or other autonomous system be held accountable for the decisions it makes or the actions it initiates?
  • When, if ever, would it make sense to say, “It’s the robot’s fault”?
  • Conversely, when might a robot, an intelligent artifact, or other socially interactive mechanism be due some level of social standing or respect?

Answers to questions such as how to ensure that AI in robotics is used for positive impact typically target the technological artifact and address its design, development, and deployment. Yet there is another artifact that needs to be considered and dealt with here: law and ethics. Our moral/legal systems (for better or worse) operationalize a rather restrictive ontology that divides the world into one of two kinds of entities: persons and property. In the face of advancements in AI and robotics, the questions that will need to be asked and answered are as follows:

  • Should robots be recognized simply as property that can be used and even abused without further consideration?
  • Or are they (or should they be considered) a kind of “person” with rights and responsibilities before the law?

We are now at a tipping point where things could go one way or the other. On the one hand, we need to come up with ways to fit emerging technology into existing legal and moral categories and terminology. On the other hand, we will need to hold open the possibility that these entities fit neither category and therefore require some new third alternative that does not (at least at this time) even have a name. So while technologies advance at something approaching light speed, the related laws evolve at pen-and-paper speed. Ensuring positive impact means figuring out a way to contend with this difference.

There is much discussion around the privacy and security issues that come with the application of AI systems. Addressing this matter effectively will require a coordinated effort on the following three fronts:

  • First, service providers and device manufacturers need to compose and operate with clearly written Terms of Service (ToS) and End User License Agreements (EULAs) that are transparent about what kind of data is collected, why it is harvested in the first place, and how it is and/or can be used. Unfortunately, many of these documents are poorly written, inconsistently applied, and difficult to read without a law degree.
  • Second, users need to know and fully appreciate what they are getting into. Too many of us simply click "agree" and then share our information without a second thought as to what is being given away, at what price, and with what consequences. There is a pressing need for so-called "media literacy," and this fundamental training, ostensibly the skills and knowledge necessary to thrive in a world of increasing digital intervention and involvement, needs to begin at an early age.
  • Finally, mediating between users and providers, there must be an outside third party that can ensure a level playing field and redress existing imbalances in power. Some form of regulation will be necessary to ensure that the rules of the privacy game are and remain fair, equitable, and just.

What is needed is not a one-stop-shop "ethics czar" but an institutional commitment to ethics across the enterprise, overseen and administered by an interdisciplinary group of experts, like the institutional review boards (IRBs) that have been in operation at universities, hospitals, and research institutions for decades. The moral/legal opportunities and challenges that arise in the wake of emerging technology are complex and multifaceted. Tech professionals have a better chance of identifying potential problems, devising workable solutions, and anticipating adverse side effects when there are diverse perspectives and approaches in the mix. For this reason, a common standard, although a tempting solution, is less valuable and viable than productive differences and conflicting viewpoints. There is no one right way to address and resolve these important questions. Dialogue, debate, and even conflict are a necessary part of the process.


About the author

Ayse Kok

Ayse has over eight years of experience in the field of social, mobile, and digital technologies, both from a practitioner and a researcher perspective. She has participated in various projects in partnership with international organizations such as the UN, NATO, and the EU. Ayse has also served as an adjunct faculty member in her home country, Turkey. She has spoken at various international conferences and published several articles in peer-reviewed journals and academic books. She completed her master's and doctoral degrees at the University of Oxford and the University of Cambridge in the UK.
