
The Moral Machine

Researchers across the globe are working extensively towards building artificially intelligent systems that can behave in an ethically and morally sound manner. Morality, the ability to distinguish good from bad, is an important human trait which researchers are now obsessively looking to infuse into machines. But why are humans obsessing over it? Is it even a uniquely human trait? Hasn’t history shown us that humans are capable of things worse than anything an AI could possibly do?

The concerns over morality often arise in discussions of AI applications such as self-driving cars. Who dies in a crash? Should the car protect its passengers or the passers-by? Common questions asked when considering whether moral decisions can be ‘crowdsourced’ through machine learning include, but are not limited to, the following (a toy sketch of how such votes might be aggregated appears after the list):

  • Should the self-driving car run down a pair of joggers instead of a pair of children?
  • Should it hit a concrete wall to save a pregnant woman or a child?
  • Should it put a passenger’s life at risk in order to save another human?
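
To make the idea of ‘crowdsourcing’ morality concrete, here is a minimal sketch of how such dilemma votes might be reduced to a majority preference. This is not any project’s actual method; the scenario names, ballots and the majority_preference helper are all invented for illustration.

```python
from collections import Counter

# Hypothetical crowdsourced responses: each voter picks which party
# the car should spare in a given dilemma. All data is invented.
votes = {
    "joggers_vs_children": ["children", "children", "joggers", "children"],
    "wall_vs_pedestrian":  ["wall", "pedestrian", "wall", "wall"],
}

def majority_preference(ballots):
    """Return the most common choice and its share of the vote."""
    counts = Counter(ballots)
    choice, n = counts.most_common(1)[0]
    return choice, n / len(ballots)

for scenario, ballots in votes.items():
    choice, share = majority_preference(ballots)
    print(f"{scenario}: spare {choice!r} ({share:.0%} of voters)")
```

Even in this toy form the weakness is visible: a 75% majority says nothing about the 25% who disagree, and the aggregate simply encodes whatever biases the voters bring with them.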

Though it sounds like an interesting concept, how can the reliability of a machine built on crowdsourced morality be ensured? It could not be trusted to make complex decisions such as those involving human lives. As experts point out, deciding among hundreds of millions of scenario variations based on the views of a few million people can hardly be the best way. Professor James Grimmelmann of Cornell Law School put it this way: “Crowdsourced morality doesn’t make the AI ethical. It makes AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

Morality is truly abstract in nature, and teaching it to AI, a task machines handle best through measurable metrics, is next to impossible. In fact, considering instances such as the ones above, it is questionable whether humans themselves share a sound understanding of morality that all of us can agree upon; ‘instinct’ or ‘gut feeling’ takes precedence in many cases. An AI player, by contrast, can excel in games with clear rules and boundaries by learning to optimize a score, though it has to work much harder at strategy games such as chess or Go. Even so, we have seen Alphabet’s DeepMind beat the best human players of Go. In real-life situations, however, the problems to optimize are far more complex.
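
The contrast between a crisp, optimizable score and a fuzzy moral objective can be made concrete with a toy reinforcement-learning loop. The sketch below is a generic Q-learning example, not DeepMind’s method; the grid world, parameters and reward are all invented.

```python
import random

# Toy grid world: the agent starts at state 0 and earns reward +1
# for reaching state 4. Plain Q-learning works here precisely because
# the "score" is fully specified -- the property moral dilemmas lack.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                    # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0   # the crisp, measurable metric
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: move right from every state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```

The whole exercise hinges on the single reward line; there is no equally crisp line one could write for ‘behave morally’.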

For example, teaching a machine to algorithmically overcome racial and gender biases, or designing an AI system with a precise conception of what fairness is, can be a daunting task. Remember Microsoft’s AI chatbot that learnt to be misogynistic and racist in less than a day? Teaching AI the nuances of being ethically and morally correct is definitely not a cakewalk.
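
One reason a ‘precise conception of fairness’ is so elusive is that fairness has several competing formal definitions. The sketch below computes just one of them, the demographic parity difference, on data invented purely for illustration.

```python
# Demographic parity difference: the gap in positive-prediction rates
# between two groups. This is only one of several mutually incompatible
# fairness metrics; predictions and group labels here are invented.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]   # model outputs (1 = favourable)
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(group):
    """Fraction of favourable predictions within one group."""
    picked = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picked) / len(picked)

gap = positive_rate("a") - positive_rate("b")
print(f"demographic parity difference: {gap:+.2f}")  # 0.00 is "fair" by this metric
```

Driving this gap to zero can conflict with other definitions, such as equal error rates across groups, which is exactly why a single precise conception of fairness is so hard to pin down.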

Suppose a perfect moral system were to exist. We could try to derive it by collecting massive amounts of data on human opinions and analyzing that data to produce the ‘correct’ results. If we could record what each person thinks is morally right, and track how those opinions change over time and across generations, we might have enough input to train AI on these massive datasets to be perfectly moral.
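
If such opinion data existed, the training step itself would be routine supervised learning. The sketch below, using scikit-learn, assumes hypothetical feature vectors describing dilemmas and crowd-sourced labels for the ‘morally preferred’ outcome; every value is invented.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset: each row encodes a dilemma as
# [n_passengers, n_pedestrians, pedestrian_is_child], and each label
# is the crowd's majority verdict (1 = protect the pedestrians).
X = [[1, 2, 0], [4, 1, 0], [1, 1, 1], [2, 3, 1], [5, 1, 0], [1, 4, 0]]
y = [1, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X, y)
print(model.predict([[2, 2, 1]]))  # verdict for an unseen dilemma

# The model can only ever mirror its training labels: feed it biased
# or inconsistent crowd opinions and it will reproduce them faithfully.
```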

Though this gives hope of building a moral AI, any system that relies on human input would be susceptible to human imperfections. Unsupervised data collection and analysis could in fact produce undesirable consequences and result in a system that represents the worst of humanity.

Despite fears raised by the likes of the legendary scientist Stephen Hawking, who argued that once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate, people continue to engage in conversations around the importance of programming morality into AI. Tech entrepreneur Elon Musk has also warned, time and again, that AI may constitute a “fundamental risk to the existence of human civilization”.

Though these fears seem reasonable, it cannot be denied that AI systems need to be implemented more ethically, with the hope that engineers can imbue autonomous systems with a sense of ethics. It would only be fitting to have a moral AI that builds on itself and improves its moral capabilities as it learns from previous experience, just as humans do.

“Are those who have knowledge and those who have no knowledge alike? Only the men of understanding are mindful.” (Qur’an, 39:9)


About the author

Ayse Kok

Ayse has over 8 years of experience in the field of social, mobile and digital technologies, both as a practitioner and as a researcher. She has participated in various projects in partnership with international organizations such as the UN, NATO and the EU, and has served as an adjunct faculty member in her home country, Turkey. Ayse has spoken at various international conferences and published several articles in peer-reviewed journals and academic books. She completed her master’s and doctoral degrees at the University of Oxford and the University of Cambridge in the UK.

