
Should AI Be Ethical At All?


As with any technology, AI’s potential to help or harm us depends on how it is applied and overseen. Ethical AI is garnering much interest, but it is not always clear what the term refers to. A broad range of emerging issues has been identified as requiring ethical frameworks or principles to steer the development of AI in a socially beneficial direction, including:

  • AI safety: Ensuring that autonomous systems do not behave in ways that inadvertently harm society.
  • Malicious uses of AI: Guarding against the misuse of AI by malicious actors.
  • Data ownership and protection: Overseeing the use of personal data for AI systems.
  • Algorithmic accountability: Clarifying governance and responsibilities for the use of algorithms, such as in the case of automated decision systems.
  • Socio-economic impact: Managing social and economic repercussions of AI, such as increased inequality of wealth and power.

Building on these ideas, a working definition of ethical AI can be established: AI that is designed and implemented in accordance with the public’s values, as articulated through a deliberative and inclusive dialogue between experts and citizens.

This definition captures a number of elements that are necessary if AI technology is to be deployed in a manner that is beneficial to society over the long term, has moral and political legitimacy, and is hence grounded in widespread popular consent. These are:

  • in both design and implementation, AI is guided by values above short-term profit;
  • those values should be based on our best understanding of society’s values; and
  • the most effective methods for building a shared and considered set of societal values bring together citizens in deliberative and inclusive dialogue with subject experts, such as technologists and philosophers.


One application that demonstrates this double-edged potential is the use of AI in automated decision systems: computer systems that either inform or make a decision on a course of action concerning an individual or business. It is important to examine the use of automated decision systems in the broader social and economic context, considering behavioural insights, cultural norms, institutional structures and governance, economic incentives and other contextual factors that have a bearing on how an automated decision system might be used in practice.

At present, these systems are typically used as part of a wider process of decision-making that involves human oversight, or a ‘human-in-the-loop’ (HITL). Iyad Rahwan of the MIT Media Lab describes the use of human operators in HITL systems as potentially powerful in regulating the behaviour of AI. He explains that HITL systems serve two functions: to identify misbehaviour by otherwise autonomous systems and take corrective action; and/or to be an accountable entity in case the systems misbehave. In the latter scenario, the human operator encourages trust in the system because someone is held responsible and expected to own up to the consequences of any errors (and therefore, is incentivized to minimize mistakes).
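To make the HITL pattern concrete, here is a minimal sketch in Python. All names (Decision, human_in_the_loop, the confidence threshold) are hypothetical illustrations, not anything specified by Rahwan: the automated system produces a recommendation with a confidence score, low-confidence cases are routed to a human operator for possible correction, and the operator is recorded as the accountable party, mirroring the two functions described above.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A recommendation produced by an automated decision system."""
    subject_id: str
    recommendation: str   # e.g. "approve" or "deny"
    confidence: float     # system's confidence in [0, 1]

def automated_decision(subject_id: str) -> Decision:
    # Stand-in for a real model; fixed output purely for illustration.
    return Decision(subject_id, recommendation="approve", confidence=0.62)

def operator_review(decision: Decision, operator: str) -> str:
    # Placeholder for an actual review workflow (UI, audit log, etc.).
    print(f"{operator} reviewing {decision.subject_id}: {decision.recommendation}")
    return decision.recommendation

def human_in_the_loop(decision: Decision, operator: str,
                      review_threshold: float = 0.8) -> tuple[str, str]:
    """Route low-confidence decisions to a human operator.

    Returns (final_recommendation, accountable_party). The operator both
    corrects potential misbehaviour and is recorded as the accountable
    entity, the two HITL functions described above.
    """
    if decision.confidence < review_threshold:
        # Corrective function: the operator inspects and may override.
        final = operator_review(decision, operator)
        return final, operator
    # Even uncontested outcomes have a named human who owns them.
    return decision.recommendation, operator

if __name__ == "__main__":
    d = automated_decision("case-001")
    outcome, accountable = human_in_the_loop(d, operator="analyst-42")
    print(f"Outcome: {outcome}, accountable party: {accountable}")
```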

Rahwan builds on the concept of HITL, proposing the idea of ‘society-in-the-loop’ (SITL) systems that go beyond embedding the judgment of individual humans or groups in the optimization of AI systems to encompass the values of society as a whole. SITL systems do not replace HITL systems but are an extension of them; they incorporate public feedback on regulation and legislation rather than individual feedback on micro-level decisions. They are therefore particularly relevant when the impact of AI has broad social implications; for example, as is the case with algorithms that filter news, wielding the power to politically influence scores of voters. Society is expected to resolve the trade-offs between the different values that are embedded within AI systems (for example, as highlighted by Rahwan, trade-offs between security and privacy, or between different notions of fairness) as well as agree on which stakeholders should reap certain benefits and which should pay certain costs.
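The SITL idea can be sketched in the same hedged spirit: rather than reviewing individual cases, aggregated public input fixes a policy-level parameter. Here that parameter is a hypothetical trade-off weight between security and privacy that any deployed system must then optimize against; the median is used purely as a simple, outlier-resistant stand-in for a genuine deliberative process.

```python
from statistics import median

def elicit_societal_tradeoff(citizen_weights: list[float]) -> float:
    """Aggregate deliberative input into one policy parameter.

    Each citizen supplies a weight in [0, 1] expressing, say, how much
    privacy should count relative to security. Taking the median is one
    simple aggregation rule; real deliberation would be far richer.
    """
    return median(citizen_weights)

def system_objective(security_score: float, privacy_score: float,
                     privacy_weight: float) -> float:
    # The deployed system optimizes a blend fixed by society, not by the vendor.
    return (1 - privacy_weight) * security_score + privacy_weight * privacy_score

if __name__ == "__main__":
    weight = elicit_societal_tradeoff([0.3, 0.7, 0.6, 0.5, 0.8])
    print(f"Societally chosen privacy weight: {weight}")
    score = system_objective(security_score=0.9, privacy_score=0.4,
                             privacy_weight=weight)
    print(f"Objective for a candidate design: {score:.2f}")
```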

Drawing on the concepts of HITL and SITL systems, the decision-making context can be conceptualized as three tiers, as seen in the figure below:

  1. the decision taker, who may or may not be human;
  2. the institution that is ultimately accountable for the decision; and
  3. the societal context in which that institution is operating.

[Figure: the three tiers of the decision-making context (decision taker, institution, societal context)]

The decision taker

The behaviour of the decision taker is shaped by a range of factors. These include intrinsic factors, such as the individual’s own values and beliefs, and extrinsic factors, such as the financial and social rewards or penalties faced by the individual as a result of the outcomes of the decision.

The institutional context

The goals of the institution, and the culture and internal incentives that determine how those goals are pursued, have a significant influence on the decision taker. Equally, the governance structure, transparency and accountability of the institution to wider stakeholders and society will in turn influence the institution’s internal goals, culture and incentive structures. Intermediating between the decision taker and the institution may be a suite of decision support tools, such as internal training, manuals or guides, expert systems, or other tools that help the decision taker manage data and follow a rules- or principles-based process for reaching a decision.

The societal context

Finally, both the individual and the institution will be influenced by societal context in terms of hard factors such as laws and regulations, and softer ones such as cultural norms, moral and religious belief systems, and sense of social cohesion and solidarity.

The public’s doubts about AI have yet to seriously impede the technological progress being made by companies and governments. Nevertheless, perceptions do matter; regardless of the benefits of AI, if people feel victimized by the technology rather than empowered by it, they may resist innovation, even if this means that they lose out on those benefits. The problem may be, in part, that individuals feel decisions about how technology is used in relation to them are increasingly beyond their control. Therefore, the solution may be, in part, making individuals feel part of the decisions about how technology is used in relation to them.

“And turn not your face away from people (with pride), nor walk in insolence through the earth.” (Qur’an 31:18)

Ayse Kok
Ayse completed her master’s and doctorate degrees at the University of Oxford and the University of Cambridge in the UK. She has participated in various projects in partnership with international organizations such as the UN, NATO, and the EU, and served as an adjunct faculty member at Bogazici University in her native Turkey. She is an editor of several international journals, including the IEEE Internet of Things Journal, the Journal of Network & Computer Applications (Elsevier), and the Journal of Information Hiding and Multimedia Signal Processing, and has served as guest editor for several international journals published by IEEE, Springer, Wiley, and Elsevier Science. She has spoken at various international conferences and published over 100 articles in peer-reviewed journals and academic books. She is also an organizing chair of several international conferences, a member of the technical committees of several others, and an active reviewer for many international journals as well as research foundations in Switzerland, the USA, Canada, Saudi Arabia, and the United Kingdom. Having published three books in the field of technology and policy, Ayse is a member of the IEEE Communications Society, the IEEE Technical Committee on Security & Privacy, the IEEE IoT Community, and the IEEE Cybersecurity Community. She also acts as a policy analyst for the Global Foundation for Cyber Studies and Research. She currently lives with her family in Silicon Valley and works for Google in Mountain View.
