
The Hidden Parameter of AI


Concepts such as intelligence and mind are difficult to define because they are, to some extent, in constant flux. This contrasts with other entities, such as biochemical compounds, which can be defined by listing specific required features, much like mathematical concepts.

In mathematics, it is possible to fix certain parameters for a definition, which also enables a level of abstraction. This level of abstraction acts as an invisible parameter underpinning the definition: every definition carries one implicitly. In other words, rather than defining a concept X in absolute terms by means of another concept Y (the Kantian perspective), X is defined contextually, as happens in Euclidean geometry, quantum physics, or commonsense perception.

Without a fixed level of abstraction, things can quickly become complicated. Returning to the earlier example of defining ‘intelligence’: different levels of abstraction may each make sense in a different context, yet an overarching, absolute definition remains elusive. Turing (1950) sidestepped this problem with his imitation game, in which an interrogator conducts a text-based dialogue through a computer interface and tries to tell machine from human; a computing system that cannot reliably be distinguished from a human under these conditions may be classified as intelligent.
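The spirit of the imitation game can be caricatured in a few lines of code. The sketch below is only an illustration, not Turing's protocol verbatim: `machine`, `human`, and `distinguish` are hypothetical callables standing in for the two respondents and the interrogator, and a machine ‘passes’ when the interrogator's success rate stays near chance.

```python
import random

def imitation_game(distinguish, machine, human, rounds=200, seed=0):
    """Toy model of the imitation game. Each round, the interrogator
    sees the question and two unlabeled replies, and returns the index
    (0 or 1) of the reply it believes came from the machine."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(rounds):
        question = "What is 2 + 2?"   # placeholder question
        replies = [machine(question), human(question)]
        order = [0, 1]
        rng.shuffle(order)            # hide which slot holds the machine
        shown = [replies[i] for i in order]
        guess = distinguish(question, shown)
        if order[guess] == 0:         # index 0 was the machine's reply
            hits += 1
    return hits / rounds              # a rate near 0.5 means indistinguishable

# A machine that gives itself away is identified every round:
naive = imitation_game(lambda q, shown: shown.index("MACHINE"),
                       machine=lambda q: "MACHINE",
                       human=lambda q: "4")

# If both respondents answer identically, guessing does no better than chance:
blind = imitation_game(lambda q, shown: 0,
                       machine=lambda q: "4",
                       human=lambda q: "4")
```

The point of the sketch is that intelligence is assessed here purely at the level of abstraction of the dialogue interface, exactly as the paragraph above describes.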

When developing the right level of abstraction, the following criteria should be taken into account:

  • (a) Interactivity: The individual and the environment (can) act upon each other.
  • (b) Autonomy: The individual can modify its own state without a direct response to the interaction from the environment.
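As a minimal illustration, the two criteria can be mirrored in a toy agent class. The class and method names below are invented for this example, not taken from any particular framework:

```python
class ToyAgent:
    """Toy agent exhibiting the two criteria above."""

    def __init__(self):
        self.state = 0

    def perceive(self, stimulus):
        # (a) Interactivity: the environment acts upon the agent.
        self.state += stimulus

    def act(self):
        # (a) Interactivity: the agent acts upon the environment
        # by emitting an output derived from its state.
        return self.state

    def tick(self):
        # (b) Autonomy: the agent modifies its own state without
        # any direct stimulus from the environment.
        self.state *= 2

agent = ToyAgent()
agent.perceive(3)   # environment -> agent
agent.tick()        # internal state change, no stimulus involved
```

The separation between `perceive`/`act` and `tick` is what matters: an entity with only the first pair would be merely reactive, while `tick` captures state change that the environment did not directly cause.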

Considering the ongoing debate over whether artificial agents (AAs) count as moral agents, these features are a good starting point. Some argue that such a concept is unacceptable; three standard objections, each open to a response, are the following:

  • The argument of intentionality: Even though an AA lacks intentional states, this need not disqualify it from moral agenthood. Intentionality would imply that parties count as moral only if they join the ‘moral game’ intentionally; yet whether they mean to play it, or even know that they are playing it, is not crucial in the first instance.
  • The argument of freedom: An AA might be denied freedom on the grounds that it is a deterministic system. Yet freedom need not mean non-determinism: an AA could have acted differently, given its capacity for interactivity and autonomy. Once an agent’s actions are morally qualifiable, it is not evident what else should be required for it to qualify as a moral agent, even if it acts unintentionally or unwittingly.
  • The argument of responsibility: It is often claimed that an AA cannot be held responsible for its actions. Yet responsibility only requires that the agent’s actions can be evaluated as praiseworthy or blameworthy. Ascribing responsibility often serves some pedagogical, educational, social, or religious end.

Given these arguments, it becomes challenging to answer questions such as which behavior in the cyber world counts as acceptable, or whom to hold accountable for the unacceptable behavior of moral agents.

According to the traditional view, only software engineers are to be held morally accountable, since, in contrast to artificial agents, they alone exercise free will. Although this may sound perfectly legitimate, a closer look at accountability reveals some grey areas.

To start with, software is mainly developed by teams, so management decisions may be at least as important as programming decisions. Requirements and specification documents also shape the finalized code. Moreover, in some cases the efficacy of automation software depends on additional factors such as its interface or system traffic. For example, software running on a system can interact with other software in unpredictable ways, or it can be downloaded at the click of an icon so that the user never has access to the code. All of this creates difficulties for the traditional view that only a human being can be held accountable, and the expanded view that AAs can be held morally accountable seems to be on the near horizon.

Last but not least, these issues of moral discourse in a non-human context bring us back to the concept of ‘distributed morality’, according to which all social and legal agents can qualify as moral. The only ‘cost’ of such a ‘mind-less morality’ approach is extending the class of moral agents to embrace AAs. That cost seems worth incurring, given that we are moving toward an advanced information society more rapidly than ever.

Ayse Kok
