
New Hopes for AI


The field of artificial intelligence (AI) encompasses a variety of technologies, ranging from image recognition to robotics and autonomous vehicles, each of which carries its own risks along with potential benefits. AI can be broadly defined as a category of technology offering superior information-processing capabilities. Given that these technologies are still in their infancy, it is no wonder that the associated risks are being recognized only slowly.

When analyzing the potential perils of AI, whether arising from intentional misuse or from accidental system failures, it is beneficial to adopt a structural perspective on the related risks. Because AI both shapes and is shaped by the environment in which it is deployed, the lack of a structural perspective can result in an unduly limited set of problems and solutions.

Adopting a structural perspective requires understanding the difference between agency and structure. While the misuse perspective focuses attention on changing the underlying incentives, motivations, or access of a malicious actor, the accident perspective focuses on developing the skills or attentiveness of an engineer. Both approaches focus on the individual agent, on the assumption that a behavioral change will reduce harm. The structural perspective, by contrast, starts from the premise that the risk level often remains unaltered even after an individual agent changes behavior. An avalanche offers a useful analogy: it is better prevented by asking what caused the slope to become so steep than by asking which specific event set it off.

In order to change our way of thinking about the risks and perils of AI, one of the main questions to ask is whether socio-economic or political structures could shift in ways that pressure decision-makers into risky or costly decisions, however well-intentioned or competent those decision-makers might be. This concern applies most clearly to the realm of security, where improved data collection and processing capabilities could allow a country's missile, submarine, and command-and-control systems to be tracked closely. Beyond security, the well-known potential of AI to produce monopolistic markets, through the increasing returns to scale enjoyed by major tech companies, and to displace labour is likewise linked to structural mechanisms.


As these examples show, the deployment of AI could harm society even when no misuse of the technology occurs. The negative consequences of AI would arise from changes to the environment that sets the stage for interaction among individuals.

Another question to ask in shifting our mindset toward structural thinking is whether socio-economic and political structures can be treated as major causes of the risks arising from AI, including in cases that appear to be accidents or misuse. To give a recent example, the fatal crash of an Uber self-driving car in early 2018 was initially assumed to have been caused by the vehicle's vision system, yet it later became apparent that the engineers had deliberately turned off the emergency braking so that the car's overly sensitive braking would not make it look bad in comparison with competitors. Safety was thus traded off under pressure from higher management, driven by the model's poor market prospects.

The lesson of this incident is that rather than focusing only on technical difficulties, we should also take into account the existing patterns of incentives within a given system. While increasing the number and capabilities of engineers would help to some extent, it would not resolve the issue unless the structural pressures, both internal (such as employees' career concerns) and external (such as market conditions), were also changed. In other words, technical changes and investments alone do not suffice to reduce safety risks. At the simplest level, structural interventions can be made through regulatory entities such as courts or legislatures, so that all relevant stakeholders are aware of their liabilities. Needless to say, accomplishing this requires that regulatory bodies have adequate resources and competence.

Given the fast-paced proliferation of technologies into our daily lives, it is hard to predict either the potential harms or the potential benefits of AI. Nevertheless, the structural perspective opens up new ways of thinking about the risks and perils of AI, along with those of other technologies.


Although such a shift in mindset might take some time to adopt, the process could be accelerated by taking into account the following aspects:

  • The community of AI policy experts should be expanded by inviting social scientists from different fields, in order to gain their insights into how negative outcomes could be averted by thinking differently. Adopting a structural perspective requires such an approach.
  • More time should be devoted to developing better collective norms and institutions for the field of AI, as recent efforts have focused on only a few countries and organisations. A unilateral approach will not work in the face of the crucial risks arising from AI. Again, a structural approach necessitates coordinating the actions of all stakeholders to provide better policy guidance for AI.

The fact that many of the risks in the field of AI have structural causes means that collective action is required in both the domestic and international arenas. Such a collective effort will only be realized when leaders become aware that most structural risks embed collective risks, which demand effort from all parties and may eventually lead to mutual gains. Gaining this awareness depends on realizing that in today's complex world everything, including nations' fates, is interdependent. This might sound like a difficult task to accomplish, yet it also offers some hope of improving AI for the better development of humanity.

Ayse Kok
Ayse completed her master's and doctorate degrees at the University of Oxford (UK) and the University of Cambridge (UK). She participated in various projects in partnership with international organizations such as the UN, NATO, and the EU. She also served as an adjunct faculty member at Bosphorus University in her home country, Turkey. Furthermore, she is the editor of several international journals, including those published by Springer, Wiley, and Elsevier Science. She has spoken at various international conferences and has published over 100 articles in peer-reviewed journals and academic books. Having published three books in the field of technology and policy, Ayse is a member of the IEEE Communications Society, the IEEE Technical Committee on Security & Privacy, the IEEE IoT Community, and the IEEE Cybersecurity Community. She also acts as a policy analyst for the Global Foundation for Cyber Studies and Research. She currently lives with her family in Silicon Valley, where she has worked as a researcher for companies such as Facebook and Google.
