
Principles over Populism: A Good Start for Ethics in AI


Google’s declaration of principles for AI is a short but carefully worded text covering the main issues related to the uses of its technology. The document is worth reading, given that it raises many questions about the future and about the rules we will need to guide us as the technology evolves.

The regulation of artificial intelligence is far from a new subject and is already being debated widely; Google, as a leading player in the field, is simply laying out its position after a long process of reflection. The company had been working on the statement of principles for some time, while continuing its work in other areas of AI.

In the wake of revelations about Google’s involvement in Project Maven, most media have interpreted the company’s statement of principles somewhat simplistically, along the lines of a promise that its AI won’t be used to develop weapons or in breach of human rights, although it is clear that the document has much more far-reaching intentions. Weapons are mentioned only briefly, in a section entitled “AI applications we will not pursue”, which is limited to saying that the company will not help develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. That said, it will continue its work “with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”

In the first place, the document underlines the significance of such reflection taking place in all areas, and of doing so in a well-informed, realistic way, without summoning up images of killer robots or superior intelligences able to sweep annoying humans aside: scenarios that will remain far from everyday reality for many years. For the moment, much of the discussion concerns applications that have more to do with deciding which products are offered to potential customers, pricing policies or detecting fraud, along with a growing number of similar uses, all of which may be less exciting than killer robots, but still carry major potential to get things wrong.

Among the most relevant points are “Be socially beneficial”; “Avoid creating or reinforcing unfair bias”; “Be accountable to people”; “Incorporate privacy design principles”; and “Be made available for uses that accord with these principles”, which implies preventing its use by those who do not respect them. Other especially important commitments include “Uphold high standards of scientific excellence” and “Be built and tested for safety”.

These are far more important commitments than whether the company will develop weapons or not. Many of the problems raised by the rapid rate of technological development stem not from potentially harmful objectives, but from mismanagement, inadequate security and procedural errors in a world where not everybody has the best intentions.

Naivety is no longer an excuse in the context of technologies that can be used for harm, and Google reaffirms its commitment to avoiding such harm, a commitment that goes far beyond “Don’t be evil.” This, of course, does not mean the company won’t make mistakes, but the commitment to submitting to rigorous processes and to trying to avoid errors at all costs is important.

Reflection on the ethical principles associated with the development of AI algorithms is important, and needs to take place in a reasoned manner. It makes no sense for those who do not understand machine learning and AI to be involved in drafting the ethical principles that will govern their future. This particularly applies to our politicians, many of whom are not qualified to comment on, much less legislate on, these issues. Those who do not understand the topic have a responsibility to learn about it or stay out of the debate.

Ethics in AI

It is one thing for Google to ponder the ethics of AI: it is one of the main players in the area, applies the technology to all its products, and is in the midst of an ambitious program to train its entire workforce in how to use it. It is quite another for a government, a supranational body or any other political organization to do so, given that in most cases their knowledge of the subject is at best superficial, and at worst nonexistent or alarmist. We’re going to see more and more discussion of this subject, but what interests me most is not the outcome, but the process and the intended consequences.

Asking questions about the future to avoid potentially negative or unwanted consequences can be useful, especially if done with Google’s rigor and discipline. Doing so based on unwarranted fears rooted in science fiction is more likely to get in the way of progress and humanity’s evolution: we need to guard against irrational fears, misinformation, and their close relatives, demagoguery and populism. Laying down meaningful principles for the development of artificial intelligence algorithms will be an important part of how our future plays out. AI is a question of principles, sure, but of well-founded principles as well.

Ayse Kok
Ayse completed her master’s and doctorate degrees at both the University of Oxford (UK) and the University of Cambridge (UK). She participated in various projects in partnership with international organizations such as the UN, NATO, and the EU, and served as an adjunct faculty member at Bogazici University in her home country, Turkey. She is the editor of several international journals, including the IEEE Internet of Things Journal, the Journal of Network & Computer Applications (Elsevier), and the Journal of Information Hiding and Multimedia Signal Processing, and has served as guest editor of several international journals published by IEEE, Springer, Wiley and Elsevier Science. She has spoken at various international conferences and published over 100 articles in peer-reviewed journals and academic books. She is an organizing chair of several international conferences, a member of the technical committees of several others, and an active reviewer for many international journals as well as for research foundations in Switzerland, the USA, Canada, Saudi Arabia, and the United Kingdom. Having published three books in the field of technology and policy, Ayse is a member of the IEEE Communications Society, the IEEE Technical Committee on Security & Privacy, the IEEE IoT Community and the IEEE Cybersecurity Community. She also acts as a policy analyst for the Global Foundation for Cyber Studies and Research. She currently lives with her family in Silicon Valley and works for Google in Mountain View.
