
Artificial intelligence (“AI”) permeates our lives in numerous subtle and not-so-subtle ways, which presents both practical and conceptual challenges for society. Many of the practical challenges stem from the manner in which AI is researched and developed and from the basic problem of controlling the actions of autonomous machines. The conceptual challenges arise from the difficulties in assigning moral and legal responsibility for harm caused by autonomous machines, and from the puzzle of defining what, exactly, artificial intelligence means.

The difficulty in defining artificial intelligence lies not in the concept of artificiality but rather in the conceptual ambiguity of intelligence. Because humans are the only entities that are universally recognized (at least among humans) as possessing intelligence, it is hardly surprising that definitions of intelligence tend to be tied to human characteristics. The late AI pioneer John McCarthy, who is widely credited with coining the term “artificial intelligence,” stated that there is no “solid definition of intelligence that doesn’t depend on relating it to human intelligence” because “we cannot yet characterize in general what kinds of computational procedures we want to call intelligent.”

Definitions of intelligence thus vary widely and focus on a myriad of interconnected human characteristics that are themselves difficult to define, including consciousness, self-awareness, language use, and the abilities to learn, to abstract, to adapt, and to reason. Today, the leading introductory textbook on AI, Stuart Russell and Peter Norvig’s “Artificial Intelligence: A Modern Approach,” presents eight different definitions of AI organized into four categories:

  1. thinking humanly;
  2. acting humanly;
  3. thinking rationally;
  4. acting rationally.

Russell and Norvig cite the works of computing pioneer Alan Turing, whose writings predated the coining of the term “artificial intelligence,” as exemplifying the “acting humanly” approach. In his now-seminal paper “Computing Machinery and Intelligence,” Turing said that the question “Can machines think?” was “too meaningless to deserve discussion.” Turing instead focused on the potential for digital computers to replicate, not human thought processes themselves, but rather the external manifestations of those processes. This is the premise of Turing’s “imitation game,” where a computer attempts to convince a human interrogator that it is, in fact, human rather than machine. Other early approaches to defining AI often tied the concept of intelligence to the ability to perform particular intellectual tasks. As a result, concepts of what constitutes artificial intelligence have shifted over time as technological advances allow computers to perform tasks that previously were thought to be indelible hallmarks of intelligence.

McCarthy defined intelligence as “the computational part of the ability to achieve goals in the world” and AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.” Russell and Norvig’s textbook uses the concept of a “rational agent” as its operational definition of AI, defining such an agent as “one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.” From a regulatory perspective, however, the goal-oriented approach does not seem particularly helpful because it simply replaces one difficult-to-define term (intelligence) with another (goal). In common parlance, a goal is synonymous with an intention. Whether and when a machine can have intent is more a metaphysical question than a legal or scientific one, and it is difficult to define “goal” in a manner that avoids requirements pertaining to intent and self-awareness without creating an over-inclusive definition. Consequently, it is not clear how defining AI through the lens of goals could provide a solid working definition of AI for regulatory purposes.
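
To make the “rational agent” definition concrete, the following minimal sketch (in Python) shows an agent that simply selects whichever action has the highest probability-weighted utility, which is the sense in which such an agent pursues “the best expected outcome” under uncertainty. The actions, outcome probabilities, and utility values are hypothetical and chosen purely for illustration.

```python
# Minimal sketch of a "rational agent" in the expected-utility sense:
# under uncertainty, choose the action whose expected utility is highest.
# All actions, probabilities, and utilities below are hypothetical.

ACTIONS = {
    # action: list of (probability, utility) pairs for its possible outcomes
    "brake":      [(0.9, 10), (0.1, -50)],
    "swerve":     [(0.6, 20), (0.4, -100)],
    "do_nothing": [(1.0, -30)],
}

def expected_utility(outcomes):
    """Probability-weighted average utility of an action's outcomes."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """Pick the action with the best expected outcome."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

if __name__ == "__main__":
    for name, outcomes in ACTIONS.items():
        print(f"{name}: expected utility = {expected_utility(outcomes):+.1f}")
    print("rational agent chooses:", rational_choice(ACTIONS))  # -> "brake"
```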

Humans, bounded by the cognitive limitations of their brains, cannot analyze all or even most of the information at their disposal when faced with time constraints. They therefore often settle for a satisfactory solution rather than an optimal one, a strategy that economist Herbert Simon termed “satisficing.” The computational power of modern computers (which will only continue to increase) means that an AI program can search through many more possibilities than a human can in a given amount of time, permitting AI systems to analyze potential solutions that humans may not have considered, much less attempted to implement.
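
As a rough illustration of the contrast Simon described, the short sketch below compares a “satisficer,” which accepts the first option that clears an aspiration level, with an exhaustive search that evaluates every option, the kind of search a machine with ample computational budget can afford. The candidate pool, scoring function, and aspiration level are invented for illustration only.

```python
# Hypothetical contrast between Herbert Simon's "satisficing" and
# exhaustive optimization. Candidates, scores, and the aspiration
# level are invented purely for illustration.
import random

random.seed(0)
CANDIDATES = [random.uniform(0, 100) for _ in range(10_000)]

def score(candidate):
    """Stand-in for how good a candidate solution is (here, its value)."""
    return candidate

def satisfice(candidates, aspiration_level=90.0):
    """Human-style strategy: accept the first option that is 'good enough'."""
    for c in candidates:
        if score(c) >= aspiration_level:
            return c
    return None  # nothing cleared the aspiration level

def optimize(candidates):
    """Machine-style strategy: evaluate every option and keep the best."""
    return max(candidates, key=score)

if __name__ == "__main__":
    print("satisficing settles for:", round(satisfice(CANDIDATES), 1))
    print("exhaustive search finds:", round(optimize(CANDIDATES), 1))
```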

As AI systems are not inherently limited by the preconceived notions, rules of thumb, and conventional wisdom upon which most human decision-makers rely, AI systems have the capacity to come up with solutions that humans may not have considered, or that they considered and rejected in favor of more intuitively appealing options.

The behavior of a learning AI system depends in part on its post-design experience, and even the most careful designers, programmers, and manufacturers will not be able to control or predict what an AI system will experience after it leaves their care. Thus, a learning AI’s designers will not be able to foresee how it will act once it is sent out into the world, although such unforeseeable behavior is what the designers intended, even if any specific unforeseen act was not. If legal systems choose to view the experiences of some learning AI systems as so unforeseeable that it would be unfair to hold the systems’ designers liable for harm that the systems cause, victims might be left with no way of obtaining compensation for their losses.
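
A toy example may help show why post-design experience matters: below, two identical copies of the same simple learner (a made-up “running average” rule, used only for illustration) end up behaving differently because they encounter different data after deployment, even though their designers wrote identical code.

```python
# Hypothetical illustration of how identical learning systems diverge
# once their post-design experiences differ. The learner and the data
# streams are invented purely for illustration.

class RunningAverageLearner:
    """Toy learner whose 'behavior' is just the average of what it has seen."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def observe(self, value):
        self.total += value
        self.count += 1

    def act(self):
        return self.total / self.count if self.count else 0.0

if __name__ == "__main__":
    copy_a = RunningAverageLearner()
    copy_b = RunningAverageLearner()

    # Same design, different post-design experience (made-up data streams).
    for v in [10, 12, 11, 13]:
        copy_a.observe(v)
    for v in [40, 45, 50, 42]:
        copy_b.observe(v)

    print("copy A acts on:", copy_a.act())  # 11.5
    print("copy B acts on:", copy_b.act())  # 44.25
```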

The risks created by the autonomy of AI encompass not only problems of foreseeability, but also problems of control. It might be difficult for humans to maintain control of machines that are programmed to act with considerable autonomy.

Despite the problematic features of AI, there is good reason to believe that legal mechanisms could be used to reduce the public risks that AI presents without stifling innovation.

Those who, like Elon Musk, believe that AI could pose an existential risk may favor more stringent government oversight of AI development. On the other hand, those who believe the public risks associated with AI to be manageable, and existential risk nonexistent, likely will oppose any government intervention in AI development.

We are entering an era where we will rely upon autonomous and learning machines to perform an ever-increasing variety of tasks. At some point, the legal system will have to decide what to do when those machines cause harm and whether direct regulation would be a desirable way to reduce such harm. This suggests that we should examine the benefits and drawbacks of AI regulation sooner rather than later.
