Asking AI the Wrong Questions and Getting a Frightening Result: Is It Your Fault?


AI has incredible promise for the future, but that future is compromised by a lack of critical thinking on the part of programmers and trainers.

Photo by Yuyeung Lau on Unsplash

AI has opened a world of possibilities, much like a new Garden of Eden in which we are the unknowing humans exploring an environment beyond our understanding. One central feature of this world we are creating must be the fallibility of us humans. It is this flaw over which we must show power, growth, and new competence, and we must remember one thing: we cannot know the answers if we do not know the questions.

Questions seem straightforward in our minds, and our egos blanch at the thought that we may have inexplicably missed a step somewhere in building our algorithms. But missteps are exactly what we are facing, and humility and better critical thinking are required to avoid untoward results. We are experiencing a new period of evolution.

Critical thinking is not only a prerequisite to becoming adept at algorithms; it may have been lost somewhere in the mix. What is one method of increasing this necessary skill?

The RED Model’s critical thinking framework has three main indicators: (1) Recognize Assumptions, (2) Evaluate Arguments, and (3) Draw Conclusions. The framework and its indicators are meant both to encourage the development of critical thinking skills and to measure them.

An overview of the RED Model with examples of business tasks requiring critical thinking can be accessed here. The downloadable PDF also includes 50 ideas for improving your critical thinking.

In addition to this model, there is a downloadable chart with questions and prompts that can guide your approach to any issue or problem.

When an Algorithm Gets It Wrong

The belief exists that computers and their algorithms are superior to humans because they can process, in moments, more information than we could in months or even years. Therein lies a major problem with our belief system and our bias regarding algorithms. When an algorithm is wrong, the result can be catastrophic for some individuals, as we’ve seen.

The algorithm at issue, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was designed to assess a defendant’s risk of recidivism — that is, the potential risk that the defendant will commit a crime in the future.

An extensive analysis of COMPAS’s methods and outcomes was run by ProPublica. After multiple statistical analyses were performed on the algorithm’s outputs and the datasets that may have been used, the results were quite revealing.

As might have been expected, “Black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism, and white recidivists were misclassified as low risk 63.2% more often than Black defendants.”

This is only one illustration of how misusing statistics can produce life-changing calculations that work against certain individuals.
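To make the quoted disparity concrete, here is a minimal sketch of how a fairness audit computes group-wise error rates from a confusion matrix. The counts below are invented for illustration only; they are not COMPAS or ProPublica data.

```python
# Toy illustration (invented numbers, NOT real COMPAS data) of how
# group-wise error rates are computed in a fairness audit.
# For each group we count:
#   fp = labeled high risk but did not reoffend (false positive)
#   fn = labeled low risk but did reoffend      (false negative)
#   tp = labeled high risk and did reoffend
#   tn = labeled low risk and did not reoffend

def error_rates(fp, fn, tp, tn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)   # share of non-reoffenders wrongly flagged high risk
    fnr = fn / (fn + tp)   # share of reoffenders wrongly rated low risk
    return fpr, fnr

# Hypothetical counts for two groups, chosen only to mirror the shape of the
# reported disparity (one group's false-positive rate double the other's,
# while the other group's false-negative rate is double in turn).
group_a = error_rates(fp=40, fn=15, tp=45, tn=60)
group_b = error_rates(fp=20, fn=30, tp=30, tn=80)

print(group_a)  # (0.4, 0.25)
print(group_b)  # (0.2, 0.5)
```

Note that both groups can see the same overall accuracy while their error rates point in opposite directions, which is exactly why auditors break the numbers out by group.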

Photo by Eva Blue on Unsplash

Valid or not, the COMPAS algorithm is used to provide quick, if not accurate, data on potentially violent offenders. It is a model of inefficiency, not efficiency. According to a TED Talk by Peter Haas, the reason for its use in criminal justice settings to predict future violence is, again, the backlog of cases that must be dispatched quickly.

Because the algorithm is privately held (and used in about 13 states for criminal justice decisions), the source code is shielded by trade secret law and is not available for inspection by anyone, which raises obvious concerns. Unbelievable as it may seem, no one outside the company knows what datasets were used to create COMPAS.

The criminal justice system is only one arena where algorithms can wreak untold havoc in people’s lives. Others include applying for a home loan, interviewing for a job, qualifying for government benefits, or even being stopped on the highway for a driving infraction.

Haas asked, “Would you want the public to be able to inspect the algorithm that’s trying to make a decision between a shopping cart and a baby carriage or a self-driving truck, in the same way that the dog/wolf algorithm was trying to decide between a dog and wolf?”

The latter distinction is notable because the algorithm made errors distinguishing dogs from wolves. The reason: in training, the model was shown wolves only against snowy backgrounds. Any dog-like animal not in a snowy background was classified as a dog. Critical thinking was the missing ingredient here.
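The dog/wolf failure can be sketched in a few lines of code. This is a deliberately simplified toy (invented features and data, not the actual classifier from the study): the learner is allowed to pick any single feature that perfectly separates the training labels, and because every training wolf is photographed in snow, it latches onto the background rather than the animal.

```python
# Toy sketch of the dog/wolf shortcut. Each example is
# (has_snow_background, looks_wolf_like) -> label. All data invented.
train = [
    ((1, 1), "wolf"),  # wolves: always photographed in snow
    ((1, 1), "wolf"),
    ((0, 0), "dog"),   # dogs: never in snow
    ((0, 1), "dog"),   # even a wolf-like dog, but no snow
]

def fit_single_feature(examples):
    """Return the index of the first feature that perfectly separates the labels."""
    for i in range(2):
        if all((x[i] == 1) == (y == "wolf") for x, y in examples):
            return i
    return None

feature = fit_single_feature(train)  # picks feature 0: snow, not the animal

def predict(x, feature):
    return "wolf" if x[feature] == 1 else "dog"

# Novel data the training set never covered: a husky (a dog) in snow.
print(predict((1, 0), feature))  # -> "wolf" (wrong: it's a dog)
```

The training accuracy here is a perfect 100%, which is precisely what makes the shortcut invisible until the model meets novel data, echoing the Clever Hans lesson discussed below.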

Clever Hans, 1904, Public Domain

Answers and Questions Asked

In her book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Kate Crawford uses a well-known story of illusion and bias (the horse Clever Hans) when discussing AI.

“The story of Hans is now used in machine learning as a cautionary reminder that you can’t always be sure of what a model has learned from the data it has been given. Even a system that appears to perform spectacularly in training can make terrible predictions when presented with novel data in the world.”

Regarding facial recognition algorithms, Crawford questions the wisdom of incorporating criminal mug shots into the mix. That mug shots happen to be available to feed into datasets suggests a frenzy to build a dataset rather than to question what should go into it.

Frantic image collection has become a mindless routine that is never questioned: the programs demand data, and the programs receive it.

Yet now it’s common practice for the first steps of creating a computer vision system to scrape thousands — or even millions — of images from the internet, create and order them into a series of classifications, and use this as a foundation for how the system will perceive observable reality. These vast collections are called training datasets, and they constitute what AI developers often refer to as “ground truth.”
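The scrape-classify-deploy pipeline described above can be reduced to a hedged sketch. Everything here is hypothetical (made-up URLs, no real scraping): the point is that the search query used to fetch an image is often promoted, unexamined, into the training label, which is then called "ground truth."

```python
# Hypothetical sketch of label assignment in a scraped dataset.
# (url, query) pairs stand in for a real scraper's output; URLs are made up.
scraped = [
    ("http://example.com/img1.jpg", "doctor"),
    ("http://example.com/img2.jpg", "doctor"),
    ("http://example.com/img3.jpg", "criminal"),  # a mug shot swept in wholesale
]

# The retrieval query becomes the class label, with no human check that
# the label actually describes the image.
dataset = [{"image": url, "label": query} for url, query in scraped]

labels = sorted({ex["label"] for ex in dataset})
print(labels)  # ['criminal', 'doctor'] -- now treated as "ground truth"
```

Any bias in how the queries were chosen, or in what the web returns for them, is baked into the dataset at this step and inherited by every model trained on it.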

Is the completed algorithm built from the Multiple Encounter Dataset and others ever questioned, and is there any weeding out of training images or data that may be inappropriate to the mission? In fact, what is the mission, and what biases have been incorporated into the collection? Is scraping the internet for images the most appropriate method of gathering data?

Could we even, at this stage, resolve these seminal issues? Who will be damaged by permitting these issues to go unresolved? What questions have not been asked and answered? Where have we failed?

Patricia Farrell is a licensed clinical psychologist in New Jersey and Florida in the United States, a published author, former psychiatric researcher, educator, and consultant to WebMD. She specializes in stress and medical illness and has been in the field for over 30 years. Prior to becoming a psychologist, Dr. Farrell held a number of editorial positions in trade magazine publishing and newspaper syndication. Her interests include photography, computers, and writing both fiction and non-fiction.
