Renowned technology strategist Clara Durodié discusses the European Union’s proposed new ethics regulations on artificial intelligence (AI) technology.
- The ethics of artificial intelligence have been subject to debate for decades. But with AI tools becoming commonplace in a number of industries, the lack of clarity around what’s “right” and “wrong” can leave both innovators and businesses wondering what to do next.
- The European Union recently announced a set of proposed regulations that address the risks of various AI technologies.
- Technology strategist Clara Durodié shares her insights on the EU’s plans as well as their implications for highly regulated industries like financial services.
Concerns about the ethical implications of artificial intelligence are nothing new. Isaac Asimov published the stories that would become the classic collection “I, Robot” in science-fiction magazines between 1940 and 1950. Ever since, a future where AI runs amok has continued to capture the popular imagination.
Yet the everyday reality of machine learning technologies is already here. And of course, there’s often significant daylight between ethics and the law.
Data scientists themselves frequently disagree on proper protocols for the ethical use of AI. But the visionaries (and the venture capitalists) of Silicon Valley tend to think the government should allow companies to develop emerging technologies unfettered by regulation. Consumer advocates usually disagree.
What does all this mean for AI-enabled applications for financial services and other highly regulated industries? Let’s take a virtual trip to Brussels and find out how the European Union plans to take on the challenge of ethical AI.
An April 21 proposal from the EU sets out a nuanced regulatory structure that includes outright bans on some AI and heavy regulation of other AI technologies it deems “high-risk.”
“At the end of the day, the data which we produce, which includes financial data, is our property,” says returning guest Clara Durodié, the author of “Decoding AI in Financial Services,” an AI tutor at the University of Oxford and the former chair of the U.K. Non-Executive Directors Board’s special committee on Best Practice for AI Adoption.
So any systems that learn from our data should be subject to scrutiny, right?
Two years toward ‘clarity’
The EU wanted to encapsulate European values: human rights, data privacy and the right for citizens “to be themselves and have some intimate space,” says Clara, who views the proposed regulations as welcome and necessary “to clarify a lot of gray areas and confusion.”
European member states are expected to finalize and ratify the regulations around 2023 or 2024. The rules include a 24-month transition period, giving companies time to adjust and align their internal policies for compliance.
But I wonder: Is 24 months enough?
“It’s never enough, quite frankly,” Clara admits. “Whether you give people 15 or 150 months, somehow things are left until the last minute. It’s a question of leadership –– planning in advance, preventing rather than dealing with a crisis.”
And these regulations, although issued by the European Union, apply to anyone looking to market AI-powered products to European citizens. So American companies should take note, because the requirements are extensive, Clara adds.
“But although they seem like a lot, they bring much-needed clarity and structure to how these technologies and systems are built, deployed and maintained.”
Unacceptably risky business
The EU regulators took a risk-based approach that classifies AI uses into four groups: “minimal,” “limited,” “high” and “unacceptable” risk.
The latter two are the primary focus of the proposed regulation, Clara explains.
“Unacceptable” technologies include the remote biometric identification of data subjects (people), as well as systems that are likely to cause physical or psychological harm through the use of subliminal techniques, or exploit the vulnerabilities of protected classes of people (like those under or over certain ages or who have disabilities). Under these plans, they will be banned outright.
And the EU doesn’t see these new rules as static. Regulations on high-risk AI systems will be reviewed, amended and/or extended on an ongoing basis.
That “gives some room for what people always say,” she says. “That for any piece of regulation, there are unplanned side effects.”
The big three high risks for finance
There are three areas of the proposed EU regulation that are of immediate relevance to financial services firms, as well as technology companies that serve the finance industry.
The following systems are permitted –– but firms must comply with strict, detailed obligations “around risk management, data quality, technical documentation, human oversight, transparency, robustness, accuracy and security,” says Clara.
Recruitment and hiring
AI applications designed to advertise vacancies, screen applications and evaluate candidates have all been shown to exhibit socioeconomic, cultural, racial, ethnic or gender biases.
Creditworthiness and credit scoring
There’s “a lot of pushback, even in Europe, when it comes to listing credit scoring as a high-risk AI system,” Clara notes. “But what has become apparent is that when credit scoring systems are run, some of them incorporate ‘alternative data’ that has proven to be a blatant infringement of privacy and data protection laws.”
So-called “alternative data” like cash flow data and utility bill payments have been touted as a solution to racial bias in credit scoring in the U.S. But Clara thinks that in Europe, it may be a bit too similar to the kinds of “social scoring” we see in China, where minor infractions like jaywalking can affect citizens’ ability to access credit and even employment.
Monitoring/evaluating work performance and behavior
Clara refers to this category as “employee monitoring for compliance purposes or algorithmic management.”
A few years ago, when Clara was writing her book, she was “triggered” to study this further after seeing the then-CEO of IBM, Ginni Rometty, speak at Davos.
“She was very keen to explain to the world how IBM’s latest AI system enables companies to predict when employees might leave their organizations,” Clara says. “It’s just surveillance packaged as staff retention policy.”
As a veteran of a corporate environment herself, she finds it “very stressful to feel like one is being monitored” on the job. And while most workers in the U.S. basically expect that to be the case, she says Europeans think differently.
“Anywhere you go in Europe, no one would feel comfortable to be under that level of surveillance,” she says. That’s why this kind of AI technology is precisely the kind of thing the EU seeks to regulate –– because it’s counter to “European values.”
‘Frames’ of reference
Critics would have us believe that regulations like those the EU proposed will stifle innovation. But Clara flatly disagrees.
“I’m not against innovation,” she says. “Far from it. But I propose a different way to frame the narrative. Sometimes innovation happens when we are constrained.”
That’s the thesis of “Framers,” a new book by The Economist’s Kenneth Cukier and big-data experts Viktor Mayer-Schönberger and Francis de Véricourt, which Clara recommends to her clients and anyone interested in cognitive psychology.
Their idea is that human perspectives and prejudices, our “frames,” can be used as tools to help us make better decisions. (Among other topics, the authors discuss how the #MeToo hashtag reframed the perception of sexual assault, and how New Zealand’s framing of Covid-19 as akin to SARS, rather than the seasonal flu, kept the nation largely safe from the pandemic.)
To extend the analogy, Clara sees regulation as a “frame” for good.
“What other industry has reinvented itself more successfully than fintech?” she asks. “What is another industry that has reinvented itself so successfully with technology? It’s healthcare.”
Those are two highly regulated industries, she points out.
“I think this proposed regulation provides the minimum necessary to frame a business model going forward, and provides structure into how we should run the processes that build these systems and maintain them,” she says.
“I don’t believe this regulation will stifle innovation. Far from it. If anything, innovation will flourish in Europe.”
This article is based on an episode of Tech on Reg, a podcast that explores all things at the intersection of law, technology and highly regulated industries. Be sure to subscribe for future episodes.