
The Case for Making AI Human-Centric

Given the proliferation of AI into our socio-technological environments, we as AI researchers should also consider how the justice system can contribute (and be used) to ensure the inclusion and rights of marginalized individuals, such as people with disabilities, in the development and application of AI systems.
In a way, the justice system of the Western world is designed to take power systems into account; it has been used to protect and provide for marginalized groups in Western society.
On the other hand, there are clear concerns regarding the justice system, such as the cost of starting legal proceedings and the difficulty of creating laws that are relevant and forward-thinking enough to protect people both now and in the future.
For instance, how can we ensure that a white, blind man can get the lawyer job he wants?
Moreover, a contrast should be drawn between “fairness” and “justice” frameworks: the current fairness framework focuses on equality and presumes universality, while the justice framework considers power systems (and the inherent inequality that comes with them), aims for equitable opportunity, and centers the most marginalized.

Distributive Justice in Machine Learning (ML)

One could argue for a capabilities-based metric for fairness (capabilities in the sense of “the states of being free to…”) as opposed to a resource-based metric, since capabilities are what matter to people, whereas resources are merely the means.
To give a specific example, money to buy a wheelchair might not be helpful if the environment is not wheelchair-friendly; what matters is the freedom to move.
Some ways to develop capability-based metrics in ML include, but are not limited to, the following (a rough sketch of the contrast follows this list):
    • Close collaboration with the community, leveraging methods such as user-centered/participatory design
    • Generating a context-specific list of capabilities
    • Equalizing capabilities across community members
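
As a rough illustration of this contrast, the Python snippet below compares a resource-based check with a capability-based check for a simple mobility scenario. It is a hypothetical sketch only: the group names, scores, and the 0.0–1.0 “capability” scale are illustrative assumptions, not an established metric.

    # Hypothetical sketch: contrasting a resource-based view with a
    # capability-based view of fairness in a mobility scenario.
    # Group names, scores, and the "capability" scale are illustrative
    # assumptions, not an established metric.

    from statistics import mean

    people = [
        # "resources"  = budget allocated to each person;
        # "capability" = how freely the person can actually move around
        #                their environment, on an illustrative 0.0-1.0 scale.
        {"group": "wheelchair_user", "resources": 500, "capability": 0.40},
        {"group": "wheelchair_user", "resources": 500, "capability": 0.50},
        {"group": "non_disabled",    "resources": 500, "capability": 0.95},
        {"group": "non_disabled",    "resources": 500, "capability": 0.90},
    ]

    def group_means(records, key):
        """Average the given field within each group."""
        groups = {}
        for record in records:
            groups.setdefault(record["group"], []).append(record[key])
        return {group: mean(values) for group, values in groups.items()}

    # A resource-based metric sees perfect equality: everyone received
    # the same budget.
    print("mean resources per group:   ", group_means(people, "resources"))

    # A capability-based metric exposes the gap in the actual freedom to
    # move, which is what a justice-oriented approach would try to equalize.
    print("mean capabilities per group:", group_means(people, "capability"))

In this toy example the resource-based view reports no disparity at all, while the capability-based view surfaces exactly the gap that a justice framework would center on.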

Human-centered Goals

A more crucial question to ask is: how do we build AI systems that serve people?
We, as a community, are too good at building standardized datasets and metrics that shield us from anything “subjective”.
Eventually, the development of such systems becomes a numbers game, and their goal and purpose are lost in the process.
We should also critically examine whom we are serving and what kind of values are embedded in the systems we build.
For example, consider VR games equipped with gaze detection and face recognition to “train” people with ASD to make eye contact and respond with the “appropriate” intensity of emotion.
Although it is presented as “assistive” technology for the neurodiverse, it carries strong ableist assumptions, and the real audience for such technology is not the ASD community but the therapists and medical professionals who already have the power and authority to “fix” people, for better or worse.
Without a deep connection with the target community, problems can arise even when fairness is top of mind. One example is the acclaimed best paper from ICML ’18, in which the authors claimed to contribute to ML fairness by training a sentence auto-completion model that performs as well on “African American English” as on “Standard American English”.
Ironically, none of the authors or the evaluators recruited on Mechanical Turk were African American, which raises the questions “who is asking for and benefiting from such a system?” and “is this really a good or valuable experience for African Americans?”
It is widely acknowledged that marginalized people are underrepresented in the training and evaluation of AI systems, and thus often have a degraded experience with such systems.
A few potential solutions to overcome these conflicts include, at the user end, differential privacy, on-device data, and informed consent, while at the system end, mapping outliers to the center becomes crucial.
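
As a minimal sketch of the differential-privacy idea mentioned above (not a method proposed in this article), the Python snippet below releases a simple count through the Laplace mechanism; the survey data, epsilon value, and function names are illustrative assumptions.

    # Minimal sketch of the Laplace mechanism behind differential privacy:
    # release an aggregate statistic with calibrated noise so that no single
    # respondent's answer can be inferred from the output. The survey data,
    # epsilon value, and function names are illustrative assumptions.

    import numpy as np

    def private_count(true_count: int, epsilon: float = 1.0) -> float:
        """Laplace mechanism for a counting query.

        A count changes by at most 1 when one person is added or removed
        (sensitivity = 1), so adding noise drawn from Laplace(0, 1/epsilon)
        yields epsilon-differential privacy.
        """
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Example: report how many respondents in a small accessibility survey
    # use a screen reader, without exposing any individual's answer.
    uses_screen_reader = [True, False, True, True, False, True]
    noisy = private_count(sum(uses_screen_reader), epsilon=0.5)
    print(f"noisy count: {noisy:.2f} (true count: {sum(uses_screen_reader)})")

Smaller epsilon values add more noise and give individuals stronger protection, at the cost of a less accurate released statistic.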
The AI community should also be cautious about over-protecting marginalized individuals out of paternalism, and should instead argue for the agency of such individuals to decide whether their data is used in AI systems.
The focus for the AI community should be on “designing with” rather than “designing for” marginalized communities.
Ayse Kok
Ayse completed her masters and doctorate degrees at both University of Oxford (UK) and University of Cambridge (UK). She participated in various projects in partnership with international organizations such as UN, NATO, and the EU. She also served as an adjunct faculty member at Bosphorus University in her home country, Turkey. Furthermore, she is the editor of several international journals, including those for Springer, Wiley and Elsevier Science. She attended various international conferences as a speaker and published over 100 articles in both peer-reviewed journals and academic books. Having published 3 books in the field of technology & policy, Ayse is a member of the IEEE Communications Society, member of the IEEE Technical Committee on Security & Privacy, member of the IEEE IoT Community and member of the IEEE Cybersecurity Community. She also acts as a policy analyst for Global Foundation for Cyber Studies and Research. Currently, she lives with her family in Silicon Valley, where she has worked as a researcher for companies like Facebook and Google.
