Given the proliferation of AI throughout our socio-technical environments, we as AI researchers should also consider how the justice system can contribute to (and be used for) ensuring the inclusion and rights of marginalized individuals, such as people with disabilities, in the development and application of AI systems.
In a sense, the justice system of the Western world is designed to take power structures into account; it has been used to protect and provide for the marginalized groups of Western society.
On the other hand, there are clear concerns regarding the justice system, such as the cost of initiating legal proceedings and the difficulty of crafting laws that are both relevant and forward-thinking enough to protect people now and in the future.
For instance, how can we ensure that a blind white man can get the lawyer job he wants?
Moreover, a contrast should be drawn between “fairness” and “justice” frameworks: the current fairness framework focuses on equality and presumes universality, while the justice framework considers power systems (acknowledging the inherent inequality that comes with them), seeks equitable opportunity, and centers the most marginalized.
Distributive Justice in Machine Learning (ML)
One could argue for a capabilities-based metric for fairness (capabilities in the sense of “the states of being free to ...”) as opposed to a resource-based metric: capabilities are what matter to people, while resources are merely the means.
To give a specific example, money to buy a wheelchair might not be helpful if the environment is not wheelchair-friendly; what matters is the freedom to move.
Some ways to develop capability-based metrics in ML include, but are not limited to:
- Close collaboration with the community, leveraging methods such as user-centered/participatory design
- Generating a context-specific list of capabilities
- Equalizing capabilities across community members
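The wheelchair example above can be made concrete in code. The following is a minimal, hypothetical sketch of the resource-based versus capability-based distinction; all names (`Person`, `can_move_freely`, the budget threshold) are invented for illustration and do not come from any real library or the text.

```python
# Hypothetical sketch: a resource-based vs. a capability-based equality check.
from dataclasses import dataclass

@dataclass
class Person:
    budget: float            # resources (e.g., money for a mobility aid)
    env_accessible: bool     # is this person's environment wheelchair-friendly?

def resource_equalized(group: list) -> bool:
    """Resource-based view: everyone holds the same budget."""
    return len({p.budget for p in group}) == 1

def can_move_freely(p: Person) -> bool:
    """Capability: money translates into mobility only if the
    environment is accessible (threshold is an illustrative assumption)."""
    return p.budget >= 500 and p.env_accessible

def capability_equalized(group: list) -> bool:
    """Capability-based view: everyone has the same freedom to move."""
    return len({can_move_freely(p) for p in group}) == 1

group = [Person(500, True), Person(500, False)]
print(resource_equalized(group))    # True: equal money
print(capability_equalized(group))  # False: unequal freedom of movement
```

The two people are indistinguishable under the resource metric, yet only one of them can actually move freely; a capability-based metric surfaces exactly this gap.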
A more crucial question to ask is: how do we build AI systems that serve people?
We, as a community, are too good at building standardized datasets and metrics that shield us from anything “subjective”.
Eventually, the development of such systems becomes a numbers game, and the goal and purpose of the systems are lost in the process.
We should also critically examine whom we are serving and what values are embedded in the systems we build.
For example, consider VR games equipped with gaze detection and face recognition to “train” people with ASD to make eye contact and respond with the “appropriate” level of emotional intensity!
Although declared an “assistive” technology for neurodiverse people, it carries strong ableist assumptions, and the real audience of such technology is not the ASD community but the therapists and medical professionals who already have the power and authority to “fix” people, for better or worse.
Without a deep connection with the target community, problems can arise even when we have fairness at the top of our minds. One example is the acclaimed best paper from ICML ’18, in which the authors claimed to contribute to ML fairness by training a sentence auto-completion model that performs as well on “African American English” as on “Standard American English”.
Ironically, none of the authors or the evaluators recruited on MT was African American, which raises the questions “who is asking for and benefiting from such a system?” and “is this really a good or valuable experience for African Americans?”
It is widely acknowledged that marginalized people are underrepresented in the training and evaluation of AI systems, and thus often have a degraded experience with such systems.
A few potential mitigations of these conflicts include, on the user end, differential privacy, on-device data, and informed consent; on the system end, mapping outliers to the center becomes crucial.
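Of the user-end mitigations, differential privacy is the most mechanical, so a brief sketch may help. The following shows the Laplace mechanism, one standard way to realize differential privacy for aggregate statistics; the dataset and epsilon value are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the Laplace mechanism for releasing a
# differentially private count.
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(values: list, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy: the true
    count plus Laplace noise of scale sensitivity / epsilon."""
    return len(values) + laplace_noise(sensitivity / epsilon)

opted_in = [1] * 42  # e.g., 42 users consented to share their data
print(dp_count(opted_in, epsilon=1.0))  # roughly 42; any individual is masked
```

Lower epsilon means more noise and stronger privacy; the point is that aggregate statistics remain usable while no single person's participation can be inferred from the released number.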
The AI community should also be cautious about over-protecting marginalized individuals out of paternalism, and should instead argue for the agency of such individuals to decide whether their data is used in AI systems.
The focus for the AI community should be on “design with” rather than “design for” marginalized communities.