
Algorithms for Fairness

Algorithmic bias is on many people’s minds.
Software engineers care about algorithmic bias because we care about fairness. Yet, fairness is a complex social issue, and software engineers or developers can’t really assess or address algorithmic bias issues without first understanding the broader fairness considerations of the product they are powering with AI.
They must therefore approach fairness through a best-practices process referred to as “fairness by design.”
Software engineers have a responsibility to ensure the products they build are fair. Fairness is not a new problem, nor is it primarily an AI or technology problem: it is a social problem, which needs to be addressed through critical and proactive thinking, and by following best practices in product design.
Because fairness is a social problem, fairness discussions often involve trade-offs that are ultimately political: should fairness be prioritized for one group rather than another?
Does fairness mean focusing on procedural consistency for everyone or striving for equal outcomes across some subgroups?
At their heart, these questions don’t seem to have universally right answers: they are political and reflect individual values. Because of this, the debates may never be fully resolved, so what’s needed is a process for making decisions transparently and with the necessary stakeholders at the table.
Fairness by design (FbD) refers to the process for making decisions explicitly, and with the necessary stakeholders at the table. It is a collaborative, dynamic process for surfacing the right hard questions, practically resolving those issues, and recording the process transparently and with accountability.
FbD requires critical thinking about product goals and the design decisions that go into implementation, placing particular emphasis on making conscious decisions about how and to whom the tech community wants to be fair.
By making the process systematic, FbD aims to embed fairness into the core of product development. Developing this mindset deserves particular focus for AI-driven systems, not just because it is a recent mandate for tech communities, but because AI magnifies and makes more obvious the impact that tech products can have on fairness.
The aims of FbD are critical, but limited.
FbD does not tell product teams what fairness is; fairness means different things in different contexts.
As discussed above, how someone interprets fairness is, at its core, a social or political question and debate.
In some cases the stakes are not high and the questions can be resolved without much debate. However, issues such as “what is misinformation” or “what content should be moderated on our platform” will be hotly debated, and get to the core of some of the hardest questions that companies and society have to face.
It is highly likely that the tech community will benefit from future regulation which can address these questions more holistically.
FbD won’t solve all those problems for the tech community. As mentioned above, FbD involves surfacing hard questions, providing a framework for resolving them in context, and recording those decisions for transparency.
Further, even when a fairness definition has been agreed upon for a given task, there may still be innovations that would allow the tech community to be fairer.
Asking someone to build the fairest system is like asking someone to build the fastest plane; there is always more that can be done with innovation and investment.
Building fair systems is an iterative and ongoing process, not a one-off go/no-go checklist after which no further consideration is needed. Even when key improvements are made to a system’s fairness, that does not mean its issues have been fully addressed.
FbD should aim to achieve three things: surface the right (hard) questions about fairness, provide a process for resolving these questions, and record this process and the decisions involved.

Surface the right (hard) questions about fairness

How should engineers define fairness for this product? To whom should they be fair? How should they balance conflicting priorities? These decisions are always made when AI systems are developed, but they are often embedded in technical choices. FbD aims to make sure they are made reflectively and on the basis of explicit reasoning.

Provide a process for resolving these questions

There should be a bridge between stakeholders, such as product and policy teams. Best-practice analysis and mitigation procedures offer a consistent way to inspect and analyze AI-driven systems, to align on an approach to fairness, and to implement it efficiently.

Record the process and the decisions involved

This should enable better product design, as well as smoother and speedier product launches through precedents and case studies.

Applying FbD allows the tech community to build internal and external trust by transparently breaking down how to think about the impact of its work on fairness. To implement FbD, the following steps can be followed:
  1. Understand the product goal
  2. Align on a fairness definition
  3. Document the relevant system components
  4. Measure fairness at links between system components
  5. Mitigate sources of unfairness that were identified
  6. Incorporate fairness measurement and mitigation in future product development cycles
Let’s have a look at each of these steps in more detail:
Step 1: Understand the product goal
The reason to do this explicitly is that individuals carry different assumptions about the goal, and if those are never made explicit, later discussions about fairness can be clouded by a lack of alignment on the product goal.
Step 2: Align on a fairness definition
There are many fairness questions that could be addressed, depending on who is considered and how they are grouped. The focus should be on agreeing which fairness principles to pursue in a given product context.
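To make this alignment concrete, here is a minimal sketch, assuming binary predictions and a single sensitive attribute, of two common statistical fairness criteria. The function names and toy data are illustrative, not from any standard library; choosing between criteria like these is exactly the product-context decision this step describes.

```python
# A minimal sketch of two common statistical fairness criteria, assuming
# binary predictions and a single sensitive attribute. The function names
# and toy data are illustrative, not from any standard library.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    by_group = defaultdict(list)
    for pred, group in zip(predictions, groups):
        by_group[group].append(pred)
    rates = [sum(preds) / len(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

def equal_opportunity_gap(predictions, labels, groups):
    """Largest difference in true-positive rate (recall) between any two groups."""
    true_pos, actual_pos = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        if label == 1:
            actual_pos[group] += 1
            true_pos[group] += pred
    recalls = [true_pos[g] / actual_pos[g] for g in actual_pos if actual_pos[g] > 0]
    return max(recalls) - min(recalls)

# The two criteria can disagree on the same system, which is why teams
# must explicitly agree on which definition matters in their product context.
preds  = [1, 1, 0, 0, 1, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))          # 0.0  (parity holds)
print(equal_opportunity_gap(preds, labels, groups))   # ~0.17 (recall differs)
```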
Step 3: Document the relevant system components
There are four high-level components present in most AI systems: ground truth (system goal), labels (approximation of system goal used to train model), predictions, and interventions.
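As an illustration only, and assuming a hypothetical content-ranking product, the four components could be documented in a lightweight record like the sketch below; all field values are made up for the example.

```python
# An illustrative sketch, assuming a hypothetical content-ranking product,
# of documenting the four high-level components in a lightweight record.
from dataclasses import dataclass

@dataclass
class SystemComponents:
    ground_truth: str    # the real-world goal the system is meant to serve
    labels: str          # the proxy signal used to train the model
    predictions: str     # what the model actually outputs
    interventions: str   # the product action taken on the prediction

feed_ranking = SystemComponents(
    ground_truth="show each user the content they find most valuable",
    labels="historical click and dwell-time data from logged sessions",
    predictions="per-item engagement probability from the ranking model",
    interventions="re-order the feed and demote items below a score threshold",
)
print(feed_ranking)
```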
Step 4: Measure fairness at links between system components
For label fairness, this could be assessing whether labelers may be introducing bias into the system. For models, this would be assessing whether the algorithm itself could be introducing bias. There are emerging best practices for how to do this in several common use cases, but the specifics will be customized to a particular product and definition of subgroup and fairness.
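Below is a hedged sketch of measuring one possible metric, the per-subgroup positive rate, at two links in the pipeline: labels versus ground truth (label bias) and predictions versus labels (model bias). The helper names and toy data are assumptions; a real product would substitute the fairness definition agreed in Step 2.

```python
# A sketch of measuring fairness at two links in the pipeline: labels vs.
# ground truth (label bias) and predictions vs. labels (model bias).
# The positive-rate comparison is only one possible metric.
from collections import defaultdict

def positive_rate_by_group(values, groups):
    totals, counts = defaultdict(float), defaultdict(int)
    for value, group in zip(values, groups):
        totals[group] += value
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in counts}

def per_group_gap(upstream, downstream, groups):
    """How much each group's positive rate shifts between two adjacent components."""
    up = positive_rate_by_group(upstream, groups)
    down = positive_rate_by_group(downstream, groups)
    return {g: down[g] - up[g] for g in up}

ground_truth = [1, 1, 0, 1, 1, 0, 1, 0]
labels       = [1, 0, 0, 1, 1, 0, 1, 0]   # labelers missed one positive in group "a"
predictions  = [1, 0, 0, 0, 1, 0, 1, 0]   # model missed a further positive in group "a"
groups       = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(per_group_gap(ground_truth, labels, groups))   # label bias per group
print(per_group_gap(labels, predictions, groups))    # model bias per group
```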
Step 5: Mitigate sources of unfairness that were identified
If an issue was found at one of the design stages, it needs to be addressed. How this is done will be specific to the particular context and the nature of the problem identified. In some cases there may be straightforward fixes, such as collecting more representative data, while others may require more extensive research, ML model retraining, and experimentation to understand the root cause of the bias issue.
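As one example of a straightforward mitigation, the sketch below reweights training examples so that an under-represented subgroup contributes as much total weight as the others (inverse-frequency reweighting). This is only one technique, under the assumption that the issue is representation in the data; other issues may call for relabeling, model changes, or deeper research.

```python
# One possible mitigation when the issue is under-representation in the
# training data: reweight examples so each subgroup contributes equal
# total weight. A sketch of a single technique, not a general fix.
from collections import Counter

def inverse_frequency_weights(groups):
    counts = Counter(groups)
    n_groups, n_total = len(counts), len(groups)
    # Each group's weights sum to n_total / n_groups.
    return [n_total / (n_groups * counts[g]) for g in groups]

groups = ["a"] * 6 + ["b"] * 2
weights = inverse_frequency_weights(groups)
print(weights)  # group "a" examples get ~0.67, group "b" examples get 2.0
# These weights could then be passed as sample weights to most training APIs.
```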
Step 6: Incorporate fairness measurement and mitigation in future product development cycles
Up until this step the evaluation is just at a given point in time. As the system develops and its inputs change, the assessment of its fairness might change. This step sets a plan for ongoing fairness analyses.
This step is made easier if the infrastructure used to analyze fairness at Step 4 can be reused in the future.
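As a sketch of what that reuse could look like, the check below wraps the per_group_gap helper (and toy data) from the Step 4 sketch into a repeatable release gate; the 0.1 threshold is an illustrative assumption, not a recommended value.

```python
# A sketch of a repeatable release gate that reuses the per_group_gap
# helper and toy data from the Step 4 sketch above.
def fairness_regression_check(upstream, downstream, groups, threshold=0.1):
    """Fail the check if any group's rate shifts by more than the threshold."""
    gaps = per_group_gap(upstream, downstream, groups)  # defined in the Step 4 sketch
    violations = {g: gap for g, gap in gaps.items() if abs(gap) > threshold}
    return len(violations) == 0, violations

passed, violations = fairness_regression_check(labels, predictions, groups)
print("passed" if passed else f"fairness regression in groups: {violations}")
```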

In summary

FbD is a relatively new idea and the approach to executing it is still a work-in-progress.
Fairness depends on context: being fair to candidates for a college application is different from being fair to candidates for an engineering position, and there may be different norms or expectations around what kind of diversity or balance is expected.
These norms will change over time as society becomes aware of new fairness issues. Fairness in content moderation might require a different approach from fairness in facial recognition.
At its core, fairness is a social problem. To properly address fairness, the tech community needs to take a holistic view of fairness for AI-powered products, and they need to acknowledge that their choices related to fairness are not neutral.
While fairness is not primarily an AI problem, AI and other scalable technologies magnify and clarify the impact of our decisions on fairness. This journey is just beginning, and FbD will hopefully support the responsible development of AI systems.
Ayse Kok
Ayse completed her masters and doctorate degrees at both University of Oxford (UK) and University of Cambridge (UK). She participated in various projects in partnership with international organizations such as UN, NATO, and the EU. She also served as an adjunct faculty member at Bosphorus University in her home town Turkey. Furthermore, she is the editor of several international journals, including those for Springer, Wiley and Elsevier Science. She attended various international conferences as a speaker and published over 100 articles in both peer-reviewed journals and academic books. Having published 3 books in the field of technology & policy, Ayse is a member of the IEEE Communications Society, member of the IEEE Technical Committee on Security & Privacy, member of the IEEE IoT Community and member of the IEEE Cybersecurity Community. She also acts as a policy analyst for Global Foundation for Cyber Studies and Research. Currently, she lives with her family in Silicon Valley where she worked as a researcher for companies like Facebook and Google.
