
How to Design for Fairness with Machine Learning?


Machine learning is increasingly being used to predict individuals’ attitudes, behaviors, and preferences across an array of applications, ranging from personalized marketing to precision medicine. Unsurprisingly, given the speed of change and ever-increasing complexity, there have been several examples of machine learning gone wrong. To give a specific example, a chatbot trained on Twitter data was shut down after only a single day because of its obscene and inflammatory tweets.

Bias can manifest itself in many forms across various stages of the machine learning process, including data collection, data preparation, modeling, evaluation, and deployment:

  • Sampling bias may produce models trained on data that is not fully representative of future cases.
  • Performance bias can exaggerate perceptions of predictive power, generalizability, and performance homogeneity across data segments.
  • Confirmation bias can cause information to be sought, interpreted, emphasized, and remembered in a way that confirms preconceptions.
  • Anchoring bias may lead to over-reliance on the first piece of information examined.

So how can bias in machine learning be mitigated? Borrowing from the concept of “privacy by design,” popularized by the EU’s General Data Protection Regulation (GDPR), a “fairness by design” strategy can be employed as follows:

  1. Pairing data scientists with social scientists:

Data scientists and social scientists tend to speak different languages. To a data scientist, “bias” has a particular technical meaning: it refers to systematic error in a model’s predictions, as in the bias-variance tradeoff. Similarly, a model’s “discriminatory power” refers to the extent to which it can accurately differentiate classes of data. When social scientists talk about bias or discrimination, however, they are more likely to be referring to questions of equity. Social scientists are generally better equipped to provide a humanistic perspective on fairness and bias.

Making data scientists collaborate with social scientists would facilitate a better awareness of demographic biases that might creep into the machine learning process.

  2. Annotating with caution:

Training labels for unstructured data such as text and images are often generated by human annotators, who provide the structured category labels that are then used to train machine learning models. For instance, annotators can mark which images contain people, or which texts express positive versus negative sentiment.

Although the quality of annotation is adequate for many tasks, human annotation is inherently prone to a plethora of culturally ingrained biases.
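
One way to surface such biases, offered here as a rough sketch rather than a complete audit, is to have multiple annotators label the same items and measure how much they agree. The example below uses scikit-learn’s `cohen_kappa_score` on two hypothetical annotators’ sentiment labels:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators for the same ten texts (1 = positive, 0 = negative).
annotator_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Cohen's kappa corrects raw agreement for chance. Low values suggest the task is
# ambiguous or that annotators interpret it differently, which is often where
# culturally ingrained bias creeps in.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Inter-annotator agreement (Cohen's kappa): {kappa:.2f}")
```

Comparing agreement across annotator backgrounds, rather than only in aggregate, can make systematic disagreements easier to spot.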

  3. Combining traditional machine learning metrics with fairness measures:

The performance of machine learning classification models is typically measured using a small set of well-established metrics that focus on overall performance, class-level performance, and all-around model generalizability. However, these can be augmented with fairness measures designed to quantify machine learning bias. Such key performance indicators are essential for garnering situational awareness — as the saying goes, “if it cannot be measured, it cannot be improved.”

Important fairness measures include true positive, false positive, true negative, and false negative rates computed within and across demographic segments, as well as the model’s level of reliance on demographic variables. Segments with disproportionately high false positive or false negative rates may be prone to over-generalization.
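
As a rough illustration of such measures, the sketch below (assuming binary predictions and a pandas DataFrame with hypothetical `y_true`, `y_pred`, and `segment` columns) computes the confusion-matrix rates for each demographic segment so that outlier segments can be flagged:

```python
import pandas as pd

def segment_confusion_rates(df, label_col="y_true", pred_col="y_pred", segment_col="segment"):
    """Per-segment true/false positive/negative rates for a binary classifier.

    Column names are illustrative placeholders; adapt them to your own data.
    """
    rows = []
    for segment, grp in df.groupby(segment_col):
        tp = ((grp[pred_col] == 1) & (grp[label_col] == 1)).sum()
        fp = ((grp[pred_col] == 1) & (grp[label_col] == 0)).sum()
        tn = ((grp[pred_col] == 0) & (grp[label_col] == 0)).sum()
        fn = ((grp[pred_col] == 0) & (grp[label_col] == 1)).sum()
        rows.append({
            "segment": segment,
            "tpr": tp / (tp + fn) if (tp + fn) else float("nan"),
            "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
            "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
            "tnr": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Segments whose false positive or false negative rates deviate sharply from the
# rest are candidates for closer inspection:
# rates = segment_confusion_rates(predictions_df)
# print(rates.sort_values("fnr", ascending=False))
```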

  4. Balancing representativeness with critical mass constraints for sampling:

For data sampling, the age-old mantra has been to ensure that samples are statistically representative of the future cases that a given model is likely to encounter. While on the surface this seems intuitive and acceptable (there are always going to be more- and less-common cases), issues arise when certain demographic groups are statistical minorities in a dataset.

Essentially, machine learning models are incentivized to learn patterns that apply to large groups in order to become more accurate; if a particular group is not well represented in the data, the model will not prioritize learning about it. It may therefore be necessary to significantly oversample cases from certain demographic groups to ensure that the critical mass of training samples needed to meet the fairness measures exists.
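
As a minimal sketch of this idea (assuming a pandas DataFrame with a hypothetical `segment` column and an arbitrary threshold), under-represented segments can be oversampled with replacement until each reaches a chosen critical mass:

```python
import pandas as pd

def oversample_to_critical_mass(df, segment_col="segment", min_count=1000, random_state=42):
    """Oversample under-represented segments (with replacement) up to a minimum count.

    The column name and threshold are illustrative, not prescriptive.
    """
    pieces = []
    for _, grp in df.groupby(segment_col):
        if len(grp) < min_count:
            extra = grp.sample(n=min_count - len(grp), replace=True, random_state=random_state)
            grp = pd.concat([grp, extra])
        pieces.append(grp)
    # Shuffle so oversampled rows are not clustered together.
    return pd.concat(pieces).sample(frac=1.0, random_state=random_state)

# train_df = oversample_to_critical_mass(train_df, min_count=5000)
```

Whether simple oversampling, synthetic sampling, or reweighting is most appropriate depends on the model and on the fairness measures being targeted.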

  5. Remembering to de-bias when building the model:

Several methods for de-biasing exist:

  • One approach for de-biasing is to completely strip the training data of any demographic cues, explicit and implicit.
  • Another approach is to build fairness measures into the model’s training objectives, for instance, by “boosting” the importance of certain minority or edge cases.

A third option is to train separate models within demographic segments that are algorithmically identified as highly susceptible to bias. For example, if segments A and B are prone to superfluous generalizations, learning patterns within these segments provides some degree of demographic homogeneity and alleviates majority/minority sampling issues, thereby forcing the models to learn alternative patterns.
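
As one possible sketch of the first two approaches (column names, segment labels, and the boost factor are hypothetical), explicit demographic columns can be dropped from the feature matrix and under-represented segments up-weighted during training via scikit-learn’s `sample_weight` argument. Note that implicit demographic cues, i.e. proxy variables, require additional analysis to remove:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_debiased(X_df, y, demographic_cols, segment, boost=3.0):
    """Drop explicit demographic features and up-weight minority-segment examples.

    `demographic_cols`, `segment`, and `boost` are illustrative placeholders;
    proxy (implicit) demographic cues are not handled here.
    """
    # 1) Strip explicit demographic cues from the feature matrix.
    X = X_df.drop(columns=demographic_cols)

    # 2) Up-weight examples from segments smaller than the median segment size.
    counts = segment.value_counts()
    minority = counts[counts < counts.median()].index
    weights = np.where(segment.isin(minority), boost, 1.0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=weights)
    return model
```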

Fairness by design isn’t about prioritizing political correctness above model accuracy. With careful consideration, it can allow individuals and entities to develop high-performing models that are both accurate and conscionable. Buying into the idea of fairness by design entails examining different parts of the machine learning process from alternative vantage points, using competing theoretical lenses.

Making fairness a guiding principle in machine learning projects results not only in fairer models, but in better ones, too.

