Project BIAS

Bias and Discrimination in Big Data and Algorithmic Processing. Philosophical Assessments, Legal Dimensions, and Technical Solutions

How can we create fair machine learning models?

Motivation

AI techniques based on big data and algorithmic processing are increasingly used to guide decisions in important societal spheres, including hiring, university admissions, loan granting, and crime prediction. They are applied by search engines, Internet recommendation systems, and social media bots, influencing our perceptions of political developments and even of scientific findings. However, there are growing concerns about the epistemic and normative quality of AI evaluations and predictions. In particular, there is strong evidence that algorithms may sometimes amplify rather than eliminate existing bias and discrimination, and thereby have negative effects on social cohesion and on democratic institutions.

BIAS is an interfaculty research initiative composed of experts from philosophy, law, and computer science, bringing together epistemological and ethical, legal and technical perspectives.

Our shared research question is: How can we ensure that big data analysis and algorithm-based decision-making are unbiased and nondiscriminatory? To this end, we provide philosophical analyses of relevant concepts and principles, investigate their utilisation in pertinent legal frameworks, and develop technical solutions such as debiasing strategies and discrimination detection procedures.
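To give a flavour of what a discrimination detection procedure can look like, the sketch below computes the statistical parity difference, a common fairness metric comparing positive-outcome rates between a protected group and everyone else. The data, group labels, and function name are illustrative assumptions for this example, not the project's actual method or code.

```python
def statistical_parity_difference(predictions, protected):
    """P(y_hat = 1 | protected) - P(y_hat = 1 | not protected).

    A value near 0 suggests parity; a negative value means the
    protected group receives positive outcomes less often.
    """
    pos_prot = [y for y, p in zip(predictions, protected) if p]
    pos_rest = [y for y, p in zip(predictions, protected) if not p]
    rate_prot = sum(pos_prot) / len(pos_prot)
    rate_rest = sum(pos_rest) / len(pos_rest)
    return rate_prot - rate_rest

# Illustrative hiring decisions (1 = hired) for applicants in a
# protected group (True) vs. everyone else (False).
preds     = [1, 0, 0, 1, 1, 1, 0, 1]
protected = [True, True, True, True, False, False, False, False]

print(statistical_parity_difference(preds, protected))  # 0.5 - 0.75 = -0.25
```

Here the protected group is hired at a rate of 0.5 against 0.75 for the rest, a gap a debiasing strategy would then aim to reduce.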

For further information, please visit the project's website.

Involved people from our group

  • Prof. Dr. Eirini Ntoutsi
  • M.Sc. Arjun Roy