Courses

Toon Calders (Univ. of Antwerp, Belgium)

Introduction to Fairness in Machine Learning

Decisions made through predictive algorithms sometimes reproduce inequalities that are already present in society. Is it possible to create a data mining process that is aware of fairness? Are algorithms biased because humans are? Or is this the way machine learning works at its most fundamental level?
In this lecture I will give an overview of some of the main results in fairness-aware machine learning, the research field that tries to answer these questions. 

We will review several measures of bias and discrimination in data and models, such as demographic parity, equality of opportunity, calibration, individual fairness, and direct and indirect discrimination. Even though strong arguments can be found in favor of each of these measures, we will show that they cannot be combined in a meaningful way. In addition to these methods for quantifying discrimination, we also cover several “fairness interventions” proposed in the last decade that aim to make algorithms fair. These techniques include pre-processing techniques such as biased sampling, in-processing techniques that embed fairness constraints deep in the learning algorithm, and post-processing techniques that make trained models fair.
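To make two of these measures concrete, here is a minimal sketch (assuming a binary sensitive attribute; the function and variable names are illustrative, not from the lecture materials): demographic parity compares positive-prediction rates across groups, while equality of opportunity compares true-positive rates.

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        """Absolute difference in positive-prediction rates between two groups."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

    def equal_opportunity_gap(y_true, y_pred, group):
        """Absolute difference in true-positive rates between two groups."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        pos = y_true == 1
        return abs(y_pred[pos & (group == 1)].mean() - y_pred[pos & (group == 0)].mean())

    # Toy data for a classifier that favours group 1:
    y_true = [1, 1, 0, 1, 1, 0, 0, 1]
    y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
    group  = [1, 1, 1, 1, 0, 0, 0, 0]
    print(demographic_parity_gap(y_pred, group))         # 0.75
    print(equal_opportunity_gap(y_true, y_pred, group))  # 1.0

A fair model would drive both gaps toward zero; the impossibility results mentioned above say that achieving this for all such measures simultaneously is, in general, not possible.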

Sihem Amer-Yahia (CNRS, Univ. Grenoble Alpes)

Fairness on Online Labor Markets

Abstract

Online labor markets are increasingly becoming a destination for work. These marketplaces include freelancing platforms such as Qapa and MisterTemp’ in France, and TaskRabbit and Fiverr in the USA. On these platforms, workers can find temporary jobs in the physical world, such as moving furniture, or virtual micro-gigs, such as helping to design a website. I will present the results of a study of fairness on these platforms, and discuss the design of a model to study discrimination and fairness in the Future of Work.

Fairness in Rankings and Recommenders: Models, Methods and Research Directions

Abstract

We increasingly depend on a variety of data-driven algorithmic systems to assist us in many aspects of life. Search engines and recommender systems, among others, are used as sources of information and to help us make all sorts of decisions, from selecting restaurants and books to choosing friends and careers. This has given rise to important concerns regarding the fairness of such systems. In this work, we aim to present a toolkit of definitions, models, and methods used for ensuring fairness in rankings and recommendations. Our objectives are three-fold: (a) to provide a solid framework on a novel, quickly evolving, and impactful domain, (b) to present related methods and put them into perspective, and (c) to highlight open challenges and research paths for future work.
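One family of methods in this toolkit is post-processing a scored candidate list so that every ranking prefix keeps a minimum share of protected-group items. The following is a minimal sketch in that spirit (the function name, the min_share parameter, and the greedy rule are my assumptions, not a specific published algorithm):

    import math

    def fair_rerank(items, k, min_share=0.3):
        """Greedily build a top-k list, forcing in the best remaining protected
        item whenever the protected share of the prefix would drop below min_share.

        items: list of (score, is_protected) pairs, sorted by score descending.
        """
        protected = [it for it in items if it[1]]
        others    = [it for it in items if not it[1]]
        ranking, n_prot = [], 0
        for pos in range(1, k + 1):
            required = math.floor(min_share * pos)  # min protected items in top-pos
            if protected and (n_prot < required or not others):
                ranking.append(protected.pop(0)); n_prot += 1
            elif others and (not protected or others[0][0] >= protected[0][0]):
                ranking.append(others.pop(0))
            elif protected:
                ranking.append(protected.pop(0)); n_prot += 1
            else:
                break  # ran out of items
        return ranking

    # Example: with min_share=0.5, every prefix of the top 4 stays half protected.
    cands = [(0.9, False), (0.85, False), (0.8, True), (0.7, False), (0.6, True)]
    print(fair_rerank(cands, k=4, min_share=0.5))

The design choice here is the classic fairness/utility trade-off: the greedy rule keeps scores as high as possible subject to the representation constraint.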

Sophia Kypraiou (NGO Women at the Table & EPFL, CH)

AI & Equality: Coding Toolbox


Ricardo Baeza-Yates (Northeastern Univ, USA)

Bias in the Web

The Web is the most powerful communication medium and the largest public data repository that humankind has created. Its content ranges from great reference sources such as Wikipedia to ugly fake news. Indeed, social (digital) media is just an amplifying mirror of ourselves. Hence, the main challenge for search engines and other websites that rely on web data is to assess the quality of that data. However, as all people have their own biases, web content as well as our web interactions are tainted with many biases. Data bias includes redundancy and spam, while interaction bias includes activity and presentation bias. In addition, algorithms sometimes add bias, particularly in the context of search and recommendation systems. As bias generates bias, we stress the importance of debiasing data, as well as using context and other techniques, such as explore & exploit, to break the filter bubble. The main goal of this talk is to make people aware of the different biases that affect all of us on the Web. Awareness is the first step toward fighting and reducing the vicious cycle of web bias. For more details, see the article of the same title in Communications of the ACM, June 2018.
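The explore & exploit idea mentioned above can be sketched in a few lines (a hypothetical epsilon-greedy illustration, not code from the talk): with small probability the system shows a random item rather than the empirically best one, so under-exposed items keep receiving feedback and the popularity feedback loop is weakened.

    import random

    def epsilon_greedy_pick(clicks, views, epsilon=0.1):
        """clicks, views: dicts mapping item -> counts observed so far."""
        if random.random() < epsilon:
            return random.choice(list(views))  # explore: uniform over the catalogue
        # exploit: highest smoothed click-through rate (Laplace smoothing so
        # barely-shown items are not stuck at an estimated rate of zero)
        return max(views, key=lambda it: (clicks.get(it, 0) + 1) / (views[it] + 2))

    views  = {"a": 1000, "b": 500, "c": 50}   # "c" has had little exposure
    clicks = {"a": 100, "b": 60, "c": 2}
    print(epsilon_greedy_pick(clicks, views))  # usually "b", occasionally random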

Carlos Castillo (Universitat Pompeu Fabra, Spain)

Disparate effects of recommender systems

Abstract:

This talk presents recent empirical results on the disparate effects of recommender systems. We consider two scenarios. The first is a real-world mobile app for the real-estate market, where we can observe how users respond to the introduction of different recommender systems, and in particular whether various groups gain or lose visibility with each model update. The second is a link-based recommender system that can be used either for whom-to-follow or for what-to-watch-next recommendations; here the approach is simulation-based. In both cases we observe how different recommender systems can shape a platform and apportion visibility to users and content in ways that differ drastically from one model to another. The talk describes joint work with David Solans, Francesco Fabbri, Yanhao Wang, Caterina Calsamiglia, Michael Mathioudakis, and Francesco Bonchi.
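A toy simulation (my own sketch, far simpler than the simulation framework used in the work described) shows the underlying mechanism: a popularity-driven recommender amplifies a small initial visibility gap, while a uniform baseline washes it out.

    import random

    def simulate(steps=10000, recommend="popular", seed=0):
        """Two items start with a small popularity gap; return item 0's final
        share of total views under the given recommendation policy."""
        random.seed(seed)
        views = [55, 45]  # item 0 starts slightly ahead
        for _ in range(steps):
            if recommend == "popular":      # rich-get-richer: show the current leader
                shown = 0 if views[0] >= views[1] else 1
            else:                           # uniform baseline
                shown = random.randrange(2)
            views[shown] += 1
        return views[0] / sum(views)

    print(simulate(recommend="popular"))  # gap amplified toward 1.0
    print(simulate(recommend="uniform"))  # gap washed out toward 0.5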

Jahna Otterbacher (Open Univ, Cyprus)

It’s about time…and perspective: A critical look at proprietary computer vision algorithms and the data practices behind them

Abstract:

Computer vision algorithms have recently come under intense scrutiny for their tendency to discriminate against people of color and women. In this talk, we’ll take a critical look at the data practices behind the creation of such algorithms, and specifically the use of paid micro-task crowdsourcing in building image datasets. First, I will present examples of our ongoing work on auditing the “social behaviors” of popular commercial image-analysis services, in which we compare how algorithms and crowdworkers describe the same standardized set of images of people. Next, I present a follow-up study in which we replicated the same crowdsourcing task, prompting workers to describe the same images, 18 months later, during the difficult times of 2020. The results clearly illustrate the inherent temporal sensitivities, with significant variation surrounding the themes of racial identity and health. In concluding the talk, we’ll discuss ways of promoting more responsible generation and use of crowdsourced datasets.
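A minimal sketch of the comparison step in such an audit (illustrative only; the actual studies use richer, theme-level analyses): measure the overlap between the tags an image-analysis API returns and the descriptions crowdworkers give for the same image.

    def tag_overlap(api_tags, crowd_tags):
        """Jaccard similarity between an API's tags and crowd descriptions."""
        a = {t.lower() for t in api_tags}
        c = {t.lower() for t in crowd_tags}
        return len(a & c) / len(a | c) if a | c else 0.0

    print(tag_overlap(["person", "smile", "outdoor"],
                      ["person", "woman", "smiling", "park"]))  # ~0.17, low overlap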


Bettina Berendt (TU Berlin, Ger)

Algorithmic Fairness: On visibilities and invisibilities

Artificial Intelligence and other algorithms have been criticised for being biased and discriminating against groups and individuals, and many “de-biasing” methods have been proposed in the literature. Counteracting algorithmic bias has become a topic in recent laws and law proposals. In this talk, I propose to take a look “under the hood” in order to see how algorithmic systems can discriminate and what can be done against this. In particular, I will investigate the role of transparency and opacity in order to answer questions such as: does the “hiding” of information (such as demographics) safeguard against algorithmic discrimination, or can hiding even make things worse? Does “transparency” with respect to algorithms and data safeguard against algorithmic discrimination, or does this need to be complemented by other methods? And where does one need to “look” anyway to see evidence of discrimination and find ways of counteracting it?
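The first of these questions has a well-known negative answer: a model trained without the sensitive attribute can still reconstruct it from correlated proxies (the “redlining” effect). A tiny synthetic sketch, assuming made-up data and using a proxy feature hypothetically called zipcode, makes this concrete.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)              # sensitive attribute (never shown to the model)
    zipcode = group + rng.normal(0, 0.3, n)    # proxy feature strongly tied to group
    label = (group + rng.normal(0, 0.5, n) > 0.5).astype(int)

    # Train WITHOUT the sensitive attribute, using only the proxy feature.
    model = LogisticRegression().fit(zipcode.reshape(-1, 1), label)
    pred = model.predict(zipcode.reshape(-1, 1))

    # Positive-prediction rates still differ sharply by (hidden) group.
    print(pred[group == 1].mean(), pred[group == 0].mean())

The two rates diverge even though the model never saw the group attribute, so hiding demographics alone does not safeguard against algorithmic discrimination.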

Panel Discussion: Fairness beyond CS: Legal, ethical, societal aspects 

Julia Stoyanovich (New York Univ, USA)

Building Data Equity Systems

Abstract

Equity is a social concept: it is about treating people differently, depending on their endowments and needs, to provide equality of outcome rather than equality of treatment. In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential requirement. I will discuss ongoing technical work on fairness, diversity, and transparency, and will place this work into the broader context of policy, education, and public outreach.

Short quote:  “Equity is about treating people differently depending on their endowments and needs.  How do we build data-intensive systems with equity as an essential requirement?”

Contact us

gec.ws2022@gmail.com

Development and registration

Christos Tsapelas

Follow us on social media

Dissemination and social media strategy

Spyridoula-Alexia Giouroukou