Innovation

When AI allocates scarce resources, randomization can improve fairness

The researchers explore when adding randomization to a model’s decisions can make the allocation of scarce resources fairer.

Organizations are increasingly using machine-learning models to allocate limited resources or opportunities, such as selecting job candidates for interviews or prioritizing patients for kidney transplants based on survival likelihood.

To ensure fairness in these models’ predictions, users often try to minimize bias by adjusting the features used in decision-making or calibrating the model’s scores. However, researchers from MIT and Northeastern University argue that these approaches may not fully address structural injustices and inherent uncertainties. In a new paper, they suggest that incorporating randomization into a model’s decisions can enhance fairness in certain contexts.

For instance, if multiple companies use the same machine-learning model to rank job candidates without any randomization, a deserving individual might consistently rank at the bottom across all companies due to how the model interprets answers from an online form. Introducing randomization can help prevent this, ensuring that one person or group isn’t perpetually denied opportunities like job interviews.

The researchers found that randomization can be particularly beneficial when there is uncertainty in a model’s decisions or when the same group consistently faces negative outcomes. They propose a framework for introducing a specific amount of randomization into a model’s decisions by distributing resources through a weighted lottery. This approach can enhance fairness without compromising the model’s efficiency or accuracy.
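
The simplest version of such a weighted lottery is easy to illustrate. The short Python sketch below uses hypothetical scores and a proportional-to-score rule; it is not the authors’ implementation, which calibrates how much randomization to apply. It contrasts deterministic top-k selection with a score-weighted lottery when many companies rely on the same model.

# A minimal sketch (hypothetical numbers, not the paper's code): compare
# deterministic top-k selection with a score-weighted lottery when many
# companies rank candidates using the same model scores.
import numpy as np

rng = np.random.default_rng(0)
scores = np.array([0.95, 0.90, 0.88, 0.40, 0.35])  # same model scores at every company
n_companies, k = 1000, 2                           # each company interviews k candidates

def weighted_lottery(scores, k, rng):
    # Draw k winners without replacement, with chance proportional to score.
    p = scores / scores.sum()
    return rng.choice(len(scores), size=k, replace=False, p=p)

deterministic = np.zeros(len(scores))
randomized = np.zeros(len(scores))
for _ in range(n_companies):
    deterministic[np.argsort(scores)[-k:]] += 1        # always the same two people
    randomized[weighted_lottery(scores, k, rng)] += 1  # everyone has a nonzero chance

print("interviews per candidate, deterministic:", deterministic)
print("interviews per candidate, weighted lottery:", randomized)

In the deterministic run, the same two candidates receive every interview and the others receive none; the lottery spreads interviews roughly in proportion to the scores, so lower-scored candidates are not shut out everywhere.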

“Even if you could make fair predictions, should decisions about allocating scarce resources or opportunities be based solely on scores or rankings? As these algorithms scale, the inherent uncertainties in the scores can become more pronounced. We demonstrate that fairness may necessitate some degree of randomization,” says Shomik Jain, a graduate student at the Institute for Data, Systems, and Society (IDSS) and lead author of the paper.

This work builds on a previous paper where the researchers examined the harms that can occur when deterministic systems are used at scale. They found that using machine-learning models to deterministically allocate resources can amplify inequalities present in the training data, reinforcing biases and systemic inequities.

“Randomization is a valuable concept in statistics and, to our satisfaction, meets fairness demands from both systemic and individual perspectives,” says senior author Ashia Wilson.

In the new paper, the researchers ask when randomization can improve fairness, drawing on the ideas of philosopher John Broome, who wrote about the value of using lotteries to allocate scarce resources in a way that respects individuals’ claims.

A person’s claim to a scarce resource, like a kidney transplant, can be based on merit, deservingness, or need. For example, everyone has a right to life, and their claim to a kidney transplant may arise from that right, Wilson explains.

