Guest Column | February 17, 2022

Fixing The Patient Data Collection Process To Solve Bias In Precision Medicine

By Jackie Baek and Vivek Farias, Massachusetts Institute of Technology

We’ve seen a big push to apply machine learning tools to make predictions about what will happen — and, ultimately, decisions about what to do. However, a significant concern with every application of machine learning is fairness. If machine learning algorithms make decisions based on data, what happens if a specific group is under- or overrepresented in that data?

This is a big problem in personalized medicine, where existing data overwhelmingly represents people of European descent. Health systems don't explicitly set out to collect data only from this group, but for a variety of socioeconomic reasons, this is the demographic that tends to end up in these studies. Genes are heavily intertwined with one's ancestry, and genetic information plays an important role in determining targeted treatments, so a machine learning model built on such data sets can be fundamentally biased. The treatment recommended for patients underrepresented in the data may not be optimal, and it could even cost them their lives.

The same is true in other areas such as criminal justice, where machine learning models predict recidivism and influence jail time. And in recruiting, machine learning models look for potential new hires based on data about successful past hires, which can close the door on candidates who weren't considered in the past because of their race, gender, or background.

A lot of work in the past five years has focused on addressing fairness issues by fixing the algorithms. However, the root cause of these issues often lies not in the algorithms but in the data. We therefore approached the problem from a different angle: fixing the processes that collect the data.

Exploration Is Necessary To Acquire Better Data

From prior work, we know that acquiring better data requires some amount of exploration. In other words, when we build an algorithm to make decisions, from time to time the algorithm should "try something new" that is not quite optimal given the data collected so far. This sort of exploration is known to be necessary to eventually build a better data set.
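As a concrete, deliberately simplified illustration of exploration (our own sketch in Python, not the method used in any particular study), an "epsilon-greedy" rule mostly recommends the treatment that looks best given the data so far, but with a small probability tries an alternative so that the data set keeps improving:

    import random

    def epsilon_greedy(estimated_benefit, epsilon=0.05):
        """Mostly pick the best-looking treatment; occasionally explore an alternative.

        estimated_benefit: dict mapping treatment name -> current estimate of benefit.
        epsilon: probability of exploring (trying a treatment that is not the current best).
        """
        best = max(estimated_benefit, key=estimated_benefit.get)
        if random.random() < epsilon and len(estimated_benefit) > 1:
            # Explore: try something other than the current best guess.
            return random.choice([t for t in estimated_benefit if t != best])
        # Exploit: use the treatment the data currently favors.
        return best

    # Hypothetical example: three candidate dose levels with current benefit estimates.
    print(epsilon_greedy({"low_dose": 0.61, "medium_dose": 0.74, "high_dose": 0.58}))

In a sketch like this, epsilon is the knob that controls how much exploration happens overall.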

This sounds great, but it can be risky. In medicine, who wants to be the patient who “tries out” a new treatment? It turns out that if we are not careful, exploration can be conducted in a very unfair manner.

We retroactively examined a study that used machine learning to determine the optimal personalized dosage of warfarin, a blood thinner commonly used to treat blood clots. The optimal dosage varies widely among patients, and an incorrect dose can have severe adverse effects. When we divided up the patients in that study by race, we saw that one group, white patients, benefited dramatically from exploration done by other groups. That is, non-white patients were assigned to "try something new," and the results of that exploration significantly benefited white patients. The reverse effect was much weaker: other groups did not benefit nearly as much from exploration by white patients.

We need to explore to build better data sets for precision medicine systems, but we also need to make sure exploration is shared across groups (racial groups, in this case) in a fair, or "balanced," way. No one group of patients should be assigned to "try something new" disproportionately more often than the others.

Past approaches have developed algorithms for exploration, but they have not considered fairness in choosing which patients do the exploring. In our study, we found that existing algorithms explore in a systematically unfair way. In particular, these algorithms deliberately explore with patients whose "outside option" is the worst; these are the patients who incur the smallest opportunity cost from trying something new. Even without exploration, the patients with the worst outside options are, in some sense, already the most "disadvantaged" group. Existing algorithms then disadvantage this group further by systematically exploring using only such patients. The result is a maximally unfair outcome: exploration is assigned only to the patients who are worst off to begin with, and all other patients benefit from the exploration done on these patients.
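To make the notion of an "outside option" concrete, here is a small sketch with made-up numbers (our own illustration, not taken from any study). A system that simply minimizes the immediate cost of exploring will always pick the group whose best known treatment is worst, because those patients give up the least by trying something new:

    # Hypothetical expected benefit of each group's best *known* treatment (the "outside option"),
    # and a guess at the benefit of a new, untested treatment. Numbers are illustrative only.
    best_known = {"group_A": 0.80, "group_B": 0.55}
    new_option_guess = 0.60

    # Opportunity cost of exploring = what a patient gives up by skipping their best known option.
    opportunity_cost = {g: best_known[g] - new_option_guess for g in best_known}
    # -> {"group_A": 0.20, "group_B": -0.05}

    # A cost-minimizing explorer always sends the same group to "try something new":
    # the group whose outside option is already the worst.
    print(min(opportunity_cost, key=opportunity_cost.get))  # group_B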

A New Approach For Fair Exploration

To fix this problem, we went back to an influential 1950 work by John Nash on cooperative bargaining: the problem of deciding how multiple parties should share a "surplus" gained from cooperation. In our setting, exploration generates a surplus. For example, when a patient tries a new medicine, the system learns about the efficacy of that medicine, and this information benefits everyone. Nash characterized a solution to this problem that allocates to each party its "fair share" of the surplus. Applying Nash's solution to our setting tells us how exploration should be distributed among different groups of patients.
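For readers curious about the mathematics, Nash's solution selects the division of the surplus that maximizes the product of each party's gain over its "disagreement point," i.e., what it would get with no cooperation at all. The following toy sketch, with made-up numbers, finds that division by brute force:

    # Two groups divide a total benefit of 1.0. Each has a disagreement point: what it gets
    # with no cooperation. Nash's solution maximizes the product of the gains over those points.
    total = 1.0
    disagreement = {"A": 0.10, "B": 0.30}   # illustrative numbers only

    best_split, best_product = None, -1.0
    for i in range(101):                     # brute-force search over candidate splits
        u_a = total * i / 100.0              # group A's share of the total benefit
        u_b = total - u_a                    # group B's share
        gain_a = u_a - disagreement["A"]
        gain_b = u_b - disagreement["B"]
        if gain_a < 0 or gain_b < 0:         # no group accepts less than its outside option
            continue
        if gain_a * gain_b > best_product:
            best_split, best_product = (u_a, u_b), gain_a * gain_b

    print(best_split)  # (0.4, 0.6): each group keeps its outside option plus half the leftover surplus

In this simple example the answer is intuitive: hold each group at its outside option and split what remains evenly. The exploration setting is more involved, but the principle of maximizing the product of each group's gain is the same.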

The result is a new algorithmic system for making decisions, such as assigning treatments to patients. The algorithm explores, so we can collect a better and more diverse data set over time. But it also explores in a way where each group takes on its "fair share" of exploration. Specifically, unlike prior approaches, the algorithm does not systematically single out any one type of patient to explore. Patients from all groups are assigned to explore, though patients with better outside options are chosen less often, as these patients incur a larger opportunity cost from exploring. The algorithm continuously keeps track of how much exploration each group of patients has done, and it ensures that every group benefits from the exploration done by other groups. Exactly how much each group benefits is determined by Nash's solution for cooperative bargaining. Our algorithm keeps the total amount of exploration similar to that of previous approaches, so we achieve a fair outcome without a major hit to system efficiency.
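To give a flavor of what a "fair share" of exploration can look like, here is a deliberately crude simulation. It is our own illustration rather than the actual algorithm: it simply assigns exploration rounds across groups with probabilities inversely weighted by a hypothetical opportunity cost and tallies how much each group ends up exploring, whereas the real system sets those shares using Nash's bargaining solution:

    import random
    from collections import defaultdict

    # Hypothetical opportunity costs of exploring for two groups (illustrative numbers only).
    opportunity_cost = {"group_A": 0.20, "group_B": 0.05}

    # Groups with a higher opportunity cost explore less often, but no group is exempt.
    weights = {g: 1.0 / c for g, c in opportunity_cost.items()}
    total_w = sum(weights.values())
    explore_share = {g: w / total_w for g, w in weights.items()}   # A: 0.2, B: 0.8

    exploration_count = defaultdict(int)   # running tally of exploration done by each group
    for _ in range(1000):                  # 1,000 simulated rounds in which the system explores
        group = random.choices(list(explore_share), weights=list(explore_share.values()))[0]
        exploration_count[group] += 1

    print(dict(exploration_count))   # roughly {"group_A": 200, "group_B": 800}, not all on one group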

It's not perfect, but this is a new and cutting-edge way to think about algorithmic design. Even better, it can be used in any area where machine learning is applied to data to make decisions for individuals.

About The Authors:

Jackie Baek is a Ph.D. student in the Operations Research Center at MIT, advised by Prof. Vivek Farias, and is interested in the problem of algorithmic fairness. She completed her undergraduate studies at the University of Waterloo, where she majored in computer science and in combinatorics and optimization.


Vivek Farias is the Patrick J. McGovern (1959) Professor and a professor of operations management at the MIT Sloan School of Management. His research focuses on the development of new methodologies for large-scale dynamic optimization under uncertainty, and the application of these methodologies to the design of practical revenue management strategies.