Is your computer doctor secretly a racist? Two US senators want to find out the truth

Garbage in, garbage out

US Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) are examining how federal agencies and healthcare companies are tackling algorithmic biases – after a recent study found that black patients were less likely than white patients to be referred to care programs by software, despite being sicker.

On Tuesday, the pair wrote a series of letters to the Federal Trade Commission and the Centers for Medicare and Medicaid Services, as well as the top five healthcare insurers, including UnitedHealth Group and Blue Cross Blue Shield. Each letter lists specific requests for information, asking whether the agencies or companies have any policies, investigations, or internal tools to help them audit the impact of biases in the data used to create their algorithms.

These algorithms are increasingly being used to calculate an individual’s risk of certain diseases, before automatically deciding what level of care patients should be given. Such software is often guided by historical medical data that may contain implicit racial or gender biases, which are then carried forward into the decision-making process to the detriment of marginalized groups.

Wyden and Booker’s efforts were sparked by a paper published in Science last month. Eggheads at the University of California, Berkeley, the University of Chicago, and Brigham and Women’s Hospital and Massachusetts General Hospital in Boston studied a commercial algorithm widely used by the American healthcare system to decide whether to further treat people with complex medical problems.

They found that black patients were less likely than white patients to be recommended additional care by the software due to that old chestnut: money. The code was led to believe white people were less healthy and more at risk than black people because more money was spent on treating white people. Therefore, a black person had to be a lot sicker than a white person to trigger the same level of additional care. Here's how the paper's abstract put it:

Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients. Reformulating the algorithm so that it no longer uses costs as a proxy for needs eliminates the racial bias in predicting who needs extra care.

Ziad Obermeyer, first author of the paper and a researcher at UC Berkeley, further explained: "The algorithms encode racial bias by using health care costs to determine patient ‘risk,’ or who was most likely to benefit from care management programs.

"Because of the structural inequalities in our health care system, blacks at a given level of health end up generating lower costs than whites. As a result, black patients were much sicker at a given level of the algorithm’s predicted risk."


After the researchers altered the algorithm to take into account other variables, such as the extra costs that could be avoided through access to preventative care, the percentage of black patients flagged for further medical help increased from 17.7 per cent to 46.5 per cent.
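Here's a sketch of the fix on the same synthetic setup as above: rank on a direct measure of health need instead of cost. This illustrates the idea, not the researchers' actual reformulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                  # 0 = white, 1 = black (illustrative)
need = rng.gamma(2.0, 1.0, n)                  # same need distribution for both groups
spend = need * np.where(group == 1, 0.7, 1.0)  # less spent on black patients

# Rank on the health measure itself rather than on cost: the same
# fraction of each group is now flagged, since need is identically
# distributed across groups in this toy setup.
threshold = np.quantile(need, 0.97)
for g, name in [(0, "white"), (1, "black")]:
    print(f"{name}: flagged {(need[group == g] >= threshold).mean():.2%}")
```

By construction this removes the disparity in the toy data; in the real study, replacing the cost label produced the jump in referrals described above.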

“In using algorithms, organizations often attempt to remove human flaws and biases from the process," Wyden and Booker said in a statement.

“Unfortunately, both the people who design these complex systems, and the massive sets of data that are used, have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.”

The FTC, the Centers for Medicare and Medicaid Services, and the five healthcare insurers addressed in the Senators’ letters have until 31 December to respond. ®
