
The potential for algorithms used throughout the U.S. health care system to perpetuate biases against patients is becoming disturbingly clear.

That's why U.S. Sen. Cory Booker and I are pressing federal agencies and five of the largest health care companies to explain what steps they're taking — or are planning to take — to address these troubling biases.

We do so because algorithms — rules established for use in calculations or other problem-solving — are increasingly embedded into every aspect of modern society, including the health care system. Organizations use automated decision systems, driven by technologies ranging from advanced analytics to artificial intelligence (AI), to organize and optimize the complex choices they must make each day.

The federal Centers for Medicare & Medicaid Services (CMS), which administers the nation's major health care programs, and commercial health insurers have begun to explore ways to incorporate algorithms that automate decisions. The goals of these algorithm applications include predicting health care needs and outcomes, targeting resources, improving quality of care and detecting waste, fraud and abuse.

CMS already employs algorithms in some programs and has indicated plans to expand its use of these emerging technologies. For example, on March 27, 2019, the Center for Medicare & Medicaid Innovation announced the Artificial Intelligence Health Outcomes Challenge to provide support for innovators to test how AI tools could be used to predict health care use and adverse events and inform innovative payment and service delivery models. And on October 21, 2019, CMS published a request for information to gather input on how the agency could use technology, such as AI, to conduct program integrity activities more efficiently.

In using algorithms, organizations often try to remove human flaws and biases from the process. But unfortunately, both the people who design these complex systems and the massive sets of data that are used have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases.

Health care systems are not immune to the problem of bias. A study recently published in the journal Science found racial bias in an algorithm widely used in health systems throughout the country. The algorithm used health care costs as a proxy for health care needs, but its creators did not account for the fact that need is not the only factor driving a person's health care costs. Other factors, such as barriers to accessing care and low levels of trust in the health care system, disproportionately affect black patients. As a result, because of their lower historical costs, black patients were less likely than white patients to receive, or be referred for, additional services — even though, at any given risk score produced by the algorithm, black patients were typically sicker than their white counterparts.
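The proxy problem described above can be illustrated with a minimal sketch. Everything here is hypothetical — the function, patients, dollar figures, and threshold are invented for illustration and are not taken from the algorithm the study examined. The point is only that when a score is computed from historical spending rather than underlying need, two equally sick patients can be treated very differently if one faced barriers to care and therefore generated lower costs.

```python
def risk_score_by_cost(annual_cost, max_cost=20_000):
    """Toy risk score: rank patients purely by historical spending,
    normalized to the range 0.0-1.0. Spending stands in as a proxy
    for health care need."""
    return min(annual_cost / max_cost, 1.0)

# Two hypothetical patients with the same underlying need (say, the same
# number of chronic conditions), but patient_b faces access barriers and
# so generated only half the historical costs for the same level of illness.
patient_a = {"conditions": 8, "annual_cost": 10_000}
patient_b = {"conditions": 8, "annual_cost": 5_000}

score_a = risk_score_by_cost(patient_a["annual_cost"])  # 0.5
score_b = risk_score_by_cost(patient_b["annual_cost"])  # 0.25

# A care-management program that enrolls anyone scoring above 0.4 flags
# only patient_a, despite the two patients having identical need.
threshold = 0.4
flagged = [name for name, s in [("a", score_a), ("b", score_b)] if s > threshold]
print(flagged)  # prints ['a']
```

Fixing this kind of bias typically means changing the label the algorithm predicts — scoring patients by a direct measure of health need (for example, number of active chronic conditions) rather than by dollars spent — which is broadly the direction the study's authors proposed.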

According to the authors of this study, more than 45% of black patients captured under this algorithm would be flagged for additional help after this bias is addressed, an increase from nearly 18% under the algorithm as it was originally designed.

The findings of this study are deeply troubling, particularly taken in the context of other biases, disparities and inequities that plague our health care system. For instance, a 2016 study found that most medical students and residents held the false belief that black patients tolerate more pain than white patients, a belief associated with less accurate treatment recommendations for black patients compared to white patients. It is well documented that certain illnesses, such as hypertension, have a significantly higher incidence among marginalized populations, including black Americans. Black and American Indian/Alaska Native women are significantly more likely than white women to die from complications related to pregnancy, even after accounting for education level. Technology holds great promise for addressing these issues and improving population health. However, if it is not applied thoughtfully and with acknowledgement of the risk of bias, it can also worsen them.

I am pleased to learn the organization featured in the Science study appears to be taking steps to eliminate racial bias from its product. While a fix to this particular algorithm is a good step, this is just one of the many algorithms used in the health care industry. Congress and the administration must make a concerted effort to find out how widespread this issue is and move quickly to make lasting changes.

Ron Wyden, a Democrat, is the senior U.S. senator from Oregon.

