Health systems look to advance health equity with AI but risk unintended consequences without a careful approach

What’s trending: Leaders who apply the right scrutiny and risk mitigation can create more equitable healthcare.

Artificial intelligence (AI) started transitioning from hype to substance over the last year. Healthcare organizations can now use AI to rapidly generate insights from an increasing breadth and depth of data sources. But there are also associated risks, including the potential to exacerbate inequities experienced by communities that are under-resourced or have been historically marginalized. 

The challenge now facing US health systems is how to best harness AI’s transformative potential without creating unintended, adverse consequences for their patient populations. 

Why it matters 

AI relies on data and advanced mathematical models to answer a question or complete a task. “That means there is inherent risk that the data or model may be biased or unrepresentative of the population it addresses,” authors wrote in a new Chartis article on AI and health equity. “Because AI and the drivers of inequities often operate in ways unseen, the intersection of the two is an especially dangerous space if not sufficiently scrutinized.” 

To ensure appropriate scrutiny, health system leaders can start by focusing on these three questions: 

1. Does the data we are using accurately represent the community we intend to serve? Health systems need to plan how they will avoid using AI built on biased data. Studies have highlighted how bias is often present in medical and demographic data and the algorithms and models that use this data to inform diagnoses and treatments. This frequently results in inequitable and unfavorable outcomes for historically under-represented populations.1 One recent study found that a commonly used algorithm in the US was reducing the number of Black patients identified as needing clinically necessary extra care by more than half. 

“Health systems need to adopt a comprehensive and continuous approach to reviewing the underlying AI datasets for potential bias to avoid perpetuating any inequities,” the authors said.  

The recent executive order from President Biden also explicitly cited health inequities as a risk that regulatory agencies will increasingly scrutinize in healthcare AI applications.      
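The continuous dataset review the authors call for can begin with a simple representation audit. The sketch below is illustrative only: the `race` column, cohort labels, community shares, and 10-point tolerance are assumptions, not a Chartis-specified method.

```python
# Illustrative sketch: audit a dataset's demographic mix against community
# benchmarks (e.g., census shares). Column names and thresholds are assumptions.
from collections import Counter

def representation_gaps(records, group_key, community_shares, tolerance=0.10):
    """Flag groups whose share of the data trails their share of the
    community by more than `tolerance` (absolute difference)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in community_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# A dataset that under-represents cohort "B" relative to community data:
data = [{"race": "A"}] * 80 + [{"race": "B"}] * 20
flags = representation_gaps(data, "race", {"A": 0.60, "B": 0.40})
print(flags)  # {'B': {'expected': 0.4, 'observed': 0.2}}
```

A real review would go further than headcounts; the algorithm study cited above involved label bias (cost used as a proxy for clinical need), which a demographic audit alone would not catch.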

2. Are we ensuring our AI applications will benefit all patient cohorts within our community without negatively impacting specific populations? When health systems develop AI use cases, they need to consider the solutions in the context of all the populations served within the health system.   

The focus should be to ensure that AI initiatives do not widen the divide in service and outcomes between “attractive” patient populations (e.g., patients with higher-reimbursing health plans, which are generally commercial) and those often deemed “less desirable” (e.g., lower-reimbursing populations, such as Medicare and Medicaid beneficiaries and self-pay patients).     

When health systems focus on the biggest opportunities for AI applications, populations that aren’t considered a priority can easily be overlooked or negatively impacted as models are designed and results are assessed only on target populations. Without considering all populations, the result will most likely be worse outcomes for patients in “less attractive” populations.2 
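One way to operationalize this cross-population check, sketched under assumed inputs (binary need labels, a model's predictions, and a payer-type cohort per patient; none of these come from the article), is to disaggregate error rates by cohort rather than reporting a single aggregate metric:

```python
# Illustrative sketch: compare false-negative rates across patient cohorts
# (e.g., payer type) to spot populations a model quietly under-serves.
# Labels, predictions, and cohort names are hypothetical.
def false_negative_rates(y_true, y_pred, cohorts):
    """Per-cohort FNR: share of true positives the model missed."""
    positives, misses = {}, {}
    for yt, yp, c in zip(y_true, y_pred, cohorts):
        if yt == 1:
            positives[c] = positives.get(c, 0) + 1
            if yp == 0:
                misses[c] = misses.get(c, 0) + 1
    return {c: misses.get(c, 0) / n for c, n in positives.items()}

# The model misses far more high-need patients in one cohort:
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
cohorts = ["commercial"] * 4 + ["medicaid"] * 4
print(false_negative_rates(y_true, y_pred, cohorts))
# {'commercial': 0.25, 'medicaid': 0.75}
```

An overall error rate of 50% would mask the gap entirely; the disaggregated view is what makes the disparity visible.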

3. Will our AI pursuits impact the make-up of our workforce? The focus should be on how the health system upskills existing employees and enables entry points and training for people from the local community to hold positions within the organization.     

“For roles that will not be eliminated, leaders should proactively communicate with employees about the intent and positioning of AI as an augmentation of their jobs, rather than a replacement,” the authors said. 

What’s next? 

The considerable potential adverse impacts of AI on under-resourced communities are avoidable. Health systems can work not only to minimize these risks as they target select populations but also to use their AI tools to proactively advance efforts that improve health equity.  

Here are four essential steps: 

1. Establish enterprise AI governance with health equity leaders at the table. Discrete enterprise AI governance is a best practice for any health system focused on materially deploying AI.     

Governing bodies should include internal and external leaders who specifically work with under-resourced communities. They can help establish additional guardrails (such as data quality, diversity, and representation checklists) and ensure AI tools are fueled by equitably representative data and unbiased algorithms that seek to identify and ameliorate—rather than perpetuate—health disparities.      

2. Develop enterprise AI use guidelines, keeping the impacts on under-resourced populations top of mind. Leveraging AI appropriately can become a major strategic differentiator for health systems that get it right. But it can have substantive consequences for organizations that get it wrong.     

A critical requirement to realize the benefits—while minimizing the risk of unintended consequences—is having sufficient AI use guidelines in place. As a health system defines these guidelines, it must consider not just the general rules to follow (e.g., no direct clinical information is provided to patients without a human intermediary). The health system should also consider the rules it will apply to systematically evaluate and monitor the impacts on under-resourced populations.    

3. Concurrently evaluate the upside opportunities and downside risks with each AI use case. To identify efforts with the greatest strategic value, health systems should systematically assess and validate use cases. They also should explicitly understand and consistently quantify the possible risks of adopting them.  

Pursuing these activities through a health equity lens means health systems should prioritize opportunities for AI tools to improve care and outcomes where disparities exist. It also requires carefully assessing AI outputs to determine whether and how discrete populations are represented and impacted. Health systems also need to hedge against AI data set limitations and bias by casting a wider analytics net beyond their own data, including community health-related information.    
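Assessing how discrete populations are impacted can be made concrete as a deployment gate. The following is a hypothetical guardrail, assuming a per-group quality score and an agreed disparity budget; neither the threshold nor the gate itself is prescribed by the article:

```python
# Hypothetical guardrail: hold back deployment when any population's score
# trails the best-performing group by more than a disparity budget.
def passes_equity_gate(subgroup_metrics, max_gap=0.05):
    """subgroup_metrics: {group: quality score in [0, 1]}, higher is better."""
    best = max(subgroup_metrics.values())
    worst = min(subgroup_metrics.values())
    return (best - worst) <= max_gap

print(passes_equity_gate({"cohort_a": 0.91, "cohort_b": 0.82}))  # False
print(passes_equity_gate({"cohort_a": 0.90, "cohort_b": 0.88}))  # True
```

In practice the budget, the metric, and the cohort definitions would all be set by the governance body described in step 1, with health equity leaders at the table.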

4. Deploy a proactive change management approach to fostering trust and driving adoption. While an abundance of excitement surrounds recent AI breakthroughs, that enthusiasm is often matched by anxieties about AI’s potential impact on both patients and the health system workforce.     

The prospect of technology replacing human workers is a real and understandable fear for many, particularly employees in administrative and operational support roles who are commonly the focus of AI applications. These positions also tend to be disproportionately filled by employees of color.  

To assuage anxieties, leaders must explicitly assess near- and long-term AI use case effects on employment and develop active mitigation tactics for displaced workers, such as job placement and training. They must also actively communicate with staff and the community about how the health system is deploying AI and safeguards against unintended consequences.   

AI has the potential to become a tremendous force for good in healthcare. AI tools can help identify and enable opportunities to improve health equity in ways previously unimaginable. But without diligent oversight and careful scrutiny, those tools can easily perpetuate, increase, and create inequities—turning an organization’s well-intended efforts into a negative impact on its community.   

By employing these steps as part of their approach to AI deployment, health system leaders can establish the necessary guardrails, collaborations, and strategies for AI use. As a result, they will harness AI’s potential benefit to their health system’s future performance and advance the health system’s mission of providing a healthier future for all the communities it serves.  

Sources 

1 “AI in Medicine Needs to Be Carefully Deployed to Counter Bias—and Not Entrench It” and “Racial and Ethnic Bias in Risk Prediction Models for Colorectal Cancer Recurrence When Race and Ethnicity Are Omitted as Predictors.”   

2 For instance, when only images of white patients are used to train AI algorithms to spot melanoma, the result can be worse outcomes for people of color. “AI Could Worsen Health Inequities for UK’s Minority Ethnic Groups—New Report.”  


ABOUT CHARTIS

The challenges facing US healthcare are longstanding and all too familiar. We are Chartis, and we believe in better. We work with more than 900 clients annually to develop and activate transformative strategies, operating models, and organizational enterprises that make US healthcare more affordable, accessible, safe, and human. With more than 1,000 professionals, we help providers, payers, technology innovators, retail companies, and investors create and embrace solutions that tangibly and materially reshape healthcare for the better. Our family of brands—Chartis, Jarrard, Greeley, and HealthScape Advisors—is 100% focused on healthcare and each has a longstanding commitment to helping transform healthcare in big and small ways. Learn more.

Want more fresh perspectives to help you think about, plan, and execute strategies for what’s next in healthcare? Subscribe to our latest thinking and check out our weekly blog, Chartis Top Reads.


Corey Williams

Diversity, Equity, & Inclusion Consultant, Strategist & Leader | Connecting people across differences for transformative work | Designing and deploying metrics-driven DEI strategies


"Develop enterprise AI use guidelines, keeping the impacts on under-resourced populations top of mind." Yes! The human must mitigate the use of technologies that increase the speed of complex decision-making. We've seen the inequity risks from algorithmic-driven care and intense pressure on physicians' time - my personal hope is that AI gives physicians back the time to slow down and see the human beyond the data. If we simply increase efficiency with the use of AI, my fear is that we will also increase the efficiency of discrimination and inequality.

Maria Flowers, Ed.D.

Quality Healthcare Strategist | Patient Safety | Health Equity | Keynote Speaker | Expertise in organizational change initiatives


Great article... It's important for healthcare organizations to consider the development and implementation of AI as two separate phases. Involving experts who understand health equity frameworks at both phases will be critical to ensure that AI systems promote quality, safe, and equitable care.

Rhonda J. Manns, MBA, BSN, RN, CCM

Design-thinking, Board-certified CCM, RN + MBA the intersection of Clinical Informatics, Business, Product & Tech. Futurist in nurse-led innovation, Health Equity & Clinical Transformation.


One thing that I especially liked about this article was the first question, do we have enough data to represent the communities that we want to serve. Often, the answer is no, either through participation or a lack of intentionality. Or… The best-case scenario is a lack of data stratification. So, I liked your article's position about harnessing AI potential but also being observant that certain exceptions and gaps and holes exist. So the question should be, how might we make sure that we manage the risk as we leverage AI towards better response and initiative development? Great stuff, Chartis
