Understanding bias in text-to-image AI models

Artificial intelligence has transformed many aspects of our lives, offering groundbreaking innovations in art, design, and communication. But with its rapid evolution, critical questions about representation and inclusivity are becoming increasingly important. One such concern is the portrayal of people with disabilities in text-to-image AI models, as explored by Avery Mack, a PhD student at the University of Washington, and their colleagues in their thought-provoking study. You can watch this insightful video for more details on their findings and the impact of biased representations in AI.


Disability in the United States

Roughly 20% of the U.S. population is disabled, nearly 57 million people (source: GlobalDisabilityRightsNow.org). This significant demographic highlights the importance of accurate representation and inclusivity in policy and emerging technologies.

The widespread biases in text-to-image AI models further emphasize this need. When AI tools misrepresent disabled individuals, defaulting to stereotypical portrayals such as a person in a wheelchair or omitting other forms of disability entirely, they fail this vast population. For 57 million Americans, inaccurate depictions not only reinforce harmful stereotypes but also marginalize their diverse experiences.


[Infographic: 20% of the population is disabled (nearly 57 million people), with a breakdown by disability type.]

As we innovate with AI, we must ensure these tools reflect the full spectrum of disability realities, moving beyond oversimplified narratives to foster understanding and empowerment.


The Issue: Narrow Representations of Disability

When prompted to generate images of “a person with a disability,” text-to-image AI models predominantly produce depictions of white, masculine-presenting individuals in wheelchairs. These outputs not only fail to reflect the vast diversity of disability experiences but also perpetuate stereotypes by focusing on assistive devices rather than people. For instance:

  • Many images dehumanize subjects by omitting faces or showing only partial views, such as focusing solely on wheelchairs.
  • Biases extend beyond physical representation, often depicting individuals as sad, lonely, or helpless.

Community-Centered Research

To tackle these issues, Mack and their team adopted a community-centered approach. They conducted focus groups with 25 individuals from diverse disability backgrounds, including sensory, mobility, and mental health disabilities as well as chronic illness. Participants evaluated images generated by AI models such as MidJourney, DALL-E 2, and Stable Diffusion 1.5, offering invaluable feedback.
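For readers who want to see the kind of output the participants reviewed, the open Stable Diffusion 1.5 checkpoint can be prompted locally. The following is a minimal sketch, assuming the diffusers library is installed, a CUDA GPU is available, and the model ID and output filenames shown are just one common way to host and save the results:

    # Generate a small, repeatable batch of images from the kind of prompt used in the study,
    # so the outputs can be inspected for stereotypes. Assumes diffusers + a CUDA GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # Stable Diffusion 1.5, one of the models in the study
        torch_dtype=torch.float16,
    ).to("cuda")

    prompt = "a person with a disability"
    for i in range(8):
        generator = torch.Generator("cuda").manual_seed(i)  # fixed seeds make runs comparable
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"audit_{i:02d}.png")

Fixing the seeds means the same batch can be regenerated and compared across model versions, which is useful when checking whether a later release still defaults to the same narrow portrayals.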

Key findings included:

  • Stereotypes in AI Outputs: Images often reinforced harmful tropes, such as sadness or horror-like aesthetics for mental health-related disabilities.
  • Lack of Diversity: Participants called for greater representation of race, gender, and age in AI-generated images.
  • Invisible Disabilities: Non-imageable aspects of disabilities, such as autism or chronic illness, were inadequately portrayed. Suggestions included incorporating textual metadata or showing behaviors associated with specific disabilities.

Examples of Bias in AI Models

Participants highlighted examples that underscore the problematic outputs of text-to-image models:

  • Simplistic Representations: Prompts like “blind parents with children” led to images depicting all family members in sunglasses, perpetuating the misconception that blindness is hereditary.
  • Sensationalized Imagery: Mental health-related prompts often returned horror-like depictions, such as faceless figures or decayed features, evoking fear rather than understanding.
  • Filtering Mechanisms: Prompts about certain disabilities, like “bipolar disorder,” were rejected outright due to content policies, further stigmatizing these conditions (a naive blocklist is sketched after this list to show how this happens).
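To see why blanket term filters are exclusionary, consider a deliberately naive keyword blocklist. This is a hypothetical illustration only; the actual moderation pipelines of commercial models are not public:

    # A blocklist that treats diagnosis names themselves as unsafe rejects legitimate
    # identity prompts along with genuinely harmful requests.
    BLOCKED_TERMS = {"bipolar disorder", "schizophrenia"}  # hypothetical blocklist

    def naive_filter(prompt: str) -> bool:
        """Return True if a keyword blocklist would reject this prompt."""
        lowered = prompt.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    print(naive_filter("a portrait of an artist with bipolar disorder"))  # True: valid identity prompt rejected
    print(naive_filter("a person relaxing at home"))                      # False: allowed

A filter built this way cannot distinguish a person asking to be depicted from a request for stigmatizing content, which is exactly the problem participants described.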

The Path Forward: Inclusive AI Design

Participants clearly preferred realistic portrayals of disabled individuals engaging in everyday activities, such as cooking, parenting, or playing sports. These images promote normalization and celebrate the richness of disability experiences. The study offered actionable recommendations for AI developers:

  1. Engage Disabled Communities: Direct feedback from affected groups is essential to create models that reflect their lived experiences.
  2. Expand Dataset Diversity: Training datasets must include various disability scenarios, assistive technologies, and cultural contexts; a simple caption audit, sketched after this list, can reveal which of these are missing.
  3. Rethink Filtering Policies: Models should distinguish between harmful content and valid expressions of disability identity to avoid alienating users.
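As a starting point for the second recommendation, a simple audit can count how often disability-related language appears in a model's training captions at all. This is a minimal sketch; the file name and term list are illustrative placeholders, not an exhaustive taxonomy:

    # Tally how often disability-related terms appear in a caption corpus.
    from collections import Counter

    DISABILITY_TERMS = [
        "wheelchair", "blind", "deaf", "autistic", "prosthetic",
        "white cane", "hearing aid", "chronic illness", "service dog",
    ]

    counts = Counter()
    total = 0
    with open("captions.txt", encoding="utf-8") as f:  # assumed format: one caption per line
        for line in f:
            total += 1
            caption = line.lower()
            for term in DISABILITY_TERMS:
                if term in caption:
                    counts[term] += 1

    for term, n in counts.most_common():
        print(f"{term:>15}: {n} captions ({100 * n / total:.2f}%)")

Low or heavily skewed counts, for example "wheelchair" dwarfing every other term, are exactly the signal that a dataset will reproduce the narrow portrayals the study documents.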

Open Questions for Responsible AI

While Mack’s study provides valuable insights, it also raises questions that require further exploration:

  • How can AI models effectively represent non-imageable disabilities?
  • What is the balance between accurate portrayal and avoiding stereotype reinforcement?
  • How can filtering mechanisms be less exclusionary while still maintaining safety?

Why This Matters

The question is not whether AI can serve people with disabilities, but whether it will serve them justly. Misrepresentation in AI risks perpetuating societal stigmas and erasing the visibility of nuanced experiences. Accurate and respectful representation is more than a technical challenge; it is a moral imperative.

How Developers Can Help

Responsibility rests with those who design and deploy AI systems. Developers, researchers, and organizations must proactively address these biases. Here are some strategies:

  1. Diverse Training Data: A dataset is the foundation of any AI system. Including a wide range of images, texts, and contexts ensures that the output reflects the diversity within the disability community.
  2. Community Input: No one understands the needs of the disability community better than its members. Engaging them in designing and evaluating AI systems is not optional; it is essential.
  3. Bias Audits: Regularly auditing models for biases ensures accountability; a lightweight audit loop is sketched after this list. Developers must address areas where stereotypes persist and refine their tools accordingly.
  4. Celebrate Diversity: AI should depict disabled individuals as active community participants. Images should showcase varied emotions, occupations, and relationships, breaking free from the clichéd narratives.
  5. Respectful Representation: Developers must avoid biased or stereotypical portrayals. Positive and diverse depictions are not just desirable; they are necessary for fostering an inclusive digital world.
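For the bias-audit strategy, one lightweight approach is to generate a fixed batch of images per prompt and use a zero-shot classifier such as CLIP to tally what they depict. This is a rough sketch under stated assumptions: the images sit in an ./audit folder, the label list is illustrative, and automated labels should always be paired with human review by disabled evaluators:

    # Tally what a batch of generated images depicts, using CLIP zero-shot labels as a rough proxy.
    from pathlib import Path
    from collections import Counter
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    LABELS = [
        "a person using a wheelchair",
        "a person with a white cane",
        "a person with a prosthetic limb",
        "a person with no visible assistive device",
    ]

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    tally = Counter()
    for path in Path("audit").glob("*.png"):
        image = Image.open(path)
        inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]  # image-to-label similarity
        tally[LABELS[int(probs.argmax())]] += 1

    for label, n in tally.most_common():
        print(f"{n:3d}  {label}")

If nearly every image lands in the wheelchair bucket regardless of the prompt, the model is reproducing exactly the stereotype the study's participants described.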

Our Role in Driving Change

At the Inclusive Tech Club, we believe in holding AI accountable for the world it creates. Technology should not be a mirror that reflects society’s flaws; it should be a window to greater understanding and inclusivity.

By raising awareness, collaborating with like-minded advocates, and pushing for actionable solutions, we aim to ensure that AI becomes a tool for empowerment rather than exclusion. The road ahead is long, but the stakes are too high for complacency.

Let us challenge the biases that have crept into our algorithms and demand better. Together, we can push the boundaries of what’s possible and hold AI to the standard of dignity and respect that every individual deserves.


Resources

Disability Representation in Text-to-Image AI Models

"I wouldn’t say offensive but...": Disability-Centered Perspectives on Large Language Models

“They only care to show us the wheelchair”: disability representation in text-to-image AI models


Kai Clarke

Empowering disability inclusivity with AI | Keynote Speaker | Engineer & Author

1w

The majority of datasets being used to train AI do NOT involve disabled individuals, and it leads to downfalls like this. If this minority is not being accounted for, there could be others. How do we tackle training datasets to handle a more diverse community, Jamaal Digital Davis?

Kyle Godbey

Transcontextual Design • Service Design • AI Strategy • Narrative Research • Sense-Making

1w

Yes, and there's a utility in this. It's not about asking AI to represent disability, or any marginalized group for that matter; AI is representing our data sets and even the dominant narrative. Someone recently asked a few GenAIs to create faculty yearbooks for their university. Every attempt came up with the same results: almost every photo was of a middle-aged white person, with a bias towards men. This wasn't an honest representation of the faculty, but it also wasn't proof that the AI was broken. It was proof that, beyond the photos, out of all the available data out there, there weren't signals significant enough to influence the AI to represent anything other than a white, middle-aged faculty. There were likely numbers somewhere that said some percentage of the faculty were of a given ethnicity or were non-binary, but those numbers were lost in all the other signals. My feeling is that GenAI is better suited as a mirror than a creator for the time being.

Tersh Blissett

AI Swarm Agent & Automation Expert for the Trades | Co-Founder Trade Automation Pros | Co-Founder Skilled Trades Syndicate | Founder of Service Emperor HVAC | Service Business Mastery podcast | Tri-Star Mechanical

1w

Such a crucial point, Jamaal Digital Davis. AI must evolve to represent the full diversity of human experiences with accuracy and respect.

Choy Chan Mun

Data Analyst (Insight Navigator), Freelance Recruiter (Bringing together skilled individuals with exceptional companies.)

1w

Jamaal Digital Davis, the way AI frames disability impacts societal perception. It's crucial to advocate for diverse representation and break stereotypes, right?
