This very well-written article shines a light on the often-overlooked issue of (not only) gender bias in AI models. Technology moves fast, often faster than our efforts to make sure it's doing more good than harm. This article serves as a powerful reminder of our responsibility to actively confront biases and strive for more inclusive and equitable AI systems. #EthicalAI #GenderBias #InclusiveTech https://lnkd.in/gJtYf_pw
Adel Lerman’s Post
More Relevant Posts
-
🔍 Challenging Systematic Prejudices in AI 🔍 A bit shocked by this UNESCO and IRCAI report on bias against women and girls found in LLMs like OpenAI’s GPT-2, ChatGPT, and Meta’s Llama 2. 1. Persistent Gender Biases: Despite advancements, LLMs still reflect deep-seated biases, associating female names with traditional roles (e.g., "home," "family") and male names with career-oriented terms (e.g., "business," "executive"). 2. Negative Content: Models like Llama 2 generated negative content about women and LGBTQ+ individuals in a significant number of instances. For example, Llama 2 produced sexist content 20% of the time when prompted with gendered sentences and negative content about gay subjects in 70% of instances! 3. Cultural Stereotypes: LLMs showed a tendency to create more varied descriptions for men and stereotypical, often negative, portrayals for women. For instance, British women’s roles include more stereotypical and controversial occupations such as "prostitute, model, and waitress", appearing in 30% of the total texts generated. This report underscores the critical need to address biases in AI at both the data and deployment levels, focusing on diverse and inclusive datasets, continuous bias monitoring, and transparency. Let’s work together to ensure AI benefits everyone, free from biases and discrimination. https://lnkd.in/e_T4z7ni #AI #GenderBias #EthicalAI #TechForGood
unesdoc.unesco.org
To view or add a comment, sign in
-
Josephine Lethbridge, this is a very interesting article where you bring to the fore aspects of AI and how sexist it can be. I am really happy to have contributed my thoughts to the topic. Its a good read: https://lnkd.in/gqBvESp5
I wrote this edition of Les Glorieuses' The Evidence newsletter (in which I cover the latest research into gender inequality) hoping that my scattered and ill-defined worries about the development of AI would be proved wrong by researching and writing about it. I ended up speaking to six experts for this story. My hopes weren't exactly realised. “If we treat AI as we’ve treated all major technologies for the last century, then I am not too optimistic," Bhargav Srinivasa Desikan told me. AI is undeniably sexist, and continuing on its current trajectory the technology will entrench and deepen this and other existing inequalities. Top line? We need to learn to put people before profit. Thankfully there are many inspirational people working within the field to do so. Thank you to María Pérez Ortiz, Elaine Wan, Revi Sterling, Dr. Kutoma Wakunuma, Erin Young and Bhargav for sharing your insights and advice with me and for doing such important work. But we can't leave it up to them! AI will impact us all. A common theme that emerged in my conversations was the need for everyone to learn more about AI and push for change. Read the piece to find out more and hear some expert advice on how we can all work to close the gap. *Read here*: https://lnkd.in/enwiiX7C *Subscribe here*: https://lnkd.in/e8jN-62W
AI is sexist – can we change that? | Les Glorieuses
lesglorieuses.fr
To view or add a comment, sign in
-
From data to deployment: Gender bias in the AI development lifecycle AI development has the potential to promote diversity and inclusivity, but gender bias in the process can exacerbate existing inequalities. It's crucial to address these concerns by prioritizing diversity, fairness, and inclusivity in AI development and promoting gender-sensitive AI policy, regulation, and legislation. Initiatives like CHARLIE can play a pivotal role in mitigating biases and fostering equitable outcomes by advocating for operationalizing principles and mainstreaming practices. With comprehensive measures spanning from data collection to algorithmic deployment, we can promote fairer outcomes across demographic groups and combat societal biases in the AI landscape. Read more in: https://lnkd.in/dQGcPqhu #AI #GenderBias #Diversity #Inclusion #EthicalAI #CHARLIEproject #TechForGood
From data to deployment: Gender bias in the AI development lifecycle
orfonline.org
To view or add a comment, sign in
-
I wrote this edition of Les Glorieuses' The Evidence newsletter (in which I cover the latest research into gender inequality) hoping that my scattered and ill-defined worries about the development of AI would be proved wrong by researching and writing about it. I ended up speaking to six experts for this story. My hopes weren't exactly realised. “If we treat AI as we’ve treated all major technologies for the last century, then I am not too optimistic," Bhargav Srinivasa Desikan told me. AI is undeniably sexist, and continuing on its current trajectory the technology will entrench and deepen this and other existing inequalities. Top line? We need to learn to put people before profit. Thankfully there are many inspirational people working within the field to do so. Thank you to María Pérez Ortiz, Elaine Wan, Revi Sterling, Dr. Kutoma Wakunuma, Erin Young and Bhargav for sharing your insights and advice with me and for doing such important work. But we can't leave it up to them! AI will impact us all. A common theme that emerged in my conversations was the need for everyone to learn more about AI and push for change. Read the piece to find out more and hear some expert advice on how we can all work to close the gap. *Read here*: https://lnkd.in/enwiiX7C *Subscribe here*: https://lnkd.in/e8jN-62W
AI is sexist – can we change that? | Les Glorieuses
lesglorieuses.fr
To view or add a comment, sign in
-
I was searching for an image for a post yesterday. Banged my head against search engine gender bias and generative AI drunkenness. I wanted an image of a woman, viewed from the back, presenting in front of an audience. My image searches yielded useless results: Mostly a bunch of white men speaking in front of an audience of white men. I should have known. I shouldn't have been this surprised. It upset me. Myriam Jessier said to me "welcome to my reality" when I told them about it. So I thought maybe I should give AI a go to generate an image. And my first results with MidJourney AI were.... Well, it would have been laughable if it hadn't been so sad. This post is about gender bias. But it's also about AI isn't ready for prime time in terms of accessibility. How can we trust AI for accessibility when it can't even get the basics of differentiating between a man and a woman right. #Inclusion #GenderBias #Accessibility #AI #GenerativeAI
To view or add a comment, sign in
-
Finally our report for UNESCO on #gender #prejudice and #bias in #LLMs is out 💥 Check it here: https://lnkd.in/eS2fRDVa
One more successful report we worked on came out! After six months in the making, we are excited to announce this new in-depth report in partnership with several authors across the world. Set for release today on International Women's Day 2024 on March 8, 2024, is "Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models” with UNESCO shedding light on the persistent issue of gender bias within artificial intelligence. Reading through this report are the main findings on gender bias in Large Language Models (LLMs): 🔹 Gendered Word Association: LLMs exhibit biases by associating gendered names with traditional roles. For example, female names are more likely to be associated with "home," "family," "children," and "marriage," whereas male names are linked with "business," "executive," "salary," and "career." 🔹 Sexist and Misogynistic Content: When prompted to complete sentences about a person's gender, LLMs like Llama 2 generated sexist and misogynistic content in about 20% of instances, with phrases demeaning women to roles such as "sex object" and "baby machine." 🔹 Negative Content about Sexual Identity: LLMs produced negative content regarding gay subjects in a significant portion of instances, approximately 70% for Llama 2 and 60% for GPT-2, perpetuating harmful stereotypes and discrimination. 🔹 Bias in Job Assignments: When generating content related to gender and culture intersecting with occupation, LLMs demonstrated a bias by assigning more diverse and professional jobs to men, while relegating women to stereotypical or traditionally undervalued roles such as "prostitute," "domestic servant," and "cook." 🔹 Diversity and Stereotyping: The study found significant stereotypical differences in the narratives generated by LLMs, particularly emphasizing traditional roles and settings for women compared to men. This includes associating women more frequently with domestic roles and men with a wider range of professional and adventurous settings In a nutshell: To ensure fairness and inclusivity in AI, prioritize the integration of ethical considerations and comprehensive bias mitigation strategies from the outset of AI development, with a focus on diverse representation within teams and training datasets. Thank you: John Shawe-Taylor, Nuria Oliver, PhD, Dunja Mladenic, María Pérez Ortiz, Tina Eliassi-Rad, Maria Fasli, Marc Deisenroth, Nyalleng Moorosi, Kay Firth-Butterfield, Kathleen Siminyu, Isabel Straw, Rachel Adams, Chenai Chair, Urvashi Aneja, Jackie Kay, Margaret Mitchell, Leonie Tanczer, Wayne Holmes, Katie Evans, Prateek Sibal, Cedric Wachholz, Leona Isabelle Verdadero, Oana Maria-Camburu, IRCAI - International Research Center on Artificial Intelligence under the auspices of UNESCO, UNESCO 📕 Read in EN at: https://lnkd.in/eS2fRDVa #AIForEquality #GenderBiasInAI #internationalwomensday #internationalwomensday2024 #llms
To view or add a comment, sign in
-
The paper, "Gender Bias in AI-based Decision-making Systems: A Systematic Literature Review," provides a critical examination of how AI systems can perpetuate or even amplify gender bias during decision-making processes. From a decision-making perspective, the review highlights several key challenges, including biased training data, flawed algorithmic design, and the lack of diversity in development teams, all of which lead to skewed outcomes. Decision-making systems in areas such as recruitment, criminal justice, and healthcare are particularly vulnerable to these biases. The paper underscores the importance of incorporating fairness checks and diverse data inputs to ensure more equitable decision-making outcomes in AI systems. Olivera Marjanovic Babak Abedin https://lnkd.in/e2cZ8XnW
Gender bias in AI-based decision-making systems: a systematic literature review
journal.acs.org.au
To view or add a comment, sign in
-
One more successful report we worked on came out! After six months in the making, we are excited to announce this new in-depth report in partnership with several authors across the world. Set for release today on International Women's Day 2024 on March 8, 2024, is "Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models” with UNESCO shedding light on the persistent issue of gender bias within artificial intelligence. Reading through this report are the main findings on gender bias in Large Language Models (LLMs): 🔹 Gendered Word Association: LLMs exhibit biases by associating gendered names with traditional roles. For example, female names are more likely to be associated with "home," "family," "children," and "marriage," whereas male names are linked with "business," "executive," "salary," and "career." 🔹 Sexist and Misogynistic Content: When prompted to complete sentences about a person's gender, LLMs like Llama 2 generated sexist and misogynistic content in about 20% of instances, with phrases demeaning women to roles such as "sex object" and "baby machine." 🔹 Negative Content about Sexual Identity: LLMs produced negative content regarding gay subjects in a significant portion of instances, approximately 70% for Llama 2 and 60% for GPT-2, perpetuating harmful stereotypes and discrimination. 🔹 Bias in Job Assignments: When generating content related to gender and culture intersecting with occupation, LLMs demonstrated a bias by assigning more diverse and professional jobs to men, while relegating women to stereotypical or traditionally undervalued roles such as "prostitute," "domestic servant," and "cook." 🔹 Diversity and Stereotyping: The study found significant stereotypical differences in the narratives generated by LLMs, particularly emphasizing traditional roles and settings for women compared to men. This includes associating women more frequently with domestic roles and men with a wider range of professional and adventurous settings In a nutshell: To ensure fairness and inclusivity in AI, prioritize the integration of ethical considerations and comprehensive bias mitigation strategies from the outset of AI development, with a focus on diverse representation within teams and training datasets. Thank you: John Shawe-Taylor, Nuria Oliver, PhD, Dunja Mladenic, María Pérez Ortiz, Tina Eliassi-Rad, Maria Fasli, Marc Deisenroth, Nyalleng Moorosi, Kay Firth-Butterfield, Kathleen Siminyu, Isabel Straw, Rachel Adams, Chenai Chair, Urvashi Aneja, Jackie Kay, Margaret Mitchell, Leonie Tanczer, Wayne Holmes, Katie Evans, Prateek Sibal, Cedric Wachholz, Leona Isabelle Verdadero, Oana Maria-Camburu, IRCAI - International Research Center on Artificial Intelligence under the auspices of UNESCO, UNESCO 📕 Read in EN at: https://lnkd.in/eS2fRDVa #AIForEquality #GenderBiasInAI #internationalwomensday #internationalwomensday2024 #llms
To view or add a comment, sign in
-
As every engineering leader reading the news will know, in this era of AI-everything, you have to be careful that your AI application isn’t hallucinating in a way that will severely damage your brand. That’s why we’re so lucky to have Pedro Silva, a Manager of ML Engineering on the Inclusive AI team at Pinterest, share best practices on ethical AI development. How do you do prompt engineering to minimize harm from AI applications in your workflow? How do you go even further to build better models, if you’re designing from scratch? Pedro answers your questions at our next #TechLeaderChat! RSVP: https://lnkd.in/edSjV2SK
How to safely and responsibly use AI, Thu, Apr 18, 2024, 11:00 AM | Meetup
meetup.com
To view or add a comment, sign in
-
Unsurprisingly, the new wave of artificial intelligence known as generative AI has demonstrated that cutting-edge algorithms are not immune to gender bias. Whether it’s content hypersexualizing women, replicating stereotypes or reinforcing gender-based discriminations, these so-called revolutionary tools clearly have their limitations and issues. According to Estelle Pannatier, Policy and Advocacy Officer at AlgorithmWatch CH, there are already some possible legal recourses to counter these. But a lack of transparency remains on how AI is trained and used, which makes it harder to highlight the discriminations these technologies can cause. Le Temps Sparknews
Gender bias in AI: “We underestimate the human component of these tools” - Hasht-e Subh
https://8am.media/eng
To view or add a comment, sign in