How do chatbots determine gender? To test this, Decode's Hera Rizwan entered a set of names into well-known AI chatbots to see whether they would stick to gender-neutral pronouns. https://lnkd.in/dNWmk-Ju
-
One AI “girlfriend” chatbot describes itself as “Your devoted girlfriend, always eager to please you in every imaginable way.” The world does not need any more innovations that heighten a sense of male entitlement and female subservience. As AI gets more sophisticated, we need to take seriously the real risks associated with its perpetuation of harmful gender roles. How about some technology that supports respect, equality and mutuality?! (Where is the business model in that, you might well ask. Indeed. If profitability defines where we’re heading, we’re in trouble…)
AI girlfriends are here – but there’s a dark side to virtual companions | Arwa Mahdawi
theguardian.com
-
Locaria's Odile Berthon recently shared insights with LBBonline - Little Black Book on the subject of AI and gender bias. "So far, technological progress does not directly equal societal progress," explains Berthon, Owned Media Manager. "AI engines, trained on publicly available content, are only as good as the data that feeds them." The #localization industry has spent more than two decades working with machine-learning #technologies and has a lot to say on the subject. Read more from Berthon and other industry professionals in the article below👇
The data sets that AI draws from contain biases against non-dominant societal groups, namely women and people of colour. ChatGPT, Bard, Midjourney, DALL-E, and Gemini are all inherently biased. Why? LBB speaks to Tag's Deepti Velury, Sarofsky's Erin Sarofsky, MullenLowe Global’s Veronica Millan Caceres, MBA, PhD, and Locaria's Odile Berthon to learn about AI’s current gender bias, why it’s an issue, and what can be done to address it.
AI Has A Gender Bias: Now What? | LBBOnline
lbbonline.com
-
“If we put garbage in, we get garbage out. This is how any data works, and this is where the fundamental issue of gender bias in AI arises. Because the data is coming from humans who, by nature, hold biases, it’s impossible to expect the output to be anything other than this.” Great insights into why AI’s current gender bias is an issue and what the industry can be doing to address it ⤵
AI Has A Gender Bias: Now What? | LBBOnline
lbbonline.com
-
Let's talk about the controversy around Artificial Intelligence (AI) and gender bias. Although AI has been touted as a valuable workplace tool, with studies suggesting it could improve productivity growth in the coming decade, a new study advises that it should be used only with careful scrutiny, because its output discriminates against women. Researchers observed significant gender biases in both ChatGPT and Alpaca. "If people use these systems without rigor—we are just sending the issue back out into the world and perpetuating it." Read the full article here. https://lnkd.in/dHj-Z6Wn
-
This is TOO FUNNY 🤣 In the catch-up Google is doing in mainstream Gen AI, the company has attempted to address known AI issues with racial and gender stereotypes. The result is ... not exactly an improvement (yet) ☺️ See The Verge's article: https://lnkd.in/eaKGBN8E #ai #fail #stereotypes #racialbias #genai #cto #fcto
-
This is exactly the point I made in my talk on Human & Emotional Intelligence and Artificial & Machine Intelligence ⤵️ The "#ElizaEffect" will mean we're going to anthropomorphise these conversational AIs. And if GenAI systems are set up with flirty, helpful women's voices, there will be an impact. My previous piece on how tech fails women focused on misogyny and sexism in tech firms; this is an extension of that into model design: "What's crucial to note here is that these voice assistants don't just send a signal about gender norms, they send it at massive scale. The Unesco report explains, for example, that Apple's Siri 'made "her" debut not as a genderless robot, but as a sassy young woman who deflected insults and liked to flirt and serve users with playful obedience…'" https://lnkd.in/edTem_4a
What’s up with ChatGPT’s new sexy persona? | Arwa Mahdawi
theguardian.com
-
How does the rise of AI affect the LGBTQIA+ community? As part of our #LGBTPlusHM series, Apolloniya Vlasova explores algorithmic fairness, specific AI risks for the LGBTQIA+ community, and the current regulatory landscape. Learn more: https://lnkd.in/eN_eXBQE
LGBT+ History Month: Navigating AI risks and legal frameworks
mishcon.com
-
After recently reading some articles about gender bias in #GenAI, I came across some posts from more than a year ago by 2 true AI experts: Hadas Kotek, PhD and Ravit Dotan, PhD. Drs. Kotek and Dotan showed that when ChatGPT, at least in 2023, was faced with making assumptions about the gender of a doctor and a nurse, it assumed the doctor was a man and the nurse was a woman.

After reading their posts, I decided to test not only the latest version of ChatGPT, but also Claude and Gemini. And I'm sad to report that the issue continues. The chatbots* consistently concluded that, if an assumption needed to be made, judges, CEOs, and doctors must be men, while stenographers, secretaries, and nurses must be women. I attempted some tests on racial bias as well but couldn't devise prompts that would adequately test it.

That said, I'm not here to argue that generative AI is doomed or that the people behind these chatbots are evil anti-feminists. I'm a huge fan of GenAI, and I remain one after discovering the issues in this post. I'm raising these issues, or more accurately amplifying the issues raised by Drs. Kotek and Dotan, so we can be aware of generative AI's limitations; so we take care not to be overly reliant on these tools and account for their inherent biases; and so we remember that, even apart from generative AI, relying purely on what has happened in the past to make decisions about the future is bound to have an adverse impact on people who may have been discriminated against in the past. Even if we're not evil.

*Claude did the least badly: on 1 of the 6 single-gender prompts I used (not shown in the image), it concluded that there was no way to determine the answer and encouraged me not to make assumptions based on gender, and it almost always did the same for questions of race.

#GenerativeAI #implicitbias #genderbias