The Responsible AI Bulletin #15: LLMs - bias and fairness, misinformation and user feedback, and conversational swarm intelligence.
Welcome to this edition of The Responsible AI Bulletin, a weekly agglomeration of research developments in the field from around the Internet that caught my attention - a few morsels to dazzle in your next discussion on AI, its ethical implications, and what it means for our future.
For those looking for more detailed investigations into research and reporting in the field of Responsible AI, I recommend subscribing to the AI Ethics Brief, published by my team at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy.
Bias and Fairness in Large Language Models: A Survey
Rapid advancements in large language models (LLMs) have enabled the understanding and generation of human-like text, with increasing integration into systems that touch our social sphere. Despite this success, these models can learn, perpetuate, and amplify harmful social biases. This paper presents a comprehensive survey of bias evaluation and mitigation techniques for LLMs. We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing. We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation, namely metrics and datasets, and one for mitigation. Our first taxonomy of metrics for bias evaluation organizes metrics by the different levels at which they operate in a model: embeddings, probabilities, and generated text. Our second taxonomy of datasets for bias evaluation categorizes datasets by their structure; we also release a consolidation of publicly available datasets for improved access. Our third taxonomy of techniques for bias mitigation classifies methods by their intervention during pre-processing, in-training, intra-processing, and post-processing. Finally, we identify open problems and challenges for future work. Synthesizing a wide range of recent research, we aim to provide a clear guide of the existing literature that empowers researchers and practitioners to better understand and prevent bias propagation in LLMs.
Continue reading here.
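To make the taxonomy above a little more concrete, here is a minimal sketch of what an embedding-level bias metric can look like, in the spirit of the WEAT-style association tests this literature covers. The word lists and the toy_embed function are placeholder assumptions for illustration; a real evaluation would use embeddings from the model being audited.

```python
# Minimal sketch of a WEAT-style embedding bias score (illustrative only).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w: str, A: list, B: list, embed) -> float:
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return np.mean([cosine(embed(w), embed(a)) for a in A]) - \
           np.mean([cosine(embed(w), embed(b)) for b in B])

def weat_effect_size(X, Y, A, B, embed) -> float:
    """Effect size comparing how target sets X and Y associate with
    attribute sets A and B; values near 0 indicate little measured bias."""
    x_assoc = [association(x, A, B, embed) for x in X]
    y_assoc = [association(y, A, B, embed) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Toy stand-in for real model embeddings (an assumption for this sketch).
_rng, _cache = np.random.default_rng(0), {}
def toy_embed(word: str) -> np.ndarray:
    if word not in _cache:
        _cache[word] = _rng.standard_normal(50)
    return _cache[word]

# Hypothetical target and attribute word lists.
X, Y = ["doctor", "engineer"], ["nurse", "teacher"]
A, B = ["he", "him", "man"], ["she", "her", "woman"]
print(weat_effect_size(X, Y, A, B, toy_embed))
```

With random toy embeddings the score is meaningless noise; the point is only the shape of the computation, which is one of several metric families the survey organizes alongside probability-based and generated-text-based measures.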
Listen to What They Say: Better Understand and Detect Online Misinformation with User Feedback
While online misinformation is a crucial issue for societies to address, given the growing distrust citizens express toward their institutions, current research suffers from three limitations. First, it is largely based on US data, providing a biased view of the phenomenon, as if misinformation were the same in every country. Second, it rarely analyzes misinformation content itself (social media posts), relying instead on databases annotated by fact-checkers. Third, it approaches misinformation only from the perspective of fact-checkers, with little attention to how social media users perceive it.
This paper suggests an original approach to fill these gaps: leveraging mixed methods to examine misinformation from the reporters’ perspective, both at the content level and comparatively across regions and platforms.
The authors present the first research typology to classify user reports, i.e., social media posts reported by users as 'false news', and show that misinformation varies in volume, type, and manipulative technique across countries and social media platforms. They identify six manipulative techniques used to convey misinformation and four profiles of 'reporters', i.e., social media users who report online content to platform moderators for distinct purposes. This allows the authors to explain 55% of the inaccuracy in misinformation reporting, suggest ways to reduce it, and present an algorithmic model capable of classifying user reports to improve misinformation detection tools.
Continue reading here.
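Since the summary does not specify the authors' model architecture, here is a deliberately generic sketch of the classification task: a TF-IDF plus logistic-regression baseline over hypothetical report texts and labels. The example posts and label names are invented for illustration and are not the paper's typology.

```python
# Generic sketch of a user-report classifier (not the authors' model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical reported posts and coarse report categories (invented).
texts = [
    "Vaccines contain microchips, share before this gets deleted!",
    "This politician was never even born in this country.",
    "I just disagree with this page, please take it down.",
]
labels = ["manipulative_claim", "false_claim", "dislike_report"]

# TF-IDF features over unigrams and bigrams, then a linear classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

# Route a new user report to a predicted category.
print(model.predict(["They are hiding the real numbers, wake up!"]))
```

In practice, separating genuine misinformation reports from 'dislike' reports along these lines is one way such a model could reduce the reporting inaccuracy the authors quantify.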
Conversational Swarm Intelligence (CSI) Enhances Groupwise Deliberation
They say, “many minds are better than one.” This is very true, for the collective intelligence of human groups increases greatly with population size. It’s also true that real-time conversations are a critical method by which teams evaluate complex problems and reach thoughtful solutions. These two facts, when combined, suggest that a powerful form of collaboration would be to enable real-time conversational deliberations among dozens, hundreds, or even thousands of networked individuals in unison.
Unfortunately, real-time conversations degrade in groups larger than about 5 to 7 people: turn-taking dynamics rapidly fall apart, providing less "airtime" per person and less ability to respond thoughtfully to others. Putting 50 people in a chat room or Zoom conference, for example, would not yield a "conversation" but just a stream of individual remarks; split evenly, a one-hour call would give each of those 50 participants barely more than a minute of speaking time. That's not deliberation, and it's not scalable to hundreds or thousands of people.
In this paper we describe Conversational Swarm Intelligence (CSI), an innovative technology modeled on the dynamics of fish schools, bird flocks, and bee swarms. It works by breaking large groups into small overlapping subgroups, each sized for thoughtful conversation. We then use Artificial Conversational Agents to propagate conversational content across the full population in real time. This combines the deliberative benefits of focused small-group discourse with the collective-intelligence benefits of aggregating input across very large groups.
Continue reading here.
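As a rough illustration of the structure described above, the sketch below splits a large population into small subgroups linked in a ring by overlapping members, and relays a summary of each subgroup's discussion to its neighbour. Subgroup size, overlap, topology, and the summarization step are all simplifying assumptions for illustration; the paper's conversational agents are considerably more sophisticated.

```python
# Illustrative sketch of CSI-style overlapping subgroups with relay agents.
import random

def make_subgroups(participants, size=6, overlap=1, seed=0):
    """Split participants into subgroups of ~`size`, then link each
    subgroup to the next in a ring by sharing `overlap` members."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = [shuffled[i:i + size] for i in range(0, len(shuffled), size)]
    for i, group in enumerate(groups):
        neighbour = groups[(i + 1) % len(groups)]
        group.extend(neighbour[:overlap])  # overlapping membership
    return groups

def relay_round(groups, summarize):
    """One propagation step: an artificial agent posts a summary of each
    subgroup's conversation into the next subgroup along the ring."""
    summaries = [summarize(g) for g in groups]
    return {i: summaries[(i - 1) % len(groups)] for i in range(len(groups))}

participants = [f"user_{i}" for i in range(48)]
groups = make_subgroups(participants)  # 8 subgroups of 6, each sharing a member
incoming = relay_round(groups, summarize=lambda g: f"summary of {g[0]}'s group")
print(len(groups), incoming[0])
```

Repeating such relay rounds lets ideas diffuse across the whole population while every individual only ever converses within a small group, which is the scalability property the authors are after.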
Comment and let me know what you liked and if you have any recommendations on what I should read and cover next week. You can learn more about my work here. See you soon!