The Responsible AI Bulletin #27: Reference RAI architecture for FM agents, digital bill of rights, and participatory AI design.
Welcome to this edition of The Responsible AI Bulletin, a weekly agglomeration of research developments in the field from around the Internet that caught my attention: a few morsels to dazzle with in your next discussion on AI, its ethical implications, and what it means for our future.
For those looking for more detailed investigations into research and reporting in the field of Responsible AI, I recommend subscribing to the AI Ethics Brief, published by my team at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy.
Responsible Generative AI: A Reference Architecture for Designing Foundation Model-based Agents
Foundation models (FMs), such as large language models (LLMs), have been widely recognized as transformative generative artificial intelligence (AI) technologies due to their remarkable capabilities to understand and generate content. Recently, there has been rapidly growing interest in developing FM-based autonomous agents, such as Auto-GPT and BabyAGI. With autonomous agents, users only need to provide a high-level goal rather than explicit step-by-step instructions. These agents derive their autonomy from the capabilities of FMs, which enable them to break a given goal down into manageable tasks and orchestrate task execution to fulfill it. Nevertheless, the architectural design of such agents has not yet been systematically explored. Many reusable solutions have been proposed to address the diverse challenges of designing FM-based agents, which motivates the design of a reference architecture. Therefore, we performed a systematic literature review of FM-based agents and identified a collection of architectural components and patterns that address different challenges of agent design. This paper presents a pattern-oriented reference architecture that offers design guidance for FM-based agents. We evaluate the completeness and utility of the proposed reference architecture by mapping it to the architectures of two real-world agents.
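To make the plan-then-execute pattern concrete, here is a minimal sketch of the goal-decomposition loop that Auto-GPT-style agents share. The `call_llm` function is a hypothetical stand-in for whatever FM completion API you use, and the planning/execution split illustrates the general pattern only, not the paper's reference architecture.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical FM call -- swap in your provider's completion client."""
    raise NotImplementedError

def plan(goal: str) -> List[str]:
    # The agent asks the FM to decompose the high-level goal into tasks.
    response = call_llm(
        f"Break this goal into a short numbered list of tasks:\n{goal}"
    )
    return [line.strip() for line in response.splitlines() if line.strip()]

def run_agent(goal: str) -> List[str]:
    results = []
    for task in plan(goal):
        # Each task is executed in turn (here, delegated back to the FM).
        # A real agent would also route tasks to tools, keep memory, and
        # re-plan when a task fails -- concerns the paper's patterns cover.
        results.append(call_llm(f"Complete this task: {task}"))
    return results
```

The point of the sketch is that the user supplies only `goal`; everything else (decomposition, ordering, execution) is derived from the FM, which is exactly where the paper's architectural patterns intervene.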
Continue reading here.
AI Ethics and Ordoliberalism 2.0: Towards a ‘Digital Bill of Rights’
Dozens of AI ethics initiatives and governance documents have emerged over the past few years, starting with the U.S. National Science and Technology Council’s ‘Preparing for the Future of AI’ and the E.U. Digital Charter in 2016. The latest examples include the E.U.’s proposed AI Act, the Biden-Harris Administration’s ‘Blueprint for an AI Bill of Rights,’ and the White House’s ‘Ensuring Safe, Secure, and Trustworthy AI Principles.’
AI ethics initiatives play an essential role in motivating morally acceptable professional behavior and in prescribing the fundamental duties and responsibilities of computer engineers, and they can therefore bring about fairer, safer, and more trustworthy AI applications. Yet they also come with various shortcomings. One of the main concerns is that the proposed AI guiding principles are often too abstract, vague, flexible, or confusing, and that they lack proper implementation guidance. Consequently, there is often a gap between theory and practice, resulting in a lack of practical operationalization by the AI industry.
Critics also point out the potential trade-off between ethical principles and corporate interests, and the possible use of such initiatives for ethics-washing or window-dressing purposes. Furthermore, most AI ethics guidelines are soft-law documents that lack adequate governance mechanisms and do not have the force of binding law, further exacerbating white- or green-washing concerns. Lastly, there is the possibility of regulatory or policy arbitrage, so-called jurisdiction shopping or ‘ethics shopping’: moving to countries with laxer standards and fewer constraints, e.g., offshoring to countries with less stringent requirements for AI systems.
Continue reading here.
The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice
The past few years have seen great enthusiasm about “participatory AI,” the idea that participation from affected stakeholders can be a way to incorporate the wider publics into the design and development of AI systems. This interest spans technology companies, such as OpenAI’s call for democratic inputs to AI; nonprofits, with the Ada Lovelace Institute examining public participation methods in commercial AI labs; and academia, where a workshop at the International Conference on Machine Learning called “Participatory Approaches to Machine Learning” garnered considerable attention.
The aforementioned examples illustrate a growing consensus that stakeholders “should” participate in AI design and development. Yet there is a lack of shared understanding of the theories and methods of participation within the AI community: AI practitioners may be navigating between potentially contradictory approaches and goals that are all branded as “participatory AI.”
Against this backdrop, this paper presents a conceptual framework that can guide practitioners of participatory AI (and those interested in designing or evaluating participatory AI approaches) in understanding the differences in a broad range of participatory goals and methods. Then, we use that framework to understand the current landscape of participatory AI by analyzing 80 research articles in which authors report using participatory methods for AI design and interviewing 12 authors of these papers to understand their motivations, challenges, and aspirations for participation.
We find that most current participatory AI efforts consult stakeholders for input on individual aspects of AI-based applications (e.g., the user interface) rather than empowering them to make key design decisions about datasets, model specifications, broader questions of appropriate use cases, or whether AI should be used at all. We discuss how many AI practitioners feel caught between their aspirations for participation and practical constraints, and thus turn to “proxy-based participation,” using human stand-ins and algorithmic proxies to represent stakeholders in shaping AI systems, an approach with significant potential drawbacks.
Continue reading here.
Comment and let me know what you liked and if you have any recommendations on what I should read and cover next week. You can learn more about my work here. See you soon!