The Responsible AI Bulletin #11: Ethical concerns with voice assistants, prompt middleware, and dual governance approaches.
Generated using DALL-E 2


Welcome to this edition of The Responsible AI Bulletin, a weekly agglomeration of research developments in the field from around the Internet that caught my attention - a few morsels with which to dazzle in your next discussion on AI, its ethical implications, and what it means for our future.

For those looking for more detailed investigations into research and reporting in the field of Responsible AI, I recommend subscribing to the AI Ethics Brief, published by my team at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy.


A Systematic Review of Ethical Concerns with Voice Assistants

Generated using DALL-E 2

Is Alexa always recording? There is something unsettling about devices that are always listening to you. The research community has been studying people’s concerns with voice assistants and working on ways to mitigate them. While this began with privacy concerns, it has expanded to explore topics like accessibility, the social nature of interactions with voice assistants, how they’re integrated into the home, and how the design of their personalities fits into societal norms and stereotypes around gender.

This work is incredibly diverse and draws from various disciplines, including computer science, law, psychology, and the social sciences, meaning it can be difficult to follow the frontiers or connect discoveries from one field with another. We systematically reviewed research on ethical concerns with voice assistants to address this.

In addition to a detailed analysis of nine major concerns, we also examined how research on the topic is conducted. Despite diversity and inclusion efforts across many disciplines in the field, 94% of the papers recruiting human participants drew them solely or mostly from Europe and North America. There was also a noticeable shift towards using quantitative methods between 2019 and 2021 (41% to 63%) as research moved online during the pandemic.

Continue reading here.


Prompt Middleware: Helping Non-Experts Engage with Generative AI

Generated using DALL-E 2

Large Language Models (LLMs) are a recent advancement in natural language processing. This form of generative AI is capable of producing high-quality text based on natural language instructions, called “prompts.” However, crafting high-quality prompts is challenging, and currently, it is as much an art as it is a science. 

To facilitate the creation of high-quality prompts, the PromptMaker system from Google Research guides users in generating prompts for few-shot learning by walking non-experts through the process of articulating examples. This is a great first step in teaching non-experts the art of prompting, but ultimately, there is more to getting good responses from an LLM than writing “well-structured” prompts. Crucially, users need to know what models can do and what they should do.
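To make this concrete, here is a minimal sketch of how few-shot examples can be assembled into a single prompt string. The helper name and the “Input:/Output:” format are illustrative assumptions, not PromptMaker’s actual implementation:

```python
# A minimal sketch of few-shot prompt assembly in the spirit of PromptMaker.
# The helper name and the "Input:"/"Output:" format are illustrative
# assumptions, not PromptMaker's actual code.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, worked examples, and a new query into one prompt."""
    lines = [instruction, ""]
    for source, target in examples:
        lines += [f"Input: {source}", f"Output: {target}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    instruction="Rewrite each sentence in a more polite tone.",
    examples=[
        ("Send me the report.", "Could you please send me the report?"),
        ("This is wrong.", "I think there may be an issue here."),
    ],
    query="Fix this bug now.",
)
print(prompt)  # The assembled prompt would then be sent to an LLM.
```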

As an example, consider the generative art model Midjourney. Given a text prompt, it can generate artistic images that mimic the styles of particular artists and photographers. However, people without artistic training may struggle to specify a particular artistic style or to consider essential elements such as framing, color palette, lighting, and focal points. When aiming for photo-realistic results, it is often necessary to specify the pixel resolution or even the camera aperture within the prompt. These considerations go beyond mere prompt structuring; they encapsulate insights that seasoned artists inherently possess. That was the goal of our project: to integrate these expert decisions explicitly as interface affordances.
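To illustrate, a hypothetical prompt builder for a text-to-image model might expose exactly these expert decisions as named parameters. The function and parameter names below are assumptions for illustration, not an actual Midjourney API:

```python
# A hypothetical prompt builder for a text-to-image model. The parameters
# mirror the expert decisions discussed above (style, framing, lighting,
# resolution, aperture); they are illustrative, not a real Midjourney API.

def build_image_prompt(subject, style=None, framing=None,
                       lighting=None, resolution=None, aperture=None):
    parts = [subject]
    if style:
        parts.append(f"in the style of {style}")
    if framing:
        parts.append(f"{framing} shot")
    if lighting:
        parts.append(f"{lighting} lighting")
    if resolution:
        parts.append(resolution)
    if aperture:
        parts.append(f"shot at {aperture}")
    return ", ".join(parts)

print(build_image_prompt(
    subject="a lighthouse on a rocky coast at dusk",
    framing="wide-angle", lighting="golden hour",
    resolution="8K", aperture="f/1.8",
))
# -> a lighthouse on a rocky coast at dusk, wide-angle shot,
#    golden hour lighting, 8K, shot at f/1.8
```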

In our work, we explored how a user interface can communicate to non-experts the key considerations that guide domain experts. We introduced the concept of “Prompt Middleware,” a technique for communicating LLM capabilities through interface affordances such as buttons and drop-down menus. We instantiated this approach in a feedback system called FeedbackBuffet, which exposes best practices for articulating feedback requests through interface affordances and assembles prompts using a template-based approach.
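To sketch the idea, the snippet below maps user selections (standing in for drop-down menus) onto a prompt template. The template wording and option names are hypothetical, not FeedbackBuffet’s actual implementation:

```python
# A minimal sketch of Prompt Middleware: expert knowledge is encoded as a
# small set of options, the UI exposes them as drop-downs, and the middleware
# fills a prompt template from the user's selections. The template and
# options are hypothetical, not FeedbackBuffet's actual implementation.

TEMPLATE = (
    "You are giving feedback on a piece of writing.\n"
    "Focus on: {focus}.\n"
    "Tone: {tone}.\n"
    "Format the feedback as {fmt}.\n\n"
    "Text to review:\n{text}"
)

# Options a domain expert would curate; each list backs one drop-down menu.
FOCUS_OPTIONS = ["clarity", "structure", "argument strength"]
TONE_OPTIONS = ["encouraging", "direct"]
FORMAT_OPTIONS = ["a bulleted list", "a short paragraph"]

def assemble_prompt(focus, tone, fmt, text):
    """Fill the template with the user's drop-down selections."""
    return TEMPLATE.format(focus=focus, tone=tone, fmt=fmt, text=text)

prompt = assemble_prompt(
    focus=FOCUS_OPTIONS[0], tone=TONE_OPTIONS[0], fmt=FORMAT_OPTIONS[0],
    text="Our results suggest that the method generalizes...",
)
# The assembled prompt is then sent to an LLM on the user's behalf.
```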

Continue reading here.


Dual Governance: The intersection of centralized regulation and crowdsourced safety mechanisms for Generative AI

Generated using DALL-E 2

Motivation

With the rise of text-to-image generative AI models, independent artists have seen an influx of artwork whose style closely resembles their own. Their work has been scraped without permission, and applications can now produce images in their distinctive art style from a simple prompt. Options for recourse are very limited. Sure, the artists may be able to sue the company, but a successful lawsuit takes years. They may be able to use open-source tools to stop future scraping of their work, but this requires finding reputable, well-maintained tools that fit their needs.

Furthermore, this situation affects not only independent artists but also large companies and, of late, actors and writers. Artists have launched a lawsuit against Stability AI and Midjourney, Getty Images is suing Stability AI for unlawful scraping, and the use of ChatGPT and generative AI to write television scripts and recreate actors’ likenesses has become a key point of contention between the AMPTP and the Writers Guild of America and SAG-AFTRA in the ongoing strikes.

These events point to two main problems: first, the absence of centralized government regulation and of an accessible channel for reporting the problems encountered; and second, the absence of a trustworthy repository of tools that stakeholders can use in such situations.

Proposed Policy Framework

To tackle these problems and create guardrails around the use of generative AI, we propose a framework that outlines a partnership between centralized regulatory bodies and the community of safety tool developers. We analyzed the current AI landscape and highlighted key generative AI harms. Next, by considering the stakeholders of AI systems, including rational commercial developers of AI systems and consumers of AI products, we defined the criteria that an effective governance framework would need to fulfill. We then reviewed policies for tackling AI harms from U.S. agencies such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST). Similarly, we examined crowdsourced tools developed by ML practitioners in industry and evaluated how well each fits the defined criteria.

Finally, we introduced our framework and described how crowdsourced mechanisms could exist alongside federal policies. Crucially, our framework includes establishing standards for developing and using AI systems through a federal agency (or a collaboration of agencies), methods for certifying crowdsourced mechanisms to build trust with users, and a method for creating and adding new policies to the framework. We also demonstrated how our framework satisfies the previously stated criteria.

Continue reading here.


Comment and let me know what you liked and if you have any recommendations on what I should read and cover next week. You can learn more about my work here. See you soon!

Mati Shufan

Owner, Website Content Services and AI solutions at "Literally"

1y

An insightful point on prompt engineering in section two. To add another dimension, I think that the prevalent use, or even overuse, of free versions such as ChatGPT 3.5, as opposed to more advanced paid versions like ChatGPT 4, often leads to skewed user perceptions. Users might encounter limitations, leading to disappointment and abandonment of the tool without truly realizing its potential. I think it underscores an incentive for leading tech companies in AI to address this at a macro level, while simultaneously promoting greater awareness among public entities and the media.

John Marrett

Helping mid-sized organizations increase sales and improve customer service since 1993 | #LinkedInLocal

1y

Abhishek, Mycroft AI Inc has developed an open-source, privacy-enabled voice assistant. More on their website: https://mycroft.ai/

Jonathan DeGange

Lead AI Scientist, Client Technology AI Governance

1y

Wonderful. Where can we see the full details of the proposed framework?
