The October-November 2024 roundup of AI incidents is live! https://lnkd.in/eeYxE28j
Responsible AI Collaborative
Technology, Information and Internet
Los Angeles, California 1,077 followers
Collecting Data to make Artificial Intelligence Safer
About us
The Responsible AI Collaborative is a not-for-profit organization working to present real-world AI harms through our Artificial Intelligence Incident Database. We have answered Santayana's aphorism 'those who cannot remember the past are condemned to repeat it'... with data. We need your help to make AI safer, so collaborate with us by learning about and sharing the latest incidents at https://incidentdatabase.ai/ (public submissions are welcome).
- Website
- https://incidentdatabase.ai/
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Headquarters
- Los Angeles, California
- Type
- Nonprofit
- Founded
- 2022
Locations
- Primary
- Los Angeles, California, US
Updates
-
Responsible AI Collaborative reposted this
AI incident data could provide valuable safety learnings to inform policy decisions, but existing databases are often inconsistently structured. I made a tool that processes raw incident reports with a large language model and classifies the type of risk and the severity of harm caused across multiple categories. The aim is to enrich existing datasets and present the results graphically through a dashboard, so that policymakers can explore trends and patterns, gain insights into the impacts of AI on society, and use those insights to inform policy decisions.

I have co-authored a blog post with Jamie Bernardi that introduces the tool and discusses the context, motivation, preliminary results, and onward plans for the work. Link below (in comments) - please feel free to explore the dashboard yourself; feedback is very welcome.

The proof of concept classifies all incidents in the Responsible AI Collaborative's AI Incident Database using the causal and domain taxonomies from MIT FutureTech's AI Risk Repository, then assigns each incident a harm severity score in 10 different harm categories based on the Center for Security and Emerging Technology (CSET) AI Harm Taxonomy. We are careful to qualify the preliminary findings with the caveat that the input dataset relies on voluntary reporting and is therefore subject to bias and not representative of all real-world incidents. The tool can also help identify gaps in existing incident data, informing how new reporting processes should be designed.

Our next step is to thoroughly understand the 'user stories' of policy professionals and identify the specific questions a tool like this could address. This will help us generate a set of requirements for the next iteration. Together with the results of a validation study demonstrating the capabilities and limitations of the approach, this will provide insights into how policymakers could use the tool in practice.
Huge thanks to Jamie for the directional support on the project from the outset, all the constructive input on the write-up, and the patience in getting the details right.
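The pipeline described above could be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the taxonomy labels, function names, and JSON schema are all placeholder assumptions standing in for the MIT AI Risk Repository and CSET AI Harm Taxonomy categories, and the LLM call is replaced by a canned response.

```python
# Hypothetical sketch: prompt an LLM to label a raw incident report with a
# risk domain and per-category harm severity scores, then validate the
# model's JSON answer against the taxonomy. All labels are illustrative
# placeholders, not the real taxonomy entries.
import json
from dataclasses import dataclass

RISK_DOMAINS = ["misinformation", "privacy", "fraud", "safety"]   # placeholder subset
HARM_CATEGORIES = ["physical", "financial", "psychological"]      # 3 of the 10 described

@dataclass
class IncidentClassification:
    incident_id: int
    risk_domain: str
    severity: dict  # harm category -> integer score 0-4

def build_prompt(report_text: str) -> str:
    """Compose a structured-output prompt for the LLM."""
    return (
        "Classify the AI incident report below.\n"
        f"Choose one risk_domain from {RISK_DOMAINS} and give an integer "
        f"severity score 0-4 for each of {HARM_CATEGORIES}.\n"
        "Answer in JSON with keys: risk_domain, severity.\n\n"
        f"Report:\n{report_text}"
    )

def parse_response(incident_id: int, llm_json: str) -> IncidentClassification:
    """Validate the model's JSON answer against the placeholder taxonomy."""
    data = json.loads(llm_json)
    if data["risk_domain"] not in RISK_DOMAINS:
        raise ValueError("unknown risk domain")
    severity = {c: int(data["severity"][c]) for c in HARM_CATEGORIES}
    return IncidentClassification(incident_id, data["risk_domain"], severity)

# Canned model response in place of a real LLM call:
canned = ('{"risk_domain": "fraud", '
          '"severity": {"physical": 0, "financial": 3, "psychological": 1}}')
result = parse_response(101, canned)
```

Validated records like these could then be aggregated per domain and category to drive the dashboard views the post mentions.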
-
The August-September 2024 Incident Roundup is available at the AI Incident Database: https://lnkd.in/eRm_AMKf. We have been tracking both historical and newly emerging incidents involving political misinformation, deepfakes, and AI-driven fraud. We welcome new reports!
AI Incident Roundup – August and September 2024
incidentdatabase.ai
-
The monthly incident roundup for July is available! https://lnkd.in/eREhtrdy
AI Incident Roundup – July 2024
incidentdatabase.ai
-
The monthly AI incident roundup for June 2024 is now available! https://lnkd.in/eVPvhANS
AI Incident Roundup – June 2024
incidentdatabase.ai
-
The AI Incident Roundup for May 2024 is live. See our post for a summary and a listing of all the latest incident IDs added to the AI Incident Database: https://lnkd.in/eX8PDq97.
AI Incident Roundup – May 2024
incidentdatabase.ai
-
The AI Incident Roundup for April 2024 is live, focusing especially on the disconcerting proliferation of deepfakes and disinformation: https://lnkd.in/ehbM3jaR
AI Incident Roundup – April ‘24
incidentdatabase.ai
-
This retrospective on March's incidents in the AIID covers, among other things, the misuse of deepfake technology and issues of personal safety and privacy. https://lnkd.in/gN8BKifb
AI Incident Roundup – March ‘24
incidentdatabase.ai
-
Lots of new AI incidents added to the database in February. Check out our monthly roundup here: https://lnkd.in/gmw-gmkM
AI Incident Roundup – February ‘24
incidentdatabase.ai
-
Exciting announcement: the Digital Safety Research Institute of UL Research Institutes is partnering with the Responsible AI Collaborative to advance the AI Incident Database and its use for preventing AI incidents and public harms. https://lnkd.in/dDmQCwK6
Researching AI Incidents to Build a Safer Future: The Digital Safety Research Institute partners with the Responsible AI Collaborative
incidentdatabase.ai