🚀 Weekly AI Policy Extravaganza (May 28- June 4) 🌍
Artificial intelligence is quickly becoming an indispensable asset in addressing a range of challenges in today’s society – from domestic and international cyber threats to healthcare advancements and environmental management. While there's no shortage of opinions on AI's impact, one thing is clear: we need robust, flexible policies to harness its full potential. In this newsletter, we'll dive into global efforts to regulate AI, highlight key legislative moves, and discuss emerging challenges and opportunities.
For more insights and to stay ahead in the rapidly evolving world of technology and policy, don't forget to subscribe at Web3 Brew. Let's get started!
One step back:
Key Points in Global AI Policy
Efforts and innovations must be coordinated globally. A leader is needed to harmonize these efforts, avoiding a confusing system of disparate AI regulations.
Examples of Global Efforts:
🇺🇸 AI Lobbying Surges as U.S. Moves Toward New Regulations
The number of lobbyists focusing on AI issues surged in 2023 as the federal government considered new AI regulations, according to Public Citizen.
Key Numbers:
This spike aligns with the Biden administration’s executive order on AI, leading to increased lobbying activity, especially at the White House. The number of lobbyists engaging with the White House rose 188% in 2023, from 322 in Q1 to 931 in Q4.
Industry Involvement:
While the tech industry is the most active, it only accounts for 20% of AI lobbyists. Other sectors involved include financial services, education, transportation, defense, media, and healthcare.
What’s Next?
Public Citizen warns against industry self-regulation, emphasizing the need for strong, public-centered AI policies to ensure AI benefits everyone, not just major players. Expect lobbyist engagement to continue rising in 2024 as federal agencies implement new AI policies and Congress debates further proposals.
🌐🛡️ UN Highlights Human Rights Risks with Generative AI
The UN Human Rights Office has just released a crucial supplement on the human rights risks tied to generative AI. It is a supplement to the UN B-Tech Project’s foundational paper and a wake-up call on how generative AI can impact internationally agreed human rights.
So, what are these risks? 🤔
The UN document also points out that these risks are often more severe for vulnerable groups, especially women and girls. Generative AI isn’t just expanding existing risks; it’s creating entirely new ones!
Looking ahead, the report warns of more risks emerging as the technology evolves. It stresses the need to identify, prevent, and mitigate these human rights harms effectively.
What do you think? Are we prepared to tackle these challenges? 💡
🇺🇸 DoJ Charges Man for Creating Child Sexual Abuse Material Using Generative AI
The U.S. Department of Justice charged Steven Anderegg, 42, from Wisconsin, for using the AI image generator Stable Diffusion to create thousands of realistic child sexual abuse images. This is a landmark case that brings attention to the serious human rights risks tied to generative AI.
Key Points:
Why It Matters:
The DoJ’s action aligns with the UN’s concerns about AI-related human rights risks. This case could set a precedent for how AI-generated content is regulated and prosecuted.
What’s Next?
Expect more scrutiny and possibly new regulations around the use of generative AI. How do you think this will impact the development and deployment of AI technologies?
🇺🇸 California Advances Measures Targeting AI Discrimination and Deepfakes
California lawmakers are making big moves on AI! They’re pushing forward several proposals aimed at protecting jobs, building public trust, fighting algorithmic discrimination, and banning deepfakes involving elections or pornography.
Key Points:
Fighting AI Discrimination and Building Public Trust
Protecting Jobs and Likeness
Regulating Powerful Generative AI Systems
Banning Deepfakes Involving Politics or Pornography
Recommended by LinkedIn
Why It Matters:
California, home to many AI giants, is setting the stage for nationwide AI regulations. The state is learning from past mistakes with social media and aims to balance attracting AI companies with ensuring responsible AI use
California is positioning itself as a leader in AI regulation with ambitious bills targeting biased algorithms, election disinformation, and protecting digital likenesses. Governor Gavin Newsom has not taken a public stance on these bills yet but emphasized balancing innovation with potential AI risks.
🇪🇺 EU Creates AI Office, EDPB Warns on Facial Recognition
The European Commission has launched the AI Office, a new regulatory body tasked with enforcing the EU's groundbreaking AI Act. The AI Office will oversee high-risk "general-purpose AI models," including those powering systems like ChatGPT.
Key Functions of the AI Office:
In parallel, the European Data Protection Board (EDPB) has issued an opinion on the use of facial recognition technology in travel. The EDPB emphasized that "individuals should have maximum control over their own biometric data" in AI systems.
EDPB's Concerns and Recommendations:
The EU's proactive stance on AI regulation and data protection aims to balance technological innovation with ethical and legal safeguards, ensuring a responsible approach to AI development and deployment.
🇪🇺 EU Needs to Up Its AI Game, Auditors Say
The European Commission needs to invest more in AI to keep up with the US and China, according to a new report by the European Court of Auditors (ECA). Despite having new AI regulations, the Commission isn’t coordinating well with member states or tracking investments effectively.
Key Findings:
ECA member Mihails Kozlovs put it bluntly: “Big, focused AI investments are crucial for EU economic growth. In the AI race, the winner takes it all. The EU needs to step up, join forces, and unlock its AI potential.”
AI Investments: How the EU Stacks Up
AI adoption varies across the EU. France and Germany are leading with the biggest public AI investments. Just last week, French President Emmanuel Macron announced a €400 million investment to boost AI research across nine universities. The EU’s goal is for 75% of firms to use AI by 2030, hoping this tech will boost productivity and tackle societal challenges.
🇸🇬 Singapore Launches AI Governance Framework and Testing Toolkit
Singapore's Infocomm Media Development Authority (IMDA) rolled out the "Model AI Governance Framework for Generative AI." This framework aims to tackle the risks and challenges tied to the development and use of generative AI. It builds on Singapore's earlier AI governance efforts and adds new layers of oversight.
Key Dimensions of the Framework:
The framework stresses global collaboration, aiming to create a "Digital Commons" where common rules allow equal opportunities for all. What impact could this global approach have on AI development?
AI Verify Project Moonshot
Alongside the framework, Singapore’s Ministry for Communications and Information introduced AI Verify Project Moonshot. This open-source toolkit addresses security and safety issues in large language models (LLMs). It integrates red-teaming, benchmarking, and baseline testing into one user-friendly platform. How might this toolkit improve the reliability and safety of generative AI?
Singapore’s proactive stance sets a strong example in AI governance. Will other countries follow suit, and how might this shape the future of AI policy?
🇯🇵 Japan Provides Guidelines on Using Copyrighted Material for AI
Japan's Copyright Office has issued a "General Understanding on AI and Copyright" to clarify how the nation's Copyright Act applies to generative AI technologies.
Key Points:
Asian regulators are increasingly proactive in providing clear policy documents for AI governance, as seen with Japan and Singapore.
📑 Suggested Further Reading:
Colorado and EU AI Laws Raise Several Risks for Tech Businesses by Lena Kempe, LK Law Firm
Related Stories:
Key Topics:
Comparative Analysis: Attorney Lena Kempe compares the AI acts of the EU and Colorado, highlighting how both laws impose high-risk AI requirements for businesses. These comprehensive AI laws address responsible development and deployment, with extraterritorial effects for companies operating in or targeting these markets.
High-Risk Systems: Both laws target high-risk AI systems, focusing on preventing algorithmic discrimination (Colorado) and addressing health, safety, and fundamental rights risks (EU).
Provider/Developer Obligations: Developers and providers of high-risk AI systems have significant responsibilities under both laws, including maintaining and modifying AI systems within the defined high-risk criteria.
For a detailed comparison and actionable insights, check out the full article by Lena Kempe.
👀 What to Watch For: Keep your eyes on this space for continuous updates and in-depth analysis of AI policy trends. For more insights and to stay ahead in the rapidly evolving world of technology and policy, don't forget to subscribe at Web3 Brew. Let’s keep blending tech into our world and shaping a thoughtful digital future! 🌐 Until next week, keep sipping on that tech brew! 🍹
Partner @ ARC Law Firm| Columnist, Mentor, Blockchain Lawyer
6moThank you very much, Nesibe, for the excellent work. I appreciate your efforts and your skillful writing!