🤖 AI Boost Weekly (June 11-18)
Welcome to this week's edition of AI Boost! Our newsletter is dedicated to bringing you the latest updates and insights on AI policy and regulation. In this issue, we're covering a range of critical developments from June 11th to June 18th.
If you enjoy my work and want to support it, please subscribe to W3brew!
You'll find highlights from:
🌐 G7 Leaders Commit to "Safe, Secure, and Trustworthy AI"
In the Apulia Leaders' Communiqué, G7 leaders reaffirmed their commitment to promoting "safe, secure, and trustworthy AI" aligned with democratic values and human rights. They emphasized the need for coordinated governance frameworks and international standards for AI development and deployment.
Key Highlights:
Pope Francis, addressing the G7 for the first time, warned of losing control over AI and stressed that "no machine should ever choose to take the life of a human being."
My Two Cents:
The G7 summit in Italy had a surprising and notable participant: Pope Francis. This was the first G7 summit to feature a pope as an invited participant. Many were taken aback when he brought up AI, but it made perfect sense given his experiences.
Remember that viral image of him in a white puffer coat? It was an AI-generated fake that sparked debates about deepfakes and misinformation. Having faced the negative sides of AI firsthand, it was natural for him to address this issue.
In his New Year's message, Pope Francis highlighted the potential for AI to promote peace. This continues his ongoing efforts with tech giants like Microsoft to advocate for ethical AI development through the Rome Call for AI Ethics, established in 2020. Gregory Allen from the Center for Strategic and International Studies noted Italy's efforts to promote these ethical guidelines and gather more support.
The Pope’s presence at the G7 summit underscored the crucial need for AI to serve humanity positively and the importance of international collaboration to ensure AI development respects human dignity.
🇪🇸 Spain Unveils Detailed AI Strategy 2024 🌟
The Spanish Government has published additional information on its AI Strategy 2024, approved on May 15, 2024, to be implemented during 2024-2025. This strategy aims to transform Spain into a leader in AI, focusing on three main axes: strengthening AI deployment throughout the economy, facilitating AI application in the public and private sectors, and promoting transparent, responsible, and humanistic AI.
Key Highlights:
Special Focus on Language Models:
Spain plans to create a language model specifically trained in Spanish and the country's co-official languages to reduce biases and improve practical applications. This model, named ALIA, will provide an open, public, and accessible language infrastructure. With more than 20% of its training data in Spanish and co-official languages, ALIA aims to support intelligent assistants, conversational systems, and content generation models.
Cybersecurity Initiatives:
The Spanish Government is also developing a cybersecurity law to create a clear national framework for cybersecurity. The National Cybersecurity Institute (INCIBE) will focus on innovation, collaboration, and AI adoption in the cybersecurity domain.
Applications Across Various Sectors:
The AI Strategy 2024 also outlines potential AI applications across multiple sectors of the economy.
European Leadership in AI:
On March 13, 2024, the European Parliament approved the EU AI Act, a landmark law advanced during Spain's presidency of the EU Council. This regulation positions Europe as a leader in AI innovation while safeguarding citizens' rights, including setting limits on biometric identification systems.
Funding and Support:
The AI Strategy 2024 is backed by 1.5 billion euros, primarily from the Recovery, Transformation, and Resilience Plan and its addendum, adding to the 600 million euros already mobilized. This investment aims to develop and expand AI usage transparently and ethically across Spain.
🇧🇷 Brazil Embraces AI Regulation as Government Turns to OpenAI 🤖
Brazil is making headlines in the AI space, but not without raising eyebrows. The country's Temporary Commission for AI released a report proposing a consolidated text for an AI bill. The report highlights the alignment on principles such as transparency, safety, trustworthiness, and non-discrimination among most AI bills in Brazil.
Key Highlights:
Critics argue that relying on OpenAI to screen lawsuits could lead to a lack of transparency and accountability in the legal system. While the government claims that AI will merely assist human employees, skeptics worry that this could erode human oversight in critical decision-making processes.
Why It Matters:
Court-ordered debt payments have significantly strained Brazil's federal budget, with the government estimating it will spend 70.7 billion reais ($13.2 billion) on judicial decisions next year. The AI service will help flag lawsuits that require government action before final decisions are issued, aiming to cut costs and improve efficiency.
What’s Next:
Brazil’s AI strategy includes creating a legal framework that addresses data privacy and sovereignty concerns while promoting the responsible use of AI. The government plans to work closely with OpenAI, with all activities fully supervised by human officials to ensure accuracy and ethical compliance.
🇰🇷 South Korea's Privacy Commission Inspects AI Services 🔍
South Korea's Personal Information Protection Commission (PIPC) recently announced the results of its inspection into major AI application services. The inspections aimed to preemptively check for vulnerabilities and provide recommendations where personal data protection laws were violated.
Why It Matters:
The PIPC's preliminary inspections are significant as they highlight the importance of proactively addressing personal information vulnerabilities amidst the rapid growth of AI technologies. Ensuring data protection is critical for maintaining user trust and complying with international standards.
What’s Next:
The PIPC plans continued monitoring of AI application services to develop comprehensive personal information protection measures. This ongoing oversight aims to ensure the safe and ethical use of personal data in AI technologies.
🏛️ Steve Scalise Says House Republicans Want No New AI Regulations 🚫
House Majority Leader Steve Scalise made headlines this week by declaring that House Republicans oppose passing any new AI regulations, staking out a clear position on one of the most consequential issues in tech policy today.
Key Highlights:
In a closed-door meeting Thursday, committee chairs and GOP members of the House's AI task force voiced their resistance to supporting new AI regulations, citing concerns that regulation could stifle innovation.
Why It Matters:
This position highlights a significant divide in Congress on AI regulation. While Scalise and House Republicans favor minimal government intervention to foster innovation, other voices, such as Sen. John Hickenlooper (D-Colo.), advocate for new rules to manage AI's rapid growth and potential risks.
What's Next:
If you are hoping for Congress to quickly adopt new AI legislation, don't hold your breath as long as House Republicans maintain their majority. The focus will likely remain on ensuring the government does not impede the tech industry's progress.
Broader Implications:
The debate over AI regulation is not just about managing technological advancement but also about balancing innovation with ethical considerations and societal impacts. AI has the potential to revolutionize industries, but it also poses risks such as job displacement, bias, and privacy invasion.
📑 Regulators and AI: Recommendations from the GFI and Center for American Progress Report
In a recent fact sheet, Governing for Impact (GFI) and the Center for American Progress outlined how the U.S. Department of Housing and Urban Development (HUD) and other housing regulators should address potential AI risks to housing fairness and discrimination using existing statutory authorities.
Key Recommendations:
The fact sheet identifies existing statutory authorities, under the Fair Housing Act (FHA) and the Dodd-Frank Act, that HUD and other housing regulators could draw on to address AI-driven risks to housing fairness.
🇪🇺 EU Sets Out Plan for Migration and Asylum Pact Implementation
The European Commission has unveiled the Common Implementation Plan for the Pact on Migration and Asylum, aiming for full implementation by June 2026.
Why It Matters: The Pact on Migration and Asylum represents a unified European approach to managing migration sustainably and fairly. It emphasizes solidarity among EU countries and ensures the protection of fundamental rights for migrants.
My Take: The plan underscores the EU’s commitment to a balanced migration system. However, the success of this initiative hinges on effective national implementation and continuous international cooperation. The inclusion of safeguards and crisis response mechanisms reflects a proactive approach to potential challenges. Let's see if this will indeed create a more equitable and efficient migration system in Europe.
What’s Next:
The legal instruments of the Pact entered into force on June 11, 2024, and will come into application in June 2026. The focus now is on designing and implementing national plans to make the Pact a reality by the target date.
That's a wrap for this week's AI Boost! The landscape of AI policy and regulation is evolving rapidly, and it's crucial to stay informed about these developments.
👀 What to Watch For: Keep your eyes on this space for continuous updates and in-depth analysis of AI policy trends. For more insights and to stay ahead in the rapidly evolving world of technology and policy, don't forget to subscribe at Web3 Brew. Let’s keep blending tech into our world and shaping a thoughtful digital future! 🌐 Until next week, keep sipping on that tech brew! 🍹