🤖AI Boost Weekly (June 11-18)
www.w3brew.com


Welcome to this week's edition of AI Boost! Our newsletter is dedicated to bringing you the latest updates and insights on AI policy and regulation. In this issue, we're covering a range of critical developments from June 11th to June 18th.

If you enjoy my work and want to support it, please subscribe to W3brew!

You'll find highlights from:

  • 🌐 G7 Leaders Commit to "Safe, Secure, and Trustworthy AI"
  • 🇪🇸 Spain Unveils Detailed AI Strategy 2024 🌟
  • 🇧🇷 Brazil Embraces AI Regulation as Government Turns to OpenAI 🤖
  • 🇰🇷 South Korea's Privacy Commission Inspects AI Services 🔍
  • 🏛️ Steve Scalise Says House Republicans Want No New AI Regulations 🚫
  • 📑 Regulators and AI: Recommendations from the GFI and Center for American Progress Report
  • 🇪🇺 EU Sets Out Plan for Migration and Asylum Pact Implementation


🌐 G7 Leaders Commit to "Safe, Secure, and Trustworthy AI"

In the Apulia Leaders' Communiqué, G7 leaders reaffirmed their commitment to promoting "safe, secure, and trustworthy AI" aligned with democratic values and human rights. They emphasized the need for coordinated governance frameworks and international standards for AI development and deployment.

Key Highlights:

  • AI Toolkit for Governments: Italy promoted a toolkit to encourage AI use in public administration.
  • Support for Emerging Economies: AI initiatives aim to assist emerging economies, particularly in Africa, to achieve development goals.
  • AI and Labor: The communiqué calls for action plans to leverage AI for decent work, workers' rights, and access to reskilling and upskilling.
  • Military AI Use: G7 leaders endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, ensuring AI's military applications comply with international law.
  • Justice Sector: AI must not interfere with judges' decision-making power or judicial independence.
  • AI Hub for Sustainable Development: A new initiative to support AI ecosystems in developing countries.
  • Global Standards: The summit promoted the Hiroshima AI Process and the Vatican's ethical guidelines for AI.

Pope Francis, addressing the G7 for the first time, warned of losing control over AI and stressed that "no machine should ever choose to take the life of a human being."

Key Points:

  • Techno-Human Condition: The Pope emphasized the intrinsic connection between humans and their tools, including AI, which should enhance rather than replace human decision-making.
  • Decision-Making: AI can make "algorithmic choices," but humans possess wisdom and moral judgment, crucial for significant decisions.
  • Ethical Algorithms: AI algorithms are not neutral and must be designed to respect human dignity and potential.

My Two Cents:

The G7 summit in Italy had a surprising and notable participant: Pope Francis. This was the first G7 summit to feature a pope as an invited participant. Many were taken aback when he brought up AI, but it made perfect sense given his experiences.

Remember that viral image of him in a white puffer coat? It was an AI-generated fake that sparked debates about deepfakes and misinformation. Having faced the negative sides of AI firsthand, it was natural for him to address this issue.

In his New Year's message, Pope Francis highlighted the potential for AI to promote peace. This continues his ongoing efforts with tech giants like Microsoft to advocate for ethical AI development through the Rome Call for AI Ethics, established in 2020. Gregory Allen from the Center for Strategic and International Studies noted Italy's efforts to promote these ethical guidelines and gather more support.

The Pope’s presence at the G7 summit underscored the crucial need for AI to serve humanity positively and the importance of international collaboration to ensure AI development respects human dignity.


🇪🇸 Spain Unveils Detailed AI Strategy 2024 🌟

The Spanish Government has published additional information on its AI Strategy 2024, approved on May 15, 2024, to be implemented during 2024-2025. This strategy aims to transform Spain into a leader in AI, focusing on three main axes: strengthening AI deployment throughout the economy, facilitating AI application in the public and private sectors, and promoting transparent, responsible, and humanistic AI.

Key Highlights:

  • Strengthening AI Deployment: The strategy aims to enhance AI capabilities across the economy, focusing on supercomputing, cloud infrastructures, AI language models, and talent development.
  • Facilitating AI Application: The government plans to boost AI adoption in both public and private sectors. This includes launching pilot AI projects through a state innovation lab and developing a common data governance model. Programs like Kit Consulting and Kit Digital will support the private sector in adopting AI solutions.
  • Promoting Transparent and Responsible AI: The strategy emphasizes creating a broad social consensus on AI usage, its limits, and human-AI interactions. The Spanish AI Supervision Agency (AESIA), established in August 2023, will play a crucial role in achieving this by ensuring AI is used ethically and safely.

Special Focus on Language Models:

Spain plans to create a language model specifically trained in Spanish and co-official languages to reduce biases and improve practical applications. This model, named ALIA, will provide an open, public, and accessible language infrastructure. With more than 20% of its training in Spanish and co-official languages, ALIA aims to support intelligent assistants, conversational systems, and content generation models.

Cybersecurity Initiatives:

The Spanish Government is also developing a cybersecurity law to create a clear national framework for cybersecurity. The National Cybersecurity Institute (INCIBE) will focus on innovation, collaboration, and AI adoption in the cybersecurity domain.

Applications Across Various Sectors:

The AI Strategy 2024 outlines potential AI applications in multiple sectors:

  • Healthcare: Modernizing healthcare by analyzing medical tests, identifying patterns, detecting rare diseases, and aiding drug design.
  • Education: Developing personalized, multidisciplinary educational content accessible to all students, regardless of location.
  • Environmental: Enhancing climate change efforts through efficient transport systems, energy management, and building climate efficiency.
  • Administration and Businesses: Generating new products and services through AI to boost economic growth.

European Leadership in AI:

On March 13, 2024, the European Parliament approved the EU AI Act, a landmark law advanced during Spain's presidency of the Council of the EU. This regulation positions Europe as a leader in AI innovation while safeguarding citizens' rights, including setting limits on biometric identification systems.

Funding and Support:

The AI Strategy 2024 is backed by 1.5 billion euros, primarily from the Recovery, Transformation, and Resilience Plan and its addendum, adding to the 600 million euros already mobilized. This investment aims to develop and expand AI usage transparently and ethically across Spain.


🇧🇷 Brazil Embraces AI Regulation as Government Turns to OpenAI 🤖

Brazil is making headlines in the AI space, but not without raising eyebrows. The country's Temporary Commission for AI released a report proposing a consolidated text for an AI bill. The report highlights the alignment on principles such as transparency, safety, trustworthiness, and non-discrimination among most AI bills in Brazil.

Key Highlights:

  • Rights- and Risk-Based Approach: The proposed legislation pairs a rights-based foundation with risk-based regulation and imposes administrative sanctions to ensure AI technologies are used responsibly.
  • AI and the Legal System: In a controversial move, the Brazilian government has hired OpenAI, the creator of ChatGPT, to help reduce the costs of court battles. This decision aims to expedite the screening and analysis of thousands of lawsuits using AI.

Critics argue that relying on OpenAI to screen lawsuits could lead to a lack of transparency and accountability in the legal system. While the government claims that AI will merely assist human employees, skeptics worry that this could erode human oversight in critical decision-making processes.

Why It Matters:

Court-ordered debt payments have significantly impacted Brazil's federal budget, with the government estimating it will spend 70.7 billion reais ($13.2 billion) on judicial decisions next year. The AI service will help flag the need for government action on lawsuits before final decisions, aiming to save costs and improve efficiency.

What’s Next:

Brazil’s AI strategy includes creating a legal framework that addresses data privacy and sovereignty concerns while promoting the responsible use of AI. The government plans to work closely with OpenAI, with all activities fully supervised by human officials to ensure accuracy and ethical compliance.


🇰🇷 South Korea's Privacy Commission Inspects AI Services 🔍

South Korea's Personal Information Protection Commission (PIPC) recently announced the results of its inspection into major AI application services. The inspections aimed to preemptively check for vulnerabilities and provide recommendations where personal data protection laws were violated.

Key Highlights:

  • SK Telecom/Adot: For Adot's call recording service, the PIPC recommended corrective actions such as minimizing the storage of text files, strengthening de-identification processes, and ensuring users clearly understand how the service works.
  • Snow: Regarding Snow's AI image editing app, the PIPC instructed Snow to make it clearer when users' personal data is sent to servers. They also advised ensuring external software development kits (SDKs) used for image filtering are thoroughly reviewed for safety.
  • DeepL: DeepL’s translation service had been using customer texts for AI learning without proper disclosure. The PIPC made no recommendation after DeepL added notifications about this practice.
  • Vuno: The PIPC found no violations with Vuno's medical imaging AI, which only uses pre-approved data from institutional review boards and data review committees.

Why It Matters:

The PIPC's preliminary inspections are significant as they highlight the importance of proactively addressing personal information vulnerabilities amidst the rapid growth of AI technologies. Ensuring data protection is critical for maintaining user trust and complying with international standards.

What’s Next:

The PIPC plans continued monitoring of AI application services to develop comprehensive personal information protection measures. This ongoing oversight aims to ensure the safe and ethical use of personal data in AI technologies.


🏛️ Steve Scalise Says House Republicans Want No New AI Regulations 🚫

House Majority Leader Steve Scalise made headlines this week by declaring that House Republicans are against passing any new AI-related regulations. This stance marks a significant position on one of the most crucial issues in tech policy today.

Key Highlights:

  • No New Regulations: Scalise emphasized that Congress should not introduce new regulations for AI. He argues that existing laws should be reviewed for gaps, but the innovation seen in the tech sector should continue without heavy-handed government intervention.
  • Meeting with Republicans: During a meeting with Republicans on the AI task force, Scalise outlined that there would be no support for legislation that sets up new agencies, establishes new licensing requirements, allocates funds for AI R&D, or favors certain technologies.
  • Exclusive Interview: In an exclusive interview after the meeting, Scalise stated, “We just want to make sure we don’t have government getting in the way of the innovation that’s happening. That’s allowed America to be dominant in the technology industry, and we want to continue to be able to hold that advantage going forward.”

Why It Matters:

This position highlights a significant divide in Congress on AI regulation. While Scalise and House Republicans favor minimal government intervention to foster innovation, other voices, such as Sen. John Hickenlooper (D-Colo.), advocate for new rules to manage AI’s rapid growth and potential risks.

What’s Next:

If you are looking for Congress to quickly adopt new legislation to regulate AI, you shouldn’t hold your breath as long as House Republicans maintain their majority. The focus will likely remain on ensuring the government does not impede the tech industry's progress.

Republican Perspective:

In a closed-door meeting Thursday, committee chairs and GOP members of the House's AI task force voiced their resistance to new AI regulations, framing the choice as standing with innovators against what they see as government overreach.

Additional Context:

  • Executive Action and Bipartisan Efforts: Conservative lawmakers raised concerns about President Biden's executive actions on AI and the recent Senate bipartisan working group report on AI, which suggested a policy roadmap for AI regulation and investment.

Broader Implications:

The debate over AI regulation is not just about managing technological advancement but also about balancing innovation with ethical considerations and societal impacts. AI has the potential to revolutionize industries, but it also poses risks such as job displacement, bias, and privacy invasion.


📑 Regulators and AI: Recommendations from the GFI and Center for American Progress Report

In a recent fact sheet, Governing for Impact (GFI) and the Center for American Progress outlined how the U.S. Department of Housing and Urban Development (HUD) and other housing regulators should address potential AI risks to housing fairness and discrimination using existing statutory authorities.

Key Recommendations:

Fair Housing Act (FHA):

  • Update Fair Housing Advertising Guidelines: Clarify Section 804(c)’s prohibition against discrimination in housing advertisements that rely on algorithmic tools or data.
  • Accountability for AI in Advertising: Ensure that companies providing AI-based advertising services are liable for discriminatory practices.

Dodd-Frank Act:

  • Automated Valuation Model (AVM) Rule: Continue rulemaking to include all mortgage lenders, especially nonbanks, and establish minimum standards for AVMs.
  • Disclosure and Alternatives: Require companies to disclose AVM usage to customers and allow alternatives to automated appraisals.


🇪🇺 EU Sets Out Plan for Migration and Asylum Pact Implementation

The European Commission has unveiled the Common Implementation Plan for the Pact on Migration and Asylum, aiming for full implementation by June 2026.

Key Highlights:

  • Eurodac System: An upgraded database to underpin the new solidarity and responsibility rules.
  • Border Management: Efficient procedures for asylum and return.
  • Reception Standards: Improved living conditions and healthcare.
  • Asylum Procedures: Streamlined, fair, and efficient processes.
  • Return Procedures: Ensures effective return of those without the right to stay.
  • Solidarity Mechanism: Legally binding support among EU countries.
  • Crisis Response: Enhanced resilience and preparedness.
  • Safeguards: Increased protection for vulnerable applicants.
  • Integration: Focus on resettlement and inclusion efforts.

Why It Matters: The Pact on Migration and Asylum represents a unified European approach to managing migration sustainably and fairly. It emphasizes solidarity among EU countries and ensures the protection of fundamental rights for migrants.

My Take: The plan underscores the EU’s commitment to a balanced migration system. However, the success of this initiative hinges on effective national implementation and continuous international cooperation. The inclusion of safeguards and crisis response mechanisms reflects a proactive approach to potential challenges. Let's see if this will indeed create a more equitable and efficient migration system in Europe.

What’s Next:

The legal instruments of the Pact entered into force on June 11, 2024, and will come into application in June 2026. The focus now is on designing and implementing national plans to make the Pact a reality by the target date.


That's a wrap for this week's AI Boost! The landscape of AI policy and regulation is evolving rapidly, and it's crucial to stay informed about these developments.

👀 What to Watch For: Keep your eyes on this space for continuous updates and in-depth analysis of AI policy trends. For more insights and to stay ahead in the rapidly evolving world of technology and policy, don't forget to subscribe at Web3 Brew. Let’s keep blending tech into our world and shaping a thoughtful digital future! 🌐 Until next week, keep sipping on that tech brew! 🍹
