Naaia News - August 2024

REGULATION IN EUROPE

  • EU: the AI Act entered into force on August 1st

A refresher on the AI Act's calendar:

- February 2, 2025: Ban on prohibited AI practices

- August 2, 2025: Requirements for general-purpose AI models

- August 2, 2026: Requirements for all AI systems, except high-risk AI systems already covered by EU certification regulations

- August 2, 2027: Requirements for high-risk AI systems already covered by EU certification regulations

Next milestone: the multi-stakeholder consultation on trustworthy general-purpose AI models under the AI Act closes on September 18, with results to be published shortly after. The consultation focuses on three axes: transparency and copyright-related provisions for general-purpose AI models; risk taxonomy, assessment, and mitigation for general-purpose AI models with systemic risk; and the reviewing and monitoring of the Codes of Practice for general-purpose AI models.


REGULATION WORLDWIDE

  • UN

The draft "United Nations Convention against Cybercrime" has been published. The draft Convention requires signatory parties to enact a comprehensive, 'gender-sensitive' set of measures to combat the cybercrimes identified therein while upholding the protection of human rights. These measures encompass methods and detailed procedures for investigation, prosecution, and the collection and sharing of evidence, as well as the establishment of these crimes as criminal offences under domestic law, subject to specific legal criteria. The Convention identifies as cybercrimes, for instance, illegal access to an information and communications technology system, illegal interception by technical means, interference with electronic data, and interference with an information and communications technology system. All are listed in the document.

AI has been cited as an amplifier for the overwhelming majority of these offences, if not all of them, notably the dissemination of sexual material, which is particularly concerning in cases involving children (CSAM). Such cases are, very alarmingly, on the rise, as noted in Europol's 'Internet Organised Crime Threat Assessment (IOCTA) 2024' report.

  • United States

  1. Litigation on AI and copyright is on the rise: there are 29 ongoing copyright-infringement lawsuits against various AI companies, including Google, OpenAI, Microsoft, and GitHub, in the United States alone. Common themes emerge among them:

- Alleged unauthorized use of copyrighted material (text, source code, images, artworks, voices, etc.) to train AI products. Plaintiffs commonly cite this unauthorized use as a violation of the Digital Millennium Copyright Act (DMCA), a cornerstone of US digital copyright law. The DMCA has been, and will continue to be, supplemented in response to AI's particularities by subsequent guidance from the US Copyright Office, as directed by the Presidential Executive Order on AI; one example is the 'Copyright and Artificial Intelligence' report series, whose first part is the 'Digital Replicas' chapter.

- Alleged breach of contract: violations of the terms of the various licenses under which developers or creators had released their materials (notably on sites/repositories hosted by GitHub and Google). Licenses generally require attribution and adherence to certain conditions whenever the licensed materials are used. This touches on the growing issue of attribution and labelling in the AI ecosystem.

- Alleged violations of California consumer protection law: although not directly tied to copyright, some lawsuits cite the allegedly illegal use of defendants' customer data to train and fine-tune their models as a supporting argument to highlight misconduct in their AI development practices.

Separately, US Senator John Hickenlooper has announced plans to introduce the Validation and Evaluation for Trustworthy (VET) AI Act, which aims to establish third-party audits for AI companies. The bill would require the National Institute of Standards and Technology (NIST) to work with federal agencies and stakeholders to develop guidelines for the certification of third-party evaluators. These assessors would independently verify AI companies' compliance with risk-management and safety protocols.

  2. California Senate Bill SB 1047, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act"

This Bill is a major legislative move to regulate large-scale AI systems, focusing on ensuring safety and mitigating AI-related risks. It applies to powerful AI models costing over $100 million to train, requires strict safety assessments and "kill switches", and mandates third-party audits by 2028. It also includes strong whistleblower protections; creates CalCompute, a public cloud cluster to support startups, researchers, and community groups developing AI aligned with California's values; and empowers California's Attorney General to take legal action if an AI model causes severe harm or poses a significant public safety risk.

Major AI companies, including OpenAI, Anthropic, and Stability AI, have strongly opposed the bill as stifling for innovation and threatening for open-source AI development in particular. After weeks of intense exchanges among the Senate, civil society, and industry representatives, the Bill received several amendments: criminal penalties for perjury were replaced with civil penalties; the proposed Frontier Model Division (FMD) was eliminated; legal standards were adjusted; a new threshold was proposed to protect startups' ability to fine-tune open-sourced models; and the Attorney General's ability to seek civil penalties was cut unless a harm has occurred or there is an imminent threat to public safety.

  3. California's AB 3211 – renewed support by the industry

The Bill, introduced in early 2024, mandates that AI-generated content be labeled with watermarks to clearly identify its synthetic origin, and requires developers to embed metadata specifying which parts of content are AI-generated. It aims to combat misinformation by ensuring transparency about the nature and source of digital content. After facing increased opposition from major AI industry companies, it passed the California Assembly on August 22 with significant amendments. Notably, the vulnerability-notification requirement, which held AI providers responsible for notifying the Department of Technology if erroneous or malicious inclusion or removal of provenance information or watermarks was identified, was completely removed following intense industry lobbying. Major AI actors such as Microsoft and OpenAI have since expressed their support.
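To make the labelling requirement concrete, here is a minimal, purely illustrative Python sketch (using the Pillow imaging library) of embedding provenance metadata in a generated image via PNG text chunks. The key names, values, and the mechanism itself are hypothetical: AB 3211 does not prescribe this format, and production systems typically build on provenance standards such as C2PA.

# Illustrative only: embed hypothetical AI-provenance metadata in a PNG
# using Pillow's text chunks. Not the mechanism mandated by AB 3211.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), "white")  # stand-in for a generated image

metadata = PngInfo()
metadata.add_text("ai_generated", "true")  # hypothetical key name
metadata.add_text("provenance", json.dumps({
    "generator": "example-model-v1",   # hypothetical model identifier
    "ai_generated_regions": "all",     # which parts of the content are synthetic
}))
image.save("output.png", pnginfo=metadata)

# Read the metadata back to confirm the label survived saving.
print(Image.open("output.png").text)

Metadata of this kind is easy to strip, which is why the bill also addressed the erroneous or malicious removal of provenance information and watermarks.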

  • Hong Kong

The Hong Kong Monetary Authority (HKMA) has launched the "Generative Artificial Intelligence Sandbox", a controlled environment where banks can safely test and develop new uses for generative AI (GenAI) technology, such as chatbots and risk-management tools, without risking real-world consequences. The sandbox framework includes technical support and oversight from the HKMA, ensuring the responsible use of AI innovations.

  • Australia

The Australian government has introduced a new AI policy for the public sector, effective September 1, 2024. Developed by the Digital Transformation Agency, the policy aims to make government a leader in safe and ethical AI use by promoting transparency, governance, and risk management. It follows the "enable, engage, and evolve" framework, requiring public agencies to establish clear accountability (including appointing a Responsible AI Officer), adopt AI responsibly, and adapt to technological advancements. Supporting guidelines emphasize collaboration and the ongoing evolution of AI governance.

  • Türkiye

The Turkish Data Protection Authority issued 'Recommendations on the Protection of Personal Data in the Field of AI'. The principal recommendations include:

- Adhering to national and international regulations when designing AI technologies.

- Adopting a data minimization policy, considering ethical implications, and collaborating with impartial experts during the development process.

- Maintaining accountability, creating sector-specific risk matrices, ensuring human oversight of AI decisions, and promoting cooperation among regulatory bodies at a strategic level.

- Fostering digital literacy, stakeholder participation, and investment in ethical AI.

  • Japan

Japan's Defense Ministry has introduced its first basic policy on the use of AI, aiming to address manpower shortages and maintain competitiveness in military technology with, notably, China and the U.S., in light of increased tensions in the Pacific. The policy outlines seven priority areas for AI application, including target detection, intelligence analysis, and unmanned military assets, with the goal of enhancing decision-making, reducing personnel burdens, and improving operational efficiency. While emphasizing the importance of human oversight in AI use, the policy also acknowledges the increased risks that such uses of AI bring, such as errors and biases.


TECHNOLOGY 

  • First European high-performance chips to be made in Dresden:

The European Semiconductor Manufacturing Company (ESMC), a new microchip manufacturing plant planned in Dresden, Saxony, will be the first in the EU to produce high-performance chips, as announced by European Commission President Ursula von der Leyen during her visit to ESMC on August 20. The factory, designated a "first-of-a-kind facility" under the European Chips Act, will manufacture chips using FinFET (fin field-effect transistor) technology, which enhances performance while reducing energy consumption. The European Commission has approved €5 billion in German state aid to support the construction and operation of the plant. ESMC, a joint venture between Taiwan Semiconductor Manufacturing Company (TSMC), Bosch, Infineon, and NXP, aims to reach full capacity by 2029, producing 480,000 wafers per year for automotive and industrial applications. The plant supports the EU's goal of doubling its global market share in microchips to 20% by 2030. The Chips Act has already attracted approximately €115 billion in investments.


LATEST FROM NAAIA

  • Big Data & AI Event: join us at our booth during the Big Data & AI event in Paris on October 15th & 16th to discuss AI governance and compliance!
  • Human Vulnerability: our blog article deciphers the notion of human vulnerability and explains how the various AI regulations applicable around the world, including the AI Act, place the human at the centre and respond to these concerns through regulation.
  • AI and Finance: this article explores the different use cases for AI in the financial sector and the associated risks, and details the impact of the AI Act and other AI regulations around the world.


Interested in learning more? Subscribe to the newsletter and read our blog.
