Key AI developments in February 2024

Keeping up with developments in AI legislation, regulation, investigations, penalties, legal action, and incidents is no easy task, which is why we have created Tracked, a bite-size version of our global AI Tracker. Each month, Holistic AI's policy team provides you with a roundup of the key responsible AI developments around the world from the past month to keep you up to date with the ever-evolving landscape.


Europe

1. Presidency of the Council sends EU AI Act draft to the Coreper I

  • On 9 December 2023, a provisional agreement on the EU AI Act was reached, marking the end of intense Trilogue negotiations for the AI Act
  • Additional progress was made on 26 January 2024, when, after some fine-tuning of the details, the Presidency of the Council sent a draft prepared according to this political agreement to Coreper I (Committee of Permanent Representatives), a preparatory body for the Council
  • The draft was unanimously endorsed on 2 February by the Coreper I, which is composed of deputy permanent representatives of the EU Member States
  • While not an official Council approval, this unanimous approval by the Coreper I often signals an imminent formal nod, bypassing further debate unless a Member State intervenes

2. EU AI Act endorsed at the Committee level

  • Following the Coreper I endorsement, MEPs of the Internal Market and Civil Liberties Committees voted to endorse the provisional agreement on the EU AI Act on 13 February
  • Next steps are adoption by the European Parliament, which is expected on 10-11 April 2024, and then ministry-level approval from the European Council. Check out the timeline below for next steps.

3. DSA fully enforced from 17 February 2024

  • The Digital Services Act (DSA) aims to ensure safe online environments for users by introducing comprehensive accountability and transparency regulations for digital services and platforms operating in the European Union
  • The DSA imposes special obligations, including audits, on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to ensure safe online environments; these obligations came into effect in late August 2023
  • Notwithstanding the earlier enforcement date for designated VLOPs and VLOSEs, the DSA became fully enforceable from 17 February 2024, from when the rules apply to all regulated entities within the scope of the law. See the timeline for enforcement below

4. UK publishes Introduction to AI Assurance

  •  On 12 February, the UK government’s Department for Science, Innovation & Technology (DSIT) published a guide on AI Assurance to act as an accessible resource for industry and regulators on how to build and deploy AI responsibly
  • The guide sets out an AI Assurance Toolkit to help organizations evaluate their systems and align them with relevant regulatory principles
  • The toolkit sets out mechanisms for measurement, evaluation, communication, and assurance processes such as risk assessments and bias audits, as well as relevant standards
  • The Introduction to AI Assurance can be seen as complementary to the CDEI/DSIT AI Assurance Portfolio, which provides a catalogue of assurance tools for those designing, developing, deploying, or procuring AI tools

United States

5. Federal Artificial Intelligence Environmental Impact Act 2024 proposed

  • Introduced on 1 February 2024, the Environmental Impacts Act mandates the Administrator of the Environmental Protection Agency (EPA) to conduct a comprehensive study, in collaboration with relevant authorities, on the environmental effects of AI within two years of the bill's enactment. The study results will be submitted to Congress and made public
  • Additionally, the Director of NIST is required to establish a consortium to examine these environmental impacts and develop a voluntary reporting system for entities to report on the environmental effects of AI
  • Guidelines will be established for reporting entities, covering aspects such as energy consumption, water usage, pollution, and electronic waste associated with the entire life cycle of AI models and hardware
  •  These guidelines will also consider both positive and negative impacts of AI use, as determined by the Director

6. Oklahoma introduces the Ethical Artificial Intelligence Act

  •  Similar to the Rhode Island Ethical AI Act (H7521) and Illinois’ HB5322, Oklahoma’s HB 3835 – first read on 5 February – introduces regulations concerning the use of artificial intelligence, particularly automated decision tools, mandating deployers and developers to conduct annual impact assessments
  • Developers must disclose limitations and data used in automated decision tools to deployers and must publicly share policies on automated decision tools and their management of discrimination risks
  • The law would be enforced by the Attorney General, with a 45-day notice period for alleged violations, although harmed parties can file complaints and civil actions against deployers for algorithmic discrimination
  • Small-scale deployers with fewer than fifty employees are exempt unless their automated decision tools impact more than 999 people annually

7. Protecting Innovation in Investment Act SB3735 introduced

  • On 6 February 2024, federal bill SB3735 was introduced to override the U.S. Securities and Exchange Commission’s proposed rule restricting the use of AI technology in investing
  • Senators argue that the SEC should first prove that it can handle tech before introducing restrictions, given that the rule would cover everything from simple spreadsheets to AI, making compliance challenging
  • The Protecting Innovation in Investment Act aims to prevent the SEC from enforcing this rule or any similar ones, keeping investing accessible and affordable

8. Federal Communications Commission makes AI-Generated Voice Calls in Robocalls Illegal

  • On 8 February 2024, the FCC ruled that calls using AI voice cloning technology in common robocall scams targeting consumers are illegal, effective immediately
  • Here, calls made with AI-generated voices are considered “artificial” under the Telephone Consumer Protection Act (TCPA)
  • Consequently, calls employing AI-generated voices fall under the TCPA and therefore require prior express consent from the called party except in cases of emergency or exemption
  • Hefty fines in excess of $23,000 per call can be issued for non-compliance and call recipients are given the right to pursue legal action, which could see them receive compensation of up to $1,500 for each unwanted call

9. USPTO releases Guidance for AI Assisted Inventions 

  • Following Biden’s executive order on AI in October 2023, the United States Patent and Trademark Office (USPTO) issued guidance on 12 February on patenting AI-assisted inventions, stating that the use of AI systems does not negate human inventorship
  • Instead, a natural person using an AI system is an inventor if the natural person makes a “significant contribution” to the invention, applying the standard for joint inventorship, and recognizes and appreciates the invention
  • However, each patent claim must have a human inventor, meaning that inventions created with AI alone cannot be patented if there is not a substantial contribution from a human

10. Judge dismisses claims in the Tremblay v. OpenAI case

  • On 12 February, authors claiming copyright infringement against OpenAI for using their works to train the GPT model had their claims of vicarious copyright infringement, DMCA violations, negligence, and unjust enrichment dismissed due to a lack of substantial evidence
  • The only remaining claim allowed by the Court was that the Defendants used the Plaintiffs’ copyrighted works to train ChatGPT without their authorization for a commercial purpose
  • The authors have been granted the opportunity to amend their claims by 13 March 2024 

11. Mobley v. Workday - Amended Complaint filed

  • On 20 February 2024, Derek Mobley filed an amended complaint in the ongoing lawsuit against Workday following a dismissal due to insufficient evidence
  • The plaintiff accuses Workday's algorithm-based applicant screening tools of discriminating based on race, age, and disability, specifically against African American applicants, applicants with disabilities, and applicants over 40
  • The lawsuit alleges intentional employment discrimination, disparate impact discrimination, age discrimination, violation of 42 U.S.C. § 1981, and aiding and abetting discrimination
  • Mobley’s amended complaint contends that Workday acts as an agent for client-employers, controlling access to job opportunities through its AI tools.

12. California introduces the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

  • On 7 February 2024, Senator Scott Wiener introduced SB1047 in the California State Legislature.
  • SB1047 defines "covered AI models" subjected to its regulations based on specific criteria, including the level of computing power used in training or similarity to state-of-the-art models.
  • Developers must conduct safety assessments before training covered AI models, ensuring they do not pose risks like enabling mass casualties or causing significant infrastructure damage.
  • Developers are mandated to establish written safety protocols for third-party testing, ensuring independent verification of the safety of their AI models.
  • The bill requires developers to implement shutdown capabilities for AI models lacking positive safety determinations. They must also submit annual certifications confirming compliance with the Act's requirements.

13. Connecticut introduces an Act on Artificial Intelligence

  • Senator Martin Looney introduced S.B. No. 2 on 7 February 2024 to implement comprehensive regulations governing the development, deployment, and use of specific artificial intelligence systems.
  • The Bill aims to create an Artificial Intelligence Advisory Council, while also prohibiting the dissemination of certain synthetic images and deceptive media related to elections.
  • The proposed legislation includes measures mandating state agencies to explore generative AI applications, ensuring training by the Commissioner of Administrative Services, integrating AI training into workforce programs, and establishing educational initiatives like the "Connecticut Citizens AI Academy" and related certificate programs.

14. US Congress launches a Bipartisan Task Force on AI

  • On 20 February, Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the creation of a bipartisan Task Force on Artificial Intelligence (AI) in Congress.
  • The task force aims to explore ways to ensure the U.S. maintains a leadership role in AI innovation while addressing potential threats.
  • The task force will consist of twelve members from both parties, representing key committees, and will produce a comprehensive report with guiding principles and bipartisan policy proposals
  • There will be a focus on encouraging innovation, safeguarding national security, and establishing necessary guardrails for the development of safe and trustworthy AI technology.

15. NIST announces the US AI Safety Institute Consortium; issues a Request for Information (RFI) on its mandate pursuant to the Biden-Harris Executive Order

  • The US National Institute of Standards and Technology (NIST) announced at the start of February the inaugural members of the AI Safety Institute Consortium (AISIC) – a multistakeholder group of over 200 entities that will work with NIST to collaboratively develop guidelines and best practices to further trustworthy AI.
  • Additionally, NIST sought stakeholder input on its mandate established by the Biden-Harris Executive Order on AI, with the comment period closed as of 2 February.

Global

16. US and China agree to cooperate on responsible AI development

  • On 24 February, it was announced that the US and China have agreed to discuss responsible AI development due to shared concerns, including AI's potential to disrupt democracy, cause job displacement, and raise risks in military applications
  • China has shown willingness to engage in international collaboration on AI safety and governance, with bilateral talks expected to cover military applications, data privacy, and ethical AI use
  • The talks are reportedly set to happen in the spring; success will depend on finding common objectives and fostering genuine cooperation to mitigate AI risks and ensure societal benefits

17. ASEAN publishes guidelines on AI ethics

  • On 2 February 2024, the Association of Southeast Asian Nations (ASEAN) released a guide on AI governance and ethics to empower organizations and governments in Southeast Asia to navigate the landscape of artificial intelligence responsibly
  • The guide outlines three fundamental objectives for responsible AI development: providing practical guidance on designing, developing, and deploying AI systems; promoting consistency and collaboration in AI governance efforts between member states; and empowering policymakers to shape regulatory frameworks that promote responsible AI practices
  • It also recommends the establishment of an ASEAN Working Group on AI Governance, nurturing AI talent, promoting investment in AI startups, investing in AI research and development, promoting adoption of AI governance tools, and raising awareness among citizens
  • The guide emphasizes the need for collaborative efforts between governments, private sectors, and educational institutions to foster ethical AI development and adoption within the region

18. Korea announces its Personal Information Protection Committee Major Policy Implementation Plan

  • On 16 February, South Korea announced a major policy implementation plan aimed at safeguarding personal information, strengthening digital rights, fostering a data-driven economy, and leading global standards for personal information protection.
  • The plan establishes a comprehensive personal information protection system to address emerging risks and ensure safety in daily life, with privacy guidelines customized for AI learning
  • To support the data economy, the plan outlines the development of standards for the reasonable use of video information and promotes the development of synthetic data

19. Hong Kong company loses more than $25 million after deep fake scam

  • On 2 February, Hong Kong police reported that a Hong Kong branch of a multinational company lost $25.6 million (HK$200 million) after a deep fake scam
  • An employee fell victim to malicious instructions from a deepfake posing as the company’s CFO
  • This was the first case of its kind in Hong Kong, with every participant on the video call other than the victim being a deepfake

Holistic AI Policy Updates

We recently hosted our monthly Policy Hour Webinar with the National Institute of Standards and Technology’s (NIST) Martin Stanley on the NIST AI RMF and the Biden-Harris AI Executive Order. Give it a watch here.

We’re also proud to have submitted feedback in response to the NIST RFI on the executive order and are super pleased to have joined NIST’s AI Safety Consortium as inaugural members – we’re looking forward to working with other members to advance trustworthy AI!

Want to dive in?

Check out our blog for deeper insights on key AI developments around the world from our policy team.

Authored by Holistic AI’s Policy Team.



