Key AI Developments in November 2024

Keeping up with developments in AI legislation, regulation, investigations, penalties, legal action, and incidents is no easy task, which is why we have created Tracked, a bite-size version of our global AI Tracker. Each month, Holistic AI's policy team rounds up the key responsible AI developments from around the world to keep you up to date with the ever-evolving landscape.

Create a free account or log in to the Tracker to stay on top of AI Governance developments in real time.


Europe

1. First draft of the EU General-Purpose AI Code of Practice published

  • On 14 November 2024, the first draft of the Code of Practice for providers of general-purpose AI models, including those posing systemic risk, was published. It was developed through collaboration across four Working Groups focused on transparency, risk identification, technical risk mitigation, and governance.
  • The Code aligns with the EU AI Act, which came into force on 1 August 2024, with the final version of the Code expected by 1 May 2025. The goal is to ensure the responsible deployment of AI models while adhering to EU principles and international standards.
  • The draft is built around a set of guiding principles, including alignment with Union values, proportionality to risks and provider capacities, support for AI safety, and future-proofing measures to adapt to upcoming AI developments.
  • The Code identifies a range of systemic risks such as cybersecurity threats, loss of control over powerful AI systems, and the potential for large-scale manipulation, including disinformation and election interference. Providers are required to continuously assess and mitigate these risks throughout the lifecycle of AI models, ensuring that risk management strategies are proportional to the severity of identified threats.
  • Copyright compliance is a central focus of the draft Code. Providers must implement clear copyright policies in line with EU laws and conduct due diligence on data sources used in AI training.
  • Additionally, the Code introduces an Acceptable Use Policy (AUP), which outlines the conditions under which AI models can be used, including security and privacy measures, as well as the processes for monitoring and enforcing compliance. Providers are also encouraged to make their copyright and usage policies transparent and accessible to the public.


2. EU adopts Implementing Regulation for transparency reporting under the Digital Services Act

  • On 4 November 2024, the European Commission adopted an Implementing Regulation that standardizes transparency reporting for providers under the Digital Services Act (DSA). The regulation harmonizes the format, content, and reporting periods for transparency reports, aiming to improve consistency and comparability in content moderation practices.
  • Providers of intermediary services, hosting services, online platforms, very large online platforms (VLOPs), and very large online search engines (VLOSEs) are required to follow two standardized templates set out in the regulation: a Quantitative Template for machine-readable data on content moderation and a Qualitative Template for descriptive information on moderation practices (a simplified illustration of the machine-readable format follows this list).
  • The reporting periods for these providers vary: VLOPs and VLOSEs must report biannually (January-June and July-December), while other providers must report annually (January-December). Reports must be made publicly available within two months after each reporting period.
  • Transition periods are in place to align reporting timelines across all providers, with full implementation of the templates beginning on 1 July 2025. The first fully harmonized reporting cycle will begin in 2026, ensuring uniformity in the transparency of content moderation practices across the EU.
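To make the idea of a machine-readable Quantitative Template more concrete, here is a minimal sketch of how a single content moderation record could be represented as structured data. The field names and values below are assumptions for illustration only; the binding field definitions are those set out in the Implementing Regulation and its annexes.

```python
# Illustrative only: a hypothetical machine-readable content moderation record.
# The field names are assumptions, not the Commission's official template.
from dataclasses import dataclass, asdict
import json


@dataclass
class ModerationActionRecord:
    reporting_period: str    # e.g. "2025-01/2025-06" for a VLOP biannual cycle
    service_type: str        # e.g. "online_platform", "vlop", "vlose"
    restriction_type: str    # e.g. "content_removal", "visibility_reduction"
    basis: str               # whether the action relied on law or terms of service
    detection_method: str    # e.g. "automated", "user_notice", "trusted_flagger"
    member_state: str        # EU Member State concerned
    action_count: int        # number of actions of this kind in the period


record = ModerationActionRecord(
    reporting_period="2025-01/2025-06",
    service_type="vlop",
    restriction_type="content_removal",
    basis="terms_of_service",
    detection_method="automated",
    member_state="DE",
    action_count=12450,
)

# Serialize to JSON, the kind of machine-readable output a standardized
# quantitative template is intended to produce consistently across providers.
print(json.dumps(asdict(record), indent=2))
```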


3. European Patent Institute (EPI) publishes guidelines on the use of generative AI in patent attorneys' work

  • On 16 November 2024, the EPI released guidelines emphasizing that members must adhere to high standards of probity, confidentiality, and client interests when using generative AI models in patent work, and they should be cautious of infringing on third-party intellectual property rights.
  • Members are required to understand the general characteristics of the AI models they use, particularly regarding confidentiality and the possibility of errors (e.g., "hallucinations") in AI-generated content. They must ensure that confidentiality is preserved for any content shared with AI models, especially when dealing with sensitive client data.
  • While using generative AI, members retain full responsibility for the quality of their work. They must thoroughly check AI-generated content for errors and ensure that it meets professional standards before presenting it to clients or authorities.
  • Members must clarify with their clients in advance whether they consent to the use of generative AI in their cases and must comply with all relevant legal, ethical, and reporting requirements. Furthermore, when determining fees for AI-generated work, they should ensure charges are fair and reflect the level of difficulty and the risks involved.

 

4. GEMA sues OpenAI for copyright infringement over use of song lyrics in AI training

  • On 13 November 2024, GEMA, the German collecting society for composers and music publishers, filed a lawsuit against OpenAI and OpenAI Ireland Ltd., accusing them of unlicensed use of song lyrics to train the ChatGPT model.
  • The lawsuit claims that ChatGPT reproduces song lyrics when prompted, despite other internet services paying licensing fees for using authors' texts, which GEMA argues OpenAI deliberately circumvents by not compensating rights holders.
  • This lawsuit is the first of its kind globally, initiated by a major rights organization to challenge the use of copyrighted material in generative AI, and aims to clarify legal questions about AI training and copyright infringement.
  • At the heart of GEMA's case is the argument that OpenAI disregarded the opt-out declared on behalf of its members at GEMA's 2022 general meeting; GEMA also disputes the applicability of the text and data mining exception under German and EU copyright law.

 

5. Bank of England and FCA Publish 2024 AI and Machine Learning Survey results

  • On 21 November 2024, the Bank of England and the Financial Conduct Authority (FCA) published the Artificial Intelligence and Machine Learning Survey 2024, highlighting the growing adoption of AI in financial services. 75% of firms reported using AI, with an additional 10% planning to implement AI within the next three years, marking a significant increase from previous surveys.
  • The survey emphasizes the importance of accountability and data governance in AI usage. Firms are encouraged to designate responsible teams or individuals for AI systems, ensuring strong data management practices, particularly in areas such as privacy, security, ethics, and bias. 84% of firms reported having accountable persons overseeing AI frameworks.
  • Firms are urged to improve their understanding of AI technologies, particularly third-party models, with 46% reporting only a partial understanding of the AI systems they employ. Proper assessments of AI model complexity, accuracy, and operational efficiency are recommended to manage risks effectively.
  • The survey also reveals that data-related risks, such as privacy, quality, and security, are top concerns for firms using AI. However, the perceived benefits of AI are growing, especially in areas like data analytics, anti-money laundering, fraud prevention, and cybersecurity, with expected increases in operational efficiency and productivity over the next three years.


6. UK's AI Assurance and Responsible Management Initiatives: Report and Tool

  • On 6 November 2024, the UK Department for Science, Innovation and Technology (DSIT) released the "Assuring a Responsible Future for AI" report, outlining key action plans to support the responsible growth of the AI assurance market in the UK.
  • As part of the report, DSIT introduced an AI Assurance Platform to serve as a central hub for AI assurance tools and resources. The platform will include the AI Essentials Toolkit, designed to help businesses, particularly SMEs, engage with AI assurance and implement best practices.
  • The report outlines a plan to collaborate with industry experts in creating a "Roadmap to trusted third-party AI assurance," which will foster the growth of a reliable market for independent AI assurance providers, increasing trust in AI technologies.
  • On the same day, DSIT unveiled the AI Management Essentials (AIME) tool—a self-assessment resource to help businesses establish responsible AI management practices. AIME is based on international frameworks and offers practical recommendations through a self-assessment questionnaire focused on internal processes, risk management, and stakeholder communication.

 

7. UK Releases Model for Responsible Innovation to Guide Ethical AI Development

  • On 14 November 2024, the Model for Responsible Innovation was released by DSIT’s Responsible Technology Adoption Unit (RTA) to help both public and private sector teams develop and deploy AI and data-driven technologies responsibly, ensuring ethical practices and fostering trust in AI systems.
  • The Model provides a practical framework for identifying and addressing risks associated with AI development, guiding organizations through principles like fairness, transparency, accountability, and societal well-being to ensure their systems have a positive social impact and align with ethical standards.
  • It outlines eight key Fundamentals, including transparency, fairness, and privacy, to help teams build trustworthy AI systems that protect individual rights and deliver benefits to society while ensuring the systems are secure, reliable, and human-centered.
  • The Model also highlights six essential Conditions, such as meaningful stakeholder engagement, effective governance, and robust technical design, that must be in place to support the Fundamentals, helping organizations manage risks and comply with legal and ethical requirements throughout the lifecycle of their AI projects.

US

8. Raw Story Media, Inc. and AlterNet Media, Inc. v. OpenAI: Court dismisses DMCA claim but allows Amended Complaint

  • On 7 November 2024, the US District Court, Southern District of New York ruled in favor of OpenAI, dismissing a lawsuit filed by Raw Story Media and AlterNet Media, which alleged violations of the Digital Millennium Copyright Act (DMCA) and sought monetary and injunctive relief for the removal of copyright management information (CMI) from OpenAI’s training sets.
  • The Plaintiffs claimed that OpenAI removed crucial CMI, such as author names and titles, when assembling its training datasets, in violation of DMCA Section 1202(b). However, the court clarified that Section 1202 does not give copyright owners control over future versions of their works and that OpenAI could reproduce or create derivative works without liability provided the CMI was left intact.
  • The court found that the Plaintiffs did not show actual injury from the alleged DMCA violation, particularly in terms of monetary damages or the risk of future harm, as the likelihood of ChatGPT generating plagiarized content from their articles was minimal.
  • While the case was dismissed, the court allowed the Plaintiffs to amend and refile their complaint, recognizing that their main grievance was OpenAI’s use of their works without compensation, a matter not addressed by Section 1202(b) of the DMCA.


9. Department of Homeland Security releases framework on AI roles and responsibilities in critical infrastructure

  • On 14 November 2024, the Department of Homeland Security (DHS) introduced the "Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure", designed to promote the safe and responsible deployment of AI across U.S. critical sectors.
  • Developed through extensive collaboration with stakeholders across the AI supply chain, the Framework includes input from cloud providers, AI developers, critical infrastructure operators, civil society, and public sector entities.
  • The Framework sets out voluntary responsibilities for securing data, ensuring robust AI model design, and fostering human oversight in critical infrastructure applications, while focusing on responsible deployment and security measures.
  • Emphasizing risk mitigation and continuous monitoring, the Framework calls for enhanced transparency, accountability, and ongoing research to address emerging AI risks and safeguard national infrastructure.


10. Eliminating Bias in Algorithmic Systems (BIAS) Act: US federal legislation introduced

  • The Eliminating Bias in Algorithmic Systems (BIAS) Act, introduced on 1 November 2024, mandates that each federal agency using, funding, or overseeing AI establish a Civil Rights Office to identify, prevent, and address algorithmic bias and discrimination.
  • These Civil Rights Offices must submit biennial reports to Congress, detailing the risks posed by algorithmic systems, the actions taken to mitigate these risks, and recommendations for further legislative or administrative measures.
  • The Act establishes an interagency working group, led by the Department of Justice, to coordinate efforts and share best practices across federal agencies to protect civil rights in AI systems.
  • The BIAS Act aims to address algorithmic bias in critical sectors like healthcare, finance, and law enforcement, where AI systems have disproportionately harmed marginalized communities, such as through discriminatory facial recognition or risk assessment tools.


11. NIST AI 100-4 Report on Reducing Risks from Synthetic Content Released

  • On 20 November 2024, NIST released a report evaluating existing and emerging standards, tools, and practices to manage risks from AI-generated content, focusing on authentication, detection, labeling, and mitigation of harms.
  • The report examines digital transparency techniques, such as provenance data tracking using watermarking and metadata, to verify the origins and modifications of content and ensure its authenticity and integrity (a simplified sketch of the metadata approach follows this list).
  • It addresses challenges in preventing AI misuse, including the creation of child sexual abuse material (CSAM) and non-consensual imagery, while identifying gaps in current approaches and limitations in tool robustness and effectiveness.
  • The report contextualizes technical solutions within the AI lifecycle, reviewing the impacts of synthetic content based on creators, dissemination methods, and societal costs, while emphasizing that harms vary widely and require comprehensive mitigation strategies.
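As a rough illustration of the provenance metadata idea discussed above, the sketch below pairs a piece of content with a manifest recording its origin and a hash of its bytes, which a verifier can later recompute to detect modification. This is a simplified stand-in under assumed manifest fields, not the specific mechanisms (such as C2PA or watermarking schemes) that the NIST report evaluates.

```python
# Minimal sketch of provenance metadata checking: pair content with a manifest
# recording its origin and a cryptographic hash, then verify that the content
# still matches the manifest. Real provenance standards add signatures and
# edit histories; this only shows the core idea. Field names are assumptions.
import hashlib
import json


def make_manifest(content: bytes, generator: str, created_at: str) -> dict:
    """Record where the content came from and a hash of its bytes."""
    return {
        "generator": generator,      # e.g. the AI system that produced it
        "created_at": created_at,    # ISO 8601 timestamp (assumed field)
        "sha256": hashlib.sha256(content).hexdigest(),
    }


def verify(content: bytes, manifest: dict) -> bool:
    """Return True if the content is unmodified relative to the manifest."""
    return hashlib.sha256(content).hexdigest() == manifest["sha256"]


original = b"an AI-generated image, represented here as raw bytes"
manifest = make_manifest(original, generator="example-image-model",
                         created_at="2024-11-20T00:00:00Z")

print(json.dumps(manifest, indent=2))
print("unmodified:", verify(original, manifest))               # True
print("tampered:  ", verify(original + b" edited", manifest))  # False
```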


12. X Sues California to block Assembly Bill 2655 on Deepfake Election Content

  • On 14 November 2024, X (formerly Twitter) filed a lawsuit against California to block the "Defending Democracy from Deepfake Deception Act" (AB 2655), set to take effect on 1 January 2025. The law requires platforms to label or remove AI-generated deceptive election content known as deepfakes.
  • X argues the law violates First Amendment protections, claiming it could lead to over-censorship by encouraging platforms to remove legitimate election content out of caution to avoid penalties.
  • The lawsuit highlights that X already has policies addressing manipulated media while allowing exceptions for satire, memes, and commentary. It asserts these policies strike a balance between protecting users and safeguarding free expression.
  • Despite the law's exemption for parody and satire, X contends that determining intent is challenging, increasing the risk of misjudgment and limiting political discourse ahead of elections.


13. College student threatened by AI chatbot

  • This month, a Michigan college student was told to die by Google's Gemini chatbot while seeking help with his homework.
  • Despite Gemini's safety filters, which are intended to block harmful outputs, the response also called the student a "waste of time and resources" and a "burden on society".
  • In a statement to CBS News, Google said that large language models can sometimes produce nonsensical outputs, even with built-in safeguards and policies in place.
  • This is one of hundreds of generative AI safety incidents that have occurred in recent years.


14. US Introduces TRAIN Act (Transparency and Responsibility for Artificial Intelligence Networks Act) to Protect Copyrights in AI Training

  • On 25 November 2024, the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act was introduced to help creators, including musicians, artists, and writers, access the courts to protect copyrighted works used to train generative AI models without consent or compensation.
  • The TRAIN Act allows copyright holders to access training records used for AI models to determine if their works were used, similar to the process used in internet piracy cases. This aims to address the lack of transparency in AI model training.
  • The bill seeks to solve the "black box" issue by providing creators with the ability to know when and how their works are being used to train AI models, a process currently not required by law for AI companies to disclose.
  • The TRAIN Act grants copyright holders the ability to issue subpoenas to AI model developers or deployers, requiring them to disclose training records. If they fail to comply, it creates a rebuttable presumption that the copyrighted work was used in the training process.

Global

15. Launch of the International Network of AI Safety Institutes (INASI) and key developments

  • On 20 November 2024, the U.S. Department of Commerce and the U.S. Department of State launched the International Network of AI Safety Institutes (INASI) to promote global cooperation on AI safety.
  • The initiative focuses on addressing AI risks while fostering innovation, with a mission to advance scientific understanding of AI safety and develop best practices for testing and evaluation.
  • The inaugural event in San Francisco brought together government, industry, academic, and civil society representatives to lay the foundation for international collaboration on AI safety.
  • Over $11 million in funding was committed to research on mitigating risks from synthetic content, including contributions from the U.S., Australia, and the Republic of Korea.
  • INASI conducted its first joint testing of Meta’s Llama 3.1 model, providing insights into AI safety testing and informing future evaluations. The Network also proposed a six-pillar framework for AI risk assessments to align global safety practices.
  • The U.S. introduced the Testing Risks of AI for National Security (TRAINS) Taskforce to address national security risks posed by advanced AI systems. The Taskforce, involving agencies like the Department of Defense and NSA, will focus on coordinating research and testing across critical areas such as cybersecurity and infrastructure.


16. Asian News International sues OpenAI for alleged unauthorized use of copyrighted content to train ChatGPT in India

  • On 18 November 2024, Asian News International (ANI) filed a lawsuit against OpenAI in the Delhi High Court, accusing the company of using its copyrighted content to train ChatGPT without proper licensing or permission.
  • OpenAI faces its first copyright lawsuit in India. ANI alleges that OpenAI scraped its news content, reproduced it verbatim or in a substantially similar form, falsely attributed responses to ANI, and failed to block unauthorized access, damaging its reputation and risking misinformation.
  • OpenAI responded by emphasizing its use of publicly accessible data and adherence to fair use principles, while also highlighting ongoing collaborations with news organizations worldwide, including in India.
  • The next hearing is set for 28 January 2025, with the judge directing OpenAI to provide further clarification on the accusations.
  • ANI’s lawsuit follows similar actions by other news organizations, such as The New York Times and The Chicago Tribune, which have also taken legal action against OpenAI for similar reasons.


17. Spain Proposes Draft Royal Decree for Collective Licenses in AI Training

  • On 19 November 2024, the submission period for public feedback on Spain’s proposed Royal Decree began. The decree aims to regulate the use of copyrighted works for training AI models, particularly for general-purpose AI (GPAI) development.
  • The draft decree introduces extended collective licenses, enabling collective management entities to grant non-exclusive authorizations on behalf of rights holders, simplifying the process of obtaining permissions for AI use.
  • It ensures equal treatment for all rights holders, allowing them to exclude their works from the collective license at any time, with proper public notification before use.
  • Based on the EU Copyright Directive (2019/790), this draft could set a precedent for other EU countries to follow, influencing AI copyright practices across the region.
  • Public feedback is open until 10 December 2024.

 

18. Singapore’s Minister for Manpower clarifies the applicability of existing laws to automated employment decision tools

  • On 13 November 2024, Patrick Tay Teck Guan, Singapore's Minister for Manpower, provided an oral answer to a Parliamentary Question on the regulation of automated employment decision tools (AEDT).
  • Tay clarified that regardless of the technology they use to make employment decisions, employers must comply with the Tripartite Guidelines on Fair Employment Practices, which govern non-discrimination in employment practices.
  • Suspected cases of AI-driven discrimination can be referred to the Tripartite Alliance for Fair and Progressive Employment Practices (TAFEP), which will work with the employer to address the grievances of the candidates or employees.
  • Tay also confirmed that TAFEP had not yet received any such complaints.


19. Legal Disputes Over AI Usage in Academia: India vs. US

  • Kaustubh Anil Shakkarwar, a Master of Laws student and practicing advocate, sued OP Jindal Global University (JGU) after being penalized for allegedly using AI in his exam. The university claimed 88% of his answers in the "Law and Justice in the Globalizing World" course were AI-generated.
  • The petitioner failed to disclose to the court a 13 October 2024 email from the university's Registrar outlining the accommodation made for him: the matter had already been resolved internally, with his internal assessment marks restored and his final grade updated to an A+.
  • After reviewing the email and an unofficial transcript, the court confirmed that the university had addressed the petitioner’s primary concern and dismissed the lawsuit. The petitioner’s counsel later acknowledged that the issue had become moot.
  • In contrast, this month, a judge in Massachusetts ruled in favor of a high school that gave a student a low grade for using AI to complete an assignment.
  • The lawsuit filed by the student’s parents claimed that the school’s policies did not explicitly prohibit the use of AI, but the court ruled that the disciplinary action taken by the school was not disproportionate to the violation.

 

20. India Releases Draft for Robustness Assessment and Rating of AI Systems in Telecom and Digital Infrastructure

  • On 25 November 2024, the Telecom Engineering Centre (TEC) released a voluntary draft standard for assessing and rating the robustness of AI systems in telecom networks and digital infrastructure, with public feedback invited until 15 December 2024.
  • The draft focuses on evaluating AI robustness in critical applications, addressing metrics like resilience to data shifts, integrity, reliability, explainability, transparency, privacy, and security while proposing a framework to identify vulnerabilities and mitigation strategies.
  • A three-tier rating system (high, medium, and low risk) is suggested to quantify and benchmark AI systems' robustness, enhancing trust and safety for telecom operators, developers, and policymakers; a simplified sketch of how such a rating might be computed follows this list.
  • This draft builds on TEC's earlier "Fairness Assessment and Rating of Artificial Intelligence Systems" (July 2023) and adopts a risk-based approach similar to the EU AI Act to ensure AI systems' reliability and security in critical applications.
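As a simplified sketch of how a three-tier robustness rating might be operationalized, the snippet below combines a few per-metric scores into a weighted average and maps the result to a high, medium, or low tier. The metric names, weights, and thresholds are assumptions for illustration; the TEC draft defines its own assessment procedure and criteria.

```python
# Illustrative three-tier robustness rating. Metrics, weights, and thresholds
# are assumptions for illustration only, not the TEC draft's methodology.
def robustness_tier(scores: dict[str, float], weights: dict[str, float]) -> str:
    """Combine per-metric scores (0-1, higher is better) into a weighted
    average and map it to a high/medium/low robustness tier."""
    total_weight = sum(weights.values())
    overall = sum(scores[m] * w for m, w in weights.items()) / total_weight
    if overall >= 0.8:
        return "high robustness (low risk)"
    if overall >= 0.5:
        return "medium robustness (medium risk)"
    return "low robustness (high risk)"


scores = {
    "resilience_to_data_shift": 0.72,
    "reliability": 0.85,
    "explainability": 0.60,
    "privacy_and_security": 0.78,
}
weights = {metric: 1.0 for metric in scores}  # equal weighting, for illustration

print(robustness_tier(scores, weights))  # -> "medium robustness (medium risk)"
```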


Holistic AI updates

This month, we were thrilled to launch the Holistic AI Tracker 2.0, a free, interactive knowledge hub for tracking AI governance developments around the world.

New for 2.0 is the Atlas, a heat-mapped view of the world covering AI legislation, regulation, legal action, standards, guidance, incidents, investigations, and penalties globally.

This tool is for you whether you work in product, compliance, legal, policy, procurement, or data privacy, or you're just curious about the state of play. Create a free account or log in to the Tracker to stay on top of AI Governance developments in real time.


We also enjoyed attending the Web Summit in Lisbon, where our Co-Founder Adriano Soares Koshiyama joined Ryan Browne, Shelley McKinley, and Tasos Stampelos to discuss Navigating the EU AI Act.


We also published our paper on AI and perceived existential risk, open access in AI and Ethics. Co-authored by Airlie Hilliard, Emre Kazim, and Stephan Ledain, the paper discusses the positive and negative ways that AI taking over can be conceptualized, factors driving perceptions of AI as an existential risk, and ways to reduce threat perceptions. We also explore the misconception of AI as robots and disentangle the technologies. Give it a read here.


Finally, we're really proud to share that Holistic AI was also named a Cool Vendor for AI Security by Gartner!


