Artificial Intelligence and the case for legislation
“The end of law is not to abolish or restrain, but to preserve and enlarge freedom. For in all the states of created beings capable of law, where there is no law, there is no freedom.”
John Locke, Second Treatise of Government, 1689
AI legislation is growing at a dizzying pace. The number of countries with laws containing the term “AI” jumped from 1 in 2016 to 37 in 2022 and 127 in 2023. Figures for 2024 are not available at the time of writing, but the count is expected to reach 175 by the time the year closes. So, I thought it would be interesting to look at what countries are doing and why, and what lies ahead.
Some reasons for the spike: AI, and its logical descendant generative AI, has not just become increasingly important for nations seeking to maintain competitive advantage, but has also become increasingly opaque, non-deterministic, reliant on free/public data, and vulnerable to malicious actors. (Note that much of this legislation predates generative AI’s take-off in late 2022.)
Legislation itself can be of several types, and nations aim for a happy medium between them. And of course, legislation is not to be confused with national policy (‘we will strive to/encourage…’) or guidelines (‘please use these for best results…’). There are linkages, though: good legislation needs good policy and good guidelines.
Legislation by nature is driven by principles and ethics. The strongest influences across national legislations are the UNESCO Recommendations, the OECD Principles, the UN SDGs and the Bletchley Declaration (how nice that Turing’s workplace finds a mention). The EU also cites sustainability and the Geneva Declaration as influences. In general, legislation aims to protect the following rights of citizens: the right to privacy, the right to fairness, the right to know (both that an AI system is being used and an explanation of how), the right to redress, the right to own property (in this case, IP) and its benefits, the right to equality (bias-free treatment), the right to critical public services, the right to true information (think ‘deepfakes’) and the right to understand (transparency).
But citizens’ rights are not the only consideration. Nations’ and governments’ interests matter too. So we add the right to independence, the right to economic growth, the right to protection of borders (safety of the state and its citizens), and the right to retaliate. These rights then manifest themselves as obligations of enterprises and individuals along the value chain. Depending on which country you are in, these can be some or all of the following: model registration, data rights (copyright-free data), transparency of AI systems and their data, bias-free decisions, accountability, prevention of harm to citizens and the state, protection of privacy, auditability, sustainability, record keeping, cybersecurity, robustness, transparency of output (labelling deepfakes), risk management plans, and AI awareness (for both provider and consumer). I should add that some countries also require specific structures and office holders. A nation’s policy and the accompanying AI guidelines go some way in fulfilling these obligations, but at one level, enterprises have a lot of work to do to be safe enough.
***
I have a clear intent to make this article neither exhaustive nor exhausting, so I have decided to talk about the interesting features of legislation in seven ‘nations’. These were chosen on the basis of GDP, degree of technological advancement, investment in AI in 2022-2024 and their current place in the global AI ecosystem. Also, my focus will be more on articles that impose obligations, rather than those that establish structures, processes or budgets (e.g. for AI literacy).
The EU: The EU AI Act is probably the most discussed AI legislation of late. AI systems are classified by the risk of the use case, and each risk level carries a set of obligations – indeed, the highest risk level carries no obligations at all, just an outright ban (Article 5). The next level is high risk – anything that threatens the health, safety or fundamental rights of citizens. The two lower levels carry obligations too – for example, AI-generated text needs to declare itself as such. In arguing that obligations and penalties should be proportional to the offence, the EU AI Act echoes Beccaria’s doctrine of legislation. But risk is not the only factor governing obligations. General Purpose AI (GPAI) models are also subject to obligations, with stricter ones once the underlying model passes a capability threshold – the current rule is that a model trained with more than 10^25 FLOPs of compute is presumed to pose systemic risk (Article 51). Actors in the value chain who are outside the EU are also subject to the Act (for example, if an output used in the EU but generated by a model in the USA or China is found to be harmful, the enterprise behind it is subject to penalty). There are also obligations for large platforms (Recital 118) – for example, if an AI system is embedded in a widely used search engine. The EU framework is also evolving – witness the recent AI Liability Directive proposal. There is also the EU-US Data Privacy Framework (DPF), which gives US companies some breathing space. The legislation is markedly relaxed for SMEs, for research initiatives (see this article) and for personal use. There is already evidence that AI will not be considered an inventor or co-inventor in some parts of the world. The Act is also finding echoes in other nations. I would recommend reading the following articles of the EU AI Act: 2, 3, 5, 6, 9, 16, 23, 25, 27, 53, 56, 99, 100 and 101.
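To get a feel for the 10^25 FLOPs threshold, here is a minimal back-of-the-envelope sketch in Python. It uses the common 6 × parameters × tokens approximation for transformer training compute – a community rule of thumb, not a formula the Act itself prescribes – and the model size in the example is purely illustrative.

```python
# Back-of-the-envelope check against the EU AI Act's systemic-risk
# threshold for GPAI models (Article 51: cumulative training compute
# greater than 1e25 FLOPs). The 6*N*D estimate is a common rule of
# thumb for dense transformers, not something the Act specifies.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_tokens

# Illustrative example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", flops > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS)
# ~6.3e24 FLOPs -> below the threshold, so no systemic-risk presumption.
```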
The USA: Unlike the EU, the USA prefers a bottom-up (sometimes called a patchwork quilt) approach to legislation. Strictly speaking, Federal rules can be imposed through various bodies such as the FTC, but for now regulation has largely been left to the states. Only 70 of the 488 AI bills introduced in the states have been enacted, a notable failure being California’s SB 1047. The Colorado AI Act and the two California AI Acts are three notable Acts that went through recently. Almost half the states have consumer privacy laws that regulate automated decision-making (ADM), and a few state (and local) statutes regulate particular AI applications (e.g., Illinois’s law on using AI to evaluate video interviews in hiring). California, New York, and Florida lead the United States in AI regulation, each introducing significant legislation that could serve as a model for other states. In California, the focus is on generative AI, autonomous vehicles, and public contracts for AI services. Florida’s proposals centre on AI transparency across various applications, including education, public AI use, social media, and autonomous vehicles. States to watch: California, Colorado, Rhode Island, Illinois, Connecticut, Virginia, and Vermont.
A Federal bill of interest is the Algorithmic Accountability Act, which is expected to go into effect in 2026. Also of interest are the AI Training Act and the Blueprint for an AI Bill of Rights.
An interesting article about the forces shaping US legislation can be found here.
China: Key laws include
1. The Algorithmic Recommendation Management Provisions, which ban AI-generated news and discriminatory data tags in recommendation systems, take cognizance of the needs of the elderly, and protect workers’ rights.
2. The Measures for the Management of Generative AI Services, which require organizations to use training data that is free of copyright violations, label AI-generated content as such, respect the image, reputation and privacy of individuals who may be affected, and uphold the core socialist values of the PRC (these include safeguarding the socialist system, preserving national unity and social stability, and preventing undesirable acts such as the spread of ethnic hatred, terrorism and violence, obscenity and fake information).
Above all, there is the AI Law of the People’s Republic of China (expected to come into effect in 2025), which requires developers and users to adhere to principles of transparency of intent and of the working of AI systems, and to build in safety and reliability. Article 23 is particularly interesting: “The State establishes and improves rules protecting the IPR of training data, algorithms, and AI-generated content.” – a marked deviation from the stance taken by Western nations.
Some interesting features: the State encourages and supports AI developers and providers in taking out suitable insurance products for AI products and services. AI can be used only as a reference in the judiciary and healthcare sectors (not as a decision maker). News AI, social bots, biometric recognition and autonomous driving are banned use cases. IP can be granted to AI-created content – as noted above, a deviation from Western practice. And content may be used to train AI if doing so does not conflict with the revenue-generation potential of that content.
The United Kingdom: The UK does not yet have AI-specific legislation in place and relies instead on sectoral legislation. This includes the Equality Act, the Consumer Protection Act, the Financial Services and Markets Act, the Consumer Rights Act, the National Security and Investment Act, the UK GDPR, and the Copyright, Designs and Patents Act. The UK is taking a soft stance as opposed to the EU; it aims to specify and promote safeguards by way of auditing frameworks (this document is strongly recommended reading!); penalties follow from the applicable legislation named above. There are also government-mandated guides for understanding AI ethics and safety, for using AI in the public sector, and for algorithmic transparency. Last but not least, there is the guide on explaining decisions made with AI, from the ICO and the Alan Turing Institute. In general, the UK’s approach is prescriptive about what organizations should do to avoid damages.
South Korea: The Basic AI Bill, 2024. The bill defines high-impact use cases for AI, such as healthcare, public safety and biometric analysis, and requires actors (developers, deployers) to conduct risk assessments and implement mitigation measures (Article 32); the same article also encourages influence assessments to evaluate impact on fundamental rights. Article 31 requires AI-generated content to be clearly labelled or watermarked (a minimal illustration of what such labelling might look like follows below). Unlike the EU, South Korea specifies not only fines but also imprisonment of up to three years among its penalties. The bill now awaits clearance from the Judicial Committee and is expected to go into effect in 2025.
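As an illustration of what an Article 31-style labelling obligation might look like in practice, here is a minimal Python sketch. The disclosure wording, field names and function are all my own hypothetical choices – the bill mandates the outcome (clear labelling), not any particular format.

```python
# Illustrative only: one simple way a provider might attach a
# machine-readable disclosure label to AI-generated content.
# Field names and label text are hypothetical, not from the bill.

import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> dict:
    """Wrap generated text with a provenance record and a human-readable disclosure."""
    return {
        "content": text,
        "disclosure": "This content was generated by an AI system.",
        "provenance": {
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_output("Quarterly summary...", model_name="example-llm-v1")
print(json.dumps(record, indent=2))
```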
Other legislation being augmented or created to regulate AI systems includes the Fair Hiring Procedure Act (inform people when AI is used in hiring); the creation of the PIPC (which can request audit trails from a provider if personal information is leaked by an AI system); the Content Industry Promotion Act (mandatory disclosure that content was generated using AI technology); the Public Official Election Act (prohibits AI systems from commenting on elections or election results and from manipulating polls by outputting false information); the Artificial Intelligence Liability Act (2023), which defines responsibilities for AI developers and users and creates rights for AI users; and the Copyright Act (defines the scope of permissible use of content for training AI models). Each of these is in process and should go through by 2025.
Brazil: Bill 2338/2023. This bill is in line with the EU AI Act and is expected to go through in 2025. To establish the risk of an AI system, the bill requires that systems undergo a preliminary assessment by the supplier before they are placed on the market or deployed, with the assessment documented and registered to ensure accountability and liability. Where the assessment identifies a high level of risk, an algorithmic impact assessment is required in addition to governance measures. Such systems must implement transparency and data management measures, follow data protection legislation, establish procedures for training, testing and validating system results, and put information security measures in place. They must also comply with specific governance mechanisms for high-risk systems, such as documentation, automatic recording of events (sketched below), tests for reliability and robustness, and data management to prevent bias.
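The “automatic recording of events” requirement is worth a concrete illustration. Here is a minimal Python sketch of one way to log every call to an AI decision function; the decorator, log format and the toy credit-scoring function are illustrative assumptions of mine, not anything the bill specifies.

```python
# Minimal sketch of "automatic recording of events" for a high-risk
# AI system: every decision call is logged with inputs, output and a
# timestamp. Names and log format are illustrative.

import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(fn):
    """Decorator that records each invocation of an AI decision function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        audit_log.info(
            "ts=%s fn=%s args=%r kwargs=%r result=%r",
            datetime.now(timezone.utc).isoformat(),
            fn.__name__, args, kwargs, result,
        )
        return result
    return wrapper

@audited
def credit_score(income: float, debt: float) -> str:
    # Stand-in for a real model inference call.
    return "approve" if income > 2 * debt else "review"

credit_score(80_000, 25_000)  # logged automatically
```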
Taiwan: AI Basic Act, 2024. The draft stresses the importance of protecting intellectual property in the use of training data for AI development. Around a year ago, Taiwan’s Intellectual Property Office stated a position on AI training data similar to that of its U.S. counterpart: using others’ copyrighted work to train AI infringes the owners’ copyright unless the use is licensed by the owners or passes the fair use test.
Other mentions: Japan (where the AI Legal Study Framework Group is aiming to create legislation by 2025, and where the role of AI systems in detecting natural disasters is of great concern), Singapore (which has a marked preference for legislation on AI as it applies to financial markets), the UAE (which is not aiming for hard legislation, but whose policy talks a lot about encouraging investment in the UAE and talent development), India (the Digital India Act and the Digital Personal Data Protection Act, both on the way), Australia (the Voluntary AI Ethics Principles, followed by the Mandatory Guardrails for AI in High-Risk Settings and the Voluntary AI Safety Standard), Russia (Digital Innovation and AI in Experimental Legal Regimes), Canada (Bill C-27, the Artificial Intelligence and Data Act) and Indonesia (the Indonesian National Strategy on Artificial Intelligence, but no legislation yet).
***
Legislation seems to fall along a continuum. At one end you have the hardliners, like the EU, China and Brazil, closely followed by the USA; these come with mandatory procedures, clear classifications, and penalties for AI systems – penalties that are deterrent, some would even say hostile. At the other end, you have nations like Saudi Arabia, the UAE, Japan, Singapore, Indonesia and India that take a largely hands-off approach, presumably to encourage talent development and foreign investment. In the middle are nations like the UK, Australia, South Korea and Taiwan that seem to be finding their way to tougher legislation. The approach here seems to be “we suggest you follow these guidelines and perform these procedures, and your enterprise should be safe” (with penalties following from existing sectoral laws).
What lies ahead? In the next ten years, the following will be increasingly in focus: cross-border agreements on the legislation of AI systems, legislation to check the monopolistic power of the ‘Big Six’ (AMMANO), debates on the fairness of penalties and the nature of risk created by AI systems, and debates on the use of AI in warfare, contracts and criminal justice.
Let’s wait and watch. The jury is still out.
***
Cover Image: Tablet from the Code of Hammurabi, among the earliest known forms of written legislation, circa 1750 BCE
Disclaimers and acknowledgments:
This article should not be construed as legal advice, but as a basis for further discussions and debate on the dimensions of legislation and the urgency thereof.
The opinions here are personal and not necessarily those of any organization.
Any errors or omissions are purely due to oversight and may kindly be condoned. Feel free to reach out with comments and corrections.
No generative AI was used in the creation of this article, though some Google research was used, as well as some texts on law and artificial intelligence.