“The USA Innovates, China Replicates and Europe Regulates”.
The Artificial Intelligence Pact.
The AI Act, the European regulation on artificial intelligence, has been published in the Official Journal of the European Union. After months of anticipation, leaked texts, and insider updates, the final text is now official.
This milestone marks a significant step forward in regulating artificial intelligence within the European Union. The comprehensive framework aims to ensure that AI is developed and deployed responsibly, and the Act is now law for all intents and purposes. It is scheduled to enter into force in 20 days, at which point the roadmap begins for implementing what the Commission, the Council and the European Parliament have presented as the first law in the world to deal comprehensively with artificial intelligence.
The AI Act will come into force on August 1st.
Background.
The European Commission has announced the AI Pact, an agreement to comply with the AI Act, the European regulation on artificial intelligence, ahead of schedule. Already 400 companies have raised their hands to show their interest in the AI Pact (those who want to join can do so here: AI Pact), anticipating the compliance process so as to be ready when the EU rules become binding for everyone.
The Artificial Intelligence Act.
On March 13, 2024, the European Parliament voted to approve the world's first comprehensive legal framework for the regulation of artificial intelligence: the Artificial Intelligence Act (the "AI Act"). The increasing use of AI tools and the rapid spread of generative AI such as ChatGPT have raised concerns about their ethical, legal and social implications. In this context, the new European regulation aims to govern the design and use of artificial intelligence: to strengthen trust in AI and control its impact on society, businesses and individuals (in particular on their fundamental rights), while creating an environment conducive to research and development, business and innovation. Today, only 3 per cent of global AI unicorns come from the EU, while private investment in AI is 14 times higher in the United States and 5 times higher in China. By 2030, the global AI market is expected to reach $1.5 trillion, and European companies need to be able to access it without getting caught up in red tape. In light of this, the European Union has acted as a pioneer by developing the world's first AI regulation.
The Regulation.
The text, in its initial version proposed by the European Commission in 2021, responded to the growth of AI technology since the 2010s and evolved in response to the development of generative AI. The currently available consolidated version should enable AI players to anticipate the main issues related to the implementation of the AI Act and to start integrating this legislation into their compliance strategies. The Regulation provides:
i. A regulatory framework for artificial intelligence;
ii. A consistent risk-based approach, using a sliding scale of risks to fundamental rights, to determine whether an artificial intelligence system can be legitimately developed and used;
iii. A Europe-wide harmonized definition of key concepts (in particular, the definition of an artificial intelligence system);
iv. A set of strengthened obligations for artificial intelligence actors;
v. A European governance structure with dedicated authorities;
vi. A set of penalties for non-compliance with the regulation; and
vii. A clear and detailed agenda for the entry into force and implementation of the AI Act.
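As a rough illustration of the risk-based approach in point (ii), the Act's four commonly described risk tiers can be sketched in code. The example use cases and their mapping below are illustrative assumptions only; real classification requires legal analysis of the Act's prohibitions and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's sliding scale of risk, from banned to unregulated."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before and after market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of example use cases to tiers; a real
# inventory would be built from internal due diligence, not a dict.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

The point of such a sketch is that obligations attach per system, not per company: the same organisation may operate systems in several tiers at once.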
Some highlights of the AI Act.
From plenary approval to implementation.
Following plenary approval, the text of the AI Act was revised before being approved by the Council of the European Union and formally adopted. Once formally adopted and published in the Official Journal of the European Union, it enters into force twenty days after publication. From that point, its requirements and obligations apply in stages: implementation is gradual, and the Act will be fully applicable thirty-six (36) months after its entry into force.
Compliance and Sanctioning Aspects
In terms of territorial application, this Regulation is in line with the main compliance regulations created by the European Union over the last two decades; like the General Data Protection Regulation ("GDPR"), it applies extraterritorially. It is global in scope and therefore creates new obligations for organisations in all sectors and throughout the supply chain that market or use AI systems in the European Union. On 24 January 2024, the European Commission designated a European AI Office, whose role is to support and ensure the proper implementation of the AI Act. In this way, the EU sought to ensure a coordinated European implementation of the future regulation.
The tasks of this office include:
1. Contributing to a strategic, coherent and effective EU approach to international initiatives on AI, in coordination with Member States and in line with EU positions and policies;
2. Cooperating with all relevant EU bodies, offices and agencies; and
3. Cooperating with Member States' authorities and bodies on behalf of the European Commission.
In terms of sanctions, the European Union has demonstrated its political will to provide itself with the means to enforce the AI law:
The most serious penalties for non-compliance amount to:
- EUR 35 million; or
- 7 per cent of the total annual worldwide turnover in the previous financial year, for legal persons,
whichever is higher.
SMEs and start-ups benefit from a modified sanctioning regime: for each penalty in Article 71 ('Sanctions') that offers such an option, the applicable fine is capped at the lower of the percentage or the fixed amount.
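As an arithmetic illustration only (the thresholds and article numbering should be checked against the final text), the "whichever is higher" rule for large undertakings and the "lower amount" cap for SMEs can be sketched as follows:

```python
def max_fine_eur(turnover_eur: float, is_sme: bool,
                 fixed_cap_eur: float = 35_000_000,
                 pct_cap: float = 0.07) -> float:
    """Illustrative sketch of the AI Act's top sanction tier.

    Large undertakings: the higher of the fixed cap and the
    percentage of worldwide annual turnover applies.
    SMEs and start-ups: the lower of the two amounts applies.
    """
    pct_amount = pct_cap * turnover_eur
    if is_sme:
        return min(fixed_cap_eur, pct_amount)
    return max(fixed_cap_eur, pct_amount)

# For EUR 1 billion turnover, 7% is roughly EUR 70 million, so the
# percentage governs for a large company, while the fixed
# EUR 35 million amount is the lower (and thus applicable) cap for an SME.
print(max_fine_eur(1_000_000_000, is_sme=False))
print(max_fine_eur(1_000_000_000, is_sme=True))
```

The sketch shows why the regime bites hardest for very large companies: beyond EUR 500 million in turnover, the percentage term always exceeds the fixed amount.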
Some considerations on the AI Act.
The AI Act places restrictions on uses of artificial intelligence that pose a high risk to people's fundamental rights, for example in healthcare and education. Certain uses deemed to pose an "unacceptable risk" are prohibited outright. Some of these are rather vaguely defined, such as AI systems that employ 'subliminal, manipulative or deceptive techniques to distort behaviour and impair informed decision-making' or that exploit vulnerable people.
The AI Act also bans systems that infer sensitive characteristics, such as a person's political views or sexual orientation, as well as the use of real-time facial recognition software in public places. The creation of facial recognition databases by scraping images from the Internet, as in the case of Clearview AI, will also be banned.
At the same time, it will become more evident when we are interacting with an AI system. In fact:
1. Technology companies will have to tag deepfakes and AI-generated content and warn people when they interact with a chatbot or other AI system. In addition, the AI Act will require companies that develop media using generative AI to make that content detectable. This is promising news in the fight against disinformation and will give a strong impetus to research on watermarking and content provenance.
2. Citizens will be able to lodge complaints if they have been harmed by the use of AI. In fact, the regulation provides for the establishment of a new European Office to coordinate compliance, implementation, and enforcement. In this regard, EU citizens will be able to lodge complaints about AI systems when they suspect they have been harmed by one and will be able to receive explanations. This is, without a doubt, an important first step to empower people in an increasingly automated world.
3. AI companies will have to be more transparent. Companies developing AI technologies in so-called "high-risk" sectors, such as critical infrastructure or healthcare or even education, will have new, increasingly stringent obligations when the law comes fully into force in three years (36 months). These include better data governance, ensuring human oversight, and assessing how these systems will affect people's rights.
Artificial intelligence and European rules: eight useful actions for companies.
The potential effect of generative AI on productivity is considerable, with the potential to contribute trillions of dollars to the global economy. Research indicates that generative AI could generate annual value of between $2.6 trillion and $4.4 trillion across the 63 use cases examined. To put this into perspective, the entire GDP of the UK in 2021 was $3.1 trillion. Adding the value of generative AI could therefore increase the overall economic effect of AI by 15-40%[1].
With the entry into force of the AI Act, it will therefore be necessary to put in place a series of actions that are useful for compliance and maximum transparency and that can be summarised here:
1. Mapping systems already in use that could be considered artificial intelligence systems. The definition of artificial intelligence systems is very broad and does not only include general purpose artificial intelligence systems. Internal due diligence must be performed to qualify them correctly.
2. Including an obligation to comply with the Artificial Intelligence (AI) Act in contracts with suppliers is a prudent practice to ensure legal compliance and ethics in business operations. For existing contracts, it may be necessary to initiate a renegotiation phase to include these clauses or to update existing contractual terms to reflect AI regulations. This may require close cooperation with suppliers and, in some cases, the involvement of legal advisors specialised in AI law or emerging technologies.
3. The inclusion of contractual clauses requiring compliance with AI laws is certainly an important step to ensure compliance and mitigate legal and ethical risks. However, it is equally important to consider the need for flexibility and adaptability should there be significant changes in regulations, operational requirements or market conditions that could affect the implementation and operation of AI systems. Ultimately, it is important to strike a balance between the need for legal compliance and the need for operational flexibility when implementing and operating AI systems. A collaborative and flexible approach between contractual parties can help ensure that investments in AI are protected and that the company can adapt effectively to changes in the regulatory and operational landscape.
4. Adopting internal technical and operational policies to regulate the use of AI is crucial to ensure legal compliance, manage risks and promote ethical and responsible use of such technologies within the company. The European AI Act introduces different categories of AI systems, each with specific obligations, so it is essential that companies develop internal policies adapted to their needs and to the characteristics of the systems they use. Robust internal AI governance is what allows a company to manage the risks and maximise the benefits of adopting these technologies; effective governance requires a holistic approach that involves different business functions and takes into account AI regulations and best practices.
5. It is crucial for companies to implement solutions that ensure compliance with privacy and intellectual property regulations when using artificial intelligence (AI) systems. This is especially important considering the legal implications and potential consequences of non-compliance, such as fines, penalties and reputational damage: such solutions are essential to mitigate legal risks and protect the company's reputation[2].
6. Adopting a specific AI Act compliance tool is key to ensuring that internal policies and procedures are effectively implemented and adhered to in the use of artificial intelligence (AI) systems. These tools can help companies systematically and thoroughly assess compliance with the provisions of the AI Act and other relevant regulations, identify and mitigate risks associated with the use of AI systems, and translate internal policies and procedures into concrete and measurable actions (KPIs). Companies should ensure that the tool is properly customised, used effectively and integrated into their overall approach to AI governance.
7. Protecting internal trade secrets and confidential information is critical to preserving the company's competitiveness and maintaining the trust of customers and business partners. Given the sensitive nature of the information involved and the risk of unauthorised disclosure, it is essential to implement robust technical and organisational measures to prevent employee misuse and ensure information security. This requires a holistic approach: collaboration between different departments and the adoption of clear policies and procedures are key to an effective compliance and information-security programme.
8. Finally, employee training is a key element in fostering an AI-compliant culture within the company. Investing in internal training can help create a solid base of knowledge and skills needed to successfully drive the implementation and responsible use of AI systems.
Concluding Remarks: Ethical and responsible development, with Europe taking the lead.
In conclusion, the AI Regulation represents a milestone and reflects the EU's commitment to promoting the ethical and responsible development of AI. By addressing fundamental challenges such as transparency, accountability and risk management, it aims to harness the potential of AI while safeguarding fundamental rights and values. However, having endowed the European Single Market with a regulation of this kind should not hold our companies back from investing resources in this field, above all so as not to lose pace with the United States and China, which are racing ahead. To summarise, a wry phrase is often quoted:
"The USA Innovates, China Replicates and Europe Regulates."
One might think that Europe is not in a position to compete with the US and China in the field of AI, and that this is perhaps why it decided to regulate pre-emptively, to curb possible uses and abuses of AI. In essence, we are playing defence. In my view, however, this approach is entirely wrong. Europe is in a position to compete with the US and China, provided it decides to stop being the referee and field its own team.
The continuous evolution of technology will make constant dialogue and collaboration between stakeholders essential to keeping regulatory frameworks effective, as will a spirit of initiative in shaping the future of AI without technological gaps which, in a sector like this, could also have strong geopolitical repercussions.
[1] (Antonio Lanotte, TNI - US, “From Artificial to Circular Intelligence: The Role of Generative AI”)
[2] (Antonio Lanotte, TNI - US, “Keys to Maintaining Trust and Credibility With Stakeholders”)