Rise of the Machines: Shaping the Legal Response to AI’s Rapid Expansion
AI: Boon or Bane? Weighing the Pros and Cons
A Goldman Sachs report indicates that #AI could automate up to a quarter of current work tasks in the US, exposing the equivalent of 300 million full-time jobs to automation globally. At the same time, AI could boost labor productivity growth and lift global GDP by up to 7%. The legal profession is among those at the highest risk of AI #automation, with an estimated 44% of legal tasks potentially automatable.
In response to these concerns, over 1,000 AI experts, including Elon Musk, Emad Mostaque, and Steve Wozniak, signed an open letter advocating for an immediate six-month pause on developing “giant” AIs. The letter argues that powerful AI systems should only be developed once their positive effects are ascertained and their risks managed, with government intervention advised if necessary.
Investor Ian Hogarth expresses concern about the swift development of Artificial General Intelligence (AGI) or “God-like AI” by private companies without proper oversight or regulation. Hogarth argues that the race to develop #AGI poses significant risks to humanity, necessitating a slowdown and increased regulation, and suggests that governments should assume control by regulating access to frontier hardware.
Allegedly, Samsung Electronics employees leaked top-secret data by using ChatGPT to assist with their tasks. Confidential material, including source code, meeting notes, and optimization sequences, was entered into the chatbot and thereby passed into OpenAI's possession. In response, many companies have imposed total firm-wide bans on ChatGPT use.
Responding to AI’s Rapid Expansion: A Race Against Time
As prominent figures call for prompt AI regulation, how does the legal world, soon to be 44% AI-automated, respond?
In January, the Council of Europe published the revised zero draft of the #Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. The draft outlines the Convention's purpose and scope: to establish fundamental principles and rules for the design, development, and application of AI systems consistent with respect for human rights, democracy, and the rule of law. These principles include equality, non-discrimination, privacy, personal data protection, accountability, legal liability, transparency, oversight, safety, and safe innovation, and the Convention provides for accountability and redress for any harm AI systems cause.
The European Union AI Act is expected to be the landmark legislation governing AI usage in Europe. The Act will categorize AI tools by their perceived level of risk and will apply to anyone providing a product or service that utilizes AI. Providers and users of high-risk AI systems will likely be required to undertake rigorous risk assessments and maintain records of their activities. Non-compliance could result in fines of up to 30 million euros or 6% of global annual turnover, whichever is higher. Although the Act is expected to pass this year, concrete deadlines are still pending, and affected parties will be granted a grace period of approximately two years to comply.
The European Data Protection Board (#EDPB) has recently established a dedicated ChatGPT task force, a first step toward a common policy on privacy rules for artificial intelligence. The move follows Italy's unilateral action to curb ChatGPT last month, with the data protection authorities of Germany and Spain considering similar measures. Through the task force, the EDPB aims to foster cooperation and exchange information on potential enforcement actions.
On April 17, 2023, EU lawmakers urged the European Commission and the US President (The White House) to convene a global summit on AI to establish governing principles for the technology. The lawmakers seek to ensure AI is human-centric, safe, and trustworthy and does not pose risks to society and the environment. They also aim to update the #EU's approach to AI regulation, known as the AI Act, which could serve as a blueprint for other countries.
#China's cyberspace regulator, the Cyberspace Administration of China, has revealed draft measures for managing generative artificial intelligence services. The regulator mandates that companies submit security assessments to authorities before launching their services to the public, and requires service providers to obtain users' real identities and related information. Providers failing to comply may face fines, service suspensions, or criminal investigations. The public can comment on the proposals until May 10, and the measures are anticipated to be enacted later this year.
In the #US, the push for AI regulation is also gaining traction. United States Senate Majority Leader Chuck Schumer is spearheading the effort to establish a framework for AI regulation, focusing on a structure that can adapt to advancements in AI while balancing accountability, security, and innovation. The potential regulations will center on four guardrails: the identity of the algorithm’s trainer and its intended users; data source disclosure; explanations of AI response processes; and strong ethical boundaries. The objective is to increase AI transparency through these guidelines.
The AI Regulatory Conundrum: What Matters Most?
Compared to this slow-moving regulatory progress, the warnings keep multiplying. A few days ago, Google chief executive Sundar Pichai discussed the potential of AI and its impact on society, calling it potentially more profound than the discovery of fire or electricity. Yet he also admitted that he does not fully understand how it works and that its development might slip beyond its creators' control. Pichai says that AI's potential downsides keep him up at night.
In conclusion, the growing calls for AI regulation from lawmakers and technologists underscore the urgent need to address AI's rapidly expanding influence on our societies. As the potential harms, job displacement, the spread of disinformation, and the risks attached to AGI development, become increasingly apparent, regulatory efforts must accelerate. Yet while initiatives are emerging at national and supranational levels in the US, Europe, and China, the relentless drive for innovation and efficiency propels companies to adopt AI, amplifying the risks of its widespread use. Now more than ever, it is imperative to strike the right balance between regulation and innovation: harnessing AI's transformative potential responsibly while mitigating the risks that accompany its exponential growth, including the possible emergence of AGI.