🚨AI&Tech Legal Digest - September 13, 2024
Hi there! 👋 Thanks for stopping by. Welcome to the AI&Tech Legal Digest 🙌
Here, I bring you the latest key legal news and trends in the AI and tech world. Stay updated with expert insights, state approaches, regulations, key recommendations, safeguards, and practical advice in the tech and AI field by clicking the "subscribe" button above ✅ Enjoy your read!
EU Court Upholds $2.7 Billion Antitrust Fine Against Google
The Court of Justice of the European Union (CJEU) has dealt a significant blow to Google, rejecting the tech giant's appeal against a €2.42 billion ($2.7 billion) antitrust fine imposed by EU regulators in 2017. The fine, one of three substantial penalties levied against Google for anti-competitive practices, stemmed from the company's misuse of its dominant market position to favor its own comparison shopping service over smaller European competitors.
The CJEU's ruling underscores a critical distinction in EU competition law: while holding a dominant market position is not inherently illegal, abusing that position to hinder fair competition is prohibited. This decision not only reaffirms the EU's stance on protecting market competition but also sets a precedent for ongoing antitrust cases against Big Tech. With Google facing additional challenges to its Android and AdSense rulings, and fresh charges threatening its adtech business, this verdict signals a continuing struggle between tech giants and EU regulators in shaping the future of digital markets.
EU Court Upholds €13bn Tax Bill Against Apple in Landmark Ruling
The European Court of Justice (ECJ) has delivered a significant blow to Apple, overturning a previous lower court decision and reinstating the European Commission's 2016 order for the tech giant to pay €13 billion in back taxes to Ireland. This landmark ruling marks a major victory for EU Competition Chief Margrethe Vestager and bolsters the Commission's efforts to crack down on preferential tax arrangements between multinational corporations and EU member states.
The ECJ's decision centers on the Commission's assertion that Apple received illegal state aid through favorable tax rulings from Irish authorities, effectively paying a tax rate of just 0.005% in 2014. By rejecting Apple's arguments and supporting the Commission's stance, this ruling not only challenges the tech industry's tax practices but also sets a precedent for future cases involving corporate tax arrangements within the EU. As the dust settles on this long-running legal battle, the focus now shifts to the broader implications for international tax policy, corporate accountability, and the balance of power between global tech firms and regulatory authorities.
This ruling comes at a crucial time, coinciding with a separate ECJ decision upholding a €2.4 billion antitrust fine against Google. Together, these verdicts signal a strengthening of the EU's regulatory stance against Big Tech. While Apple maintains that the case was about which government should receive the tax payments rather than the amount owed, the Irish government has pledged to respect the court's findings. As multinational corporations grapple with evolving tax landscapes and increased scrutiny, this decision may prompt a reevaluation of corporate tax strategies and foster more transparent fiscal practices within the European single market.
Global Summit Advances AI Guidelines for Military Use; China Abstains
In a significant development for international military ethics, approximately 60 countries, including the United States, have endorsed a "blueprint for action" at the Responsible AI in the Military Domain (REAIM) summit in Seoul. This non-binding document outlines principles for the responsible use of artificial intelligence in military applications, marking a concrete step forward from last year's more generalized "call to action" in The Hague.
The blueprint addresses crucial concerns such as preventing AI-assisted proliferation of weapons of mass destruction, maintaining human control over nuclear weapons, and implementing risk assessment protocols. However, the absence of China's endorsement, despite its attendance at the summit, underscores the ongoing challenges in achieving global consensus on AI governance in military contexts. As nations grapple with the rapid advancement of AI technologies in warfare, exemplified by Ukraine's deployment of AI-enabled drones, this blueprint represents a critical effort to establish international norms and safeguards in an increasingly complex military landscape.
By emphasizing human control, risk assessments, and confidence-building measures, the blueprint aims to create a framework for responsible AI use that balances technological advancement with ethical considerations. This multi-stakeholder approach, co-hosted by the Netherlands, Singapore, Kenya, and the United Kingdom, seeks to ensure that discussions on military AI are not dominated by a single nation or entity, fostering a more inclusive and diverse dialogue on this critical issue.
Meta's AI Data Practices Spark Privacy Concerns: Australians Left Without Opt-Out Option
Meta, the parent company of Facebook and Instagram, has come under scrutiny for its practice of using public user content to train artificial intelligence models, with a stark disparity in user rights between Europe and Australia. During a parliamentary committee hearing, it was revealed that while European users can opt out of having their data used for AI training due to GDPR regulations, Australian users are not afforded the same privilege.
Meta's director of privacy policy, Melinda Claybaugh, confirmed that the company's AI training draws from public posts and photos of users over 18, but declined to commit to offering an opt-out option for Australians in the future. This revelation has raised significant concerns about digital privacy rights and data sovereignty, with Labor senator Tony Sheldon and Greens senator David Shoebridge highlighting the lack of explicit consent from millions of Australian users whose personal content is being used to train AI models.
The controversy underscores a growing global divide in data protection standards and user rights. As governments worldwide grapple with regulating AI and data usage, this case exemplifies the challenges in balancing technological advancement with individual privacy rights. The Australian government now faces pressure to address this disparity, potentially leading to new legislative measures to protect citizens' data and align with more stringent international standards like the GDPR.
US Senators Call for Antitrust Probe into AI Summarizers
A group of Democratic Senators, led by Amy Klobuchar, has urged the Federal Trade Commission (FTC) and Department of Justice (DOJ) to investigate whether AI-powered summarization features on search platforms violate U.S. antitrust laws. This move highlights growing concerns over the impact of generative AI on content creators, journalists, and the broader digital ecosystem.
The Senators argue that AI summarizers, such as Google's AI Overviews and OpenAI's SearchGPT, potentially harm content creators by repurposing their work without compensation. They contend that these features disrupt the traditional model in which search engines directed traffic to original content sources, instead providing AI-generated summaries that may reduce visits to source websites. This shift, they claim, could have a "potentially devastating impact" on news organizations and content creators, exacerbating the already challenging landscape for journalism.
This request for an antitrust investigation underscores the complex interplay between technological innovation and fair competition in the digital age. As AI continues to reshape how information is accessed and consumed online, regulators face the challenging task of balancing the benefits of AI-driven search capabilities with the need to protect content creators and maintain a diverse, competitive digital marketplace. The outcome of any potential investigation could have far-reaching implications for the future of AI in search engines and the sustainability of digital content creation.
Australia Takes Tough Stance on Misinformation with New Legislation
Australia is set to introduce groundbreaking legislation that could impose fines of up to 5% of global revenue on social media giants failing to curb the spread of misinformation. This bold move, part of a broader regulatory crackdown, positions Australia at the forefront of global efforts to hold tech platforms accountable for content moderation and user safety.
The proposed bill, to be presented in parliament, requires tech companies to establish regulator-approved codes of conduct for preventing the spread of dangerous falsehoods. Targeting content that threatens election integrity, public health, or could incite harm against individuals or groups, the legislation aims to strike a balance between combating misinformation and protecting free speech. However, it has sparked debate among industry stakeholders, with concerns raised about potential overreach and the impact on legitimate political discourse.
As this legislation unfolds, it sets a significant precedent for how nations can assert sovereignty over digital spaces dominated by global tech giants. With the backdrop of an upcoming federal election and growing concerns about the influence of misinformation on democratic processes, Australia's approach could serve as a model for other countries grappling with similar challenges in the digital age. The tech industry's response and the practical implementation of these regulations will be closely watched by policymakers and digital rights advocates worldwide.
EU Regulator Launches Probe into Google's AI Data Practices
Ireland's Data Protection Commission (DPC), the lead privacy watchdog for major tech companies in the European Union, has initiated an investigation into Google's compliance with EU data protection laws in the development of its AI models. The inquiry specifically focuses on the tech giant's Pathways Language Model 2 (PaLM 2), questioning whether Google adequately safeguarded EU users' personal data before utilizing it for AI development.
This probe marks a significant escalation in regulatory scrutiny of AI technologies within the EU, forming part of a broader effort by data protection authorities to ensure compliance with the General Data Protection Regulation (GDPR) in the rapidly evolving field of artificial intelligence. The investigation follows recent actions against other tech platforms, including X (formerly Twitter), which agreed to cease training its AI systems on EU users' data without explicit consent.
As the AI landscape continues to expand, this inquiry underscores the growing tension between technological innovation and data privacy rights. With the EU positioning itself as a global leader in tech regulation, the outcome of this investigation could have far-reaching implications for how AI companies handle user data and develop their models. It also highlights the increasing pressure on tech giants to balance their AI ambitions with stringent data protection standards, potentially shaping the future of AI development and deployment worldwide.
EU Consumer Groups Target Videogame Giants Over Deceptive Practices
In a significant move against the videogame industry, the European Consumer Organisation (BEUC) and 22 of its member organizations have filed a complaint with European Union regulators, accusing major gaming companies of misleading consumers, particularly children, into excessive spending. The complaint targets industry giants including Epic Games, Electronic Arts, Roblox, Microsoft's Activision Blizzard, Mojang Studios, Supercell, and Ubisoft.
The consumer groups allege that these companies employ deceptive tactics, especially in the use of premium in-game currencies, which obscure the real cost of digital items and exploit children's vulnerability. This action highlights growing concerns about gaming addiction and predatory monetization strategies in the industry. The BEUC is calling for regulators to enforce "real-world rules" in virtual gaming environments, emphasizing the need for transparency in pricing and consumer rights protection.
This complaint represents a pivotal moment in the ongoing debate over the ethics of videogame monetization and the industry's responsibility towards its younger audience. As EU authorities consider this complaint, it could potentially lead to stricter regulations on in-game purchases and currency systems, reshaping how videogame companies operate in the European market. The industry's response, defending their practices as well-established and understood by players, sets the stage for a complex regulatory battle that could have far-reaching implications for the global gaming landscape.
Google Faces Second Major Antitrust Trial Over Digital Ad Dominance
The U.S. Department of Justice has launched its second antitrust trial against Google, focusing on the tech giant's alleged monopolization of the digital advertising industry. This case, which began on September 9th in Virginia, follows closely on the heels of Google's recent loss in a landmark antitrust case concerning its online search practices.
At the heart of the DOJ's argument is the claim that Google has built a "trifecta of monopolies" through strategic acquisitions and anticompetitive practices in the ad tech space. The government alleges that Google's control over various facets of digital advertising—from ad exchanges to tools for both publishers and advertisers—allows it to unfairly dominate the market, leading to higher costs for advertisers and lower revenues for publishers.
The trial's outcome could have far-reaching implications for Google's primary revenue source and the broader digital advertising ecosystem. Unlike the previous case, the DOJ is seeking specific remedies, including the potential breakup of parts of Google's advertising technology business. With high-profile witnesses from major publishers and ad tech companies set to testify, this trial promises to shed light on the complex and often opaque world of online advertising, potentially reshaping the digital landscape for years to come.
White House Secures Voluntary AI Commitments to Combat Deepfake Abuse
In a significant step towards addressing the ethical challenges posed by artificial intelligence, the White House has announced voluntary commitments from several major AI companies to combat nonconsensual deepfakes and child sexual abuse material (CSAM). Tech giants including Adobe, Microsoft, Anthropic, OpenAI, and data provider Common Crawl have pledged to take responsible measures in sourcing and safeguarding datasets used for AI training.
Key commitments include responsible sourcing and safeguarding of AI training datasets and concrete measures to combat nonconsensual deepfakes and child sexual abuse material.
While these commitments represent progress in addressing AI-related ethical concerns, it's important to note that they are self-policed and not all AI vendors have participated. The absence of companies like Midjourney and Stability AI from this initiative highlights the challenges in achieving industry-wide standards.
This development underscores the ongoing struggle to balance technological innovation with ethical considerations in AI development. As the field continues to evolve rapidly, the effectiveness of voluntary commitments versus regulatory measures remains a topic of debate among policymakers, industry leaders, and ethics advocates.
UK Proposes New Legal Protections for Digital Assets
The UK government has introduced a groundbreaking bill to Parliament that aims to provide new legal protections for digital assets such as cryptocurrencies, non-fungible tokens (NFTs), and carbon credits. This Property (Digital Assets etc) Bill seeks to legitimize digital assets as "personal property," bringing them in line with traditional assets under UK law.
The proposed legislation responds to a 2023 Law Commission report highlighting the need to update current legal provisions around personal property rights in light of technological advancements. If passed, the bill would create a third category of personal property alongside existing "things in possession" (tangible goods) and "things in action" (intangible assets like shares and intellectual property).
This new legal framework would have significant implications for a range of legal proceedings involving digital assets.
While the bill aims to cover a broad spectrum of digital assets, the primary focus appears to be on crypto tokens like cryptocurrencies and NFTs. The legislation is still in its early stages, with debates and iterations to come in both the House of Lords and House of Commons. If passed, it could significantly enhance the legal standing of digital assets in the UK, potentially influencing global approaches to regulating this rapidly evolving sector.
In this fast-paced, ever-changing world, it can be challenging to stay updated with the latest legal news in the AI and tech sectors. I hope you found this digest helpful and were able to free up some time for the real joys of life.
Don't forget to subscribe to receive the weekly digest next Friday.
Anita