🚨AI&Tech Legal Digest || October 25, 2024
Hi there! 👋 Thanks for stopping by. Welcome to the AI&Tech Legal Digest 🙌
Here, I bring you the latest key legal news and trends in the AI and tech world. Stay updated with expert insights, state approaches, regulations, key recommendations, safeguards, and practical advice in the tech and AI field by clicking the "subscribe" button above ✅ Enjoy your read!
LinkedIn Hit with €310M GDPR Fine Over EU Data Processing Violations
Ireland's Data Protection Commission has imposed a €310 million ($335 million) fine on LinkedIn for illegally processing EU users' personal data for targeted advertising purposes. The penalty, which ranks as the sixth-largest under GDPR since its 2018 implementation, comes with a mandate requiring the Microsoft-owned professional networking platform to bring its data processing practices into compliance with EU regulations.
The regulatory action stems from a French complaint that triggered an Irish investigation into LinkedIn's data practices. Deputy Commissioner Graham Doyle characterized the company's unauthorized data processing as a "clear and serious violation" of fundamental data protection rights. LinkedIn has responded that the case relates to advertising practices dating from 2018 and maintains that it believes it has complied with the GDPR.
This enforcement action is part of a broader EU crackdown on Big Tech companies' data practices, following similar penalties against other social media platforms. Notable among these was Meta's record €1.2 billion fine in May 2023 for US data transfers and TikTok's €345 million penalty for children's data handling. As LinkedIn's European headquarters is based in Ireland, the Irish regulator holds primary responsibility for enforcing EU rules against the company.
AI Chatbot Company Character.AI Faces Lawsuit Over Teen's Suicide
A Florida mother has filed a groundbreaking lawsuit against Character.AI and Google following her 14-year-old son's suicide, alleging the AI chatbot company's service directly contributed to his death. The lawsuit, filed in Orlando federal court, claims the company targeted Sewell Setzer with inappropriate content and created chatbots that misrepresented themselves as real people, including a licensed psychotherapist and an adult partner.
Character.AI has responded by expressing condolences and implementing new safety features, including suicide prevention resources and enhanced content restrictions for users under 18. The company's chatbot service, which allows users to create characters that engage in conversations using large language model technology, is now under scrutiny for its role in facilitating potentially harmful interactions with minors.
The lawsuit extends to Google, naming the tech giant as a "co-creator" on the grounds of its extensive contribution to Character.AI's technology development and its subsequent hiring of the company's founders. Google, however, has denied any involvement in developing Character.AI's products, despite having secured a non-exclusive license to the company's technology in an August deal that also brought the founders on board.
Google Faces Potential Breakup Following Landmark US Antitrust Ruling
In a pivotal antitrust case, US District Judge Amit Mehta has ruled that Google illegally monopolized the search market, leading the Justice Department and state attorneys general to consider requesting a forced breakup of the tech giant. The ruling found that Google's $26 billion in payments to make its search engine the default option on devices effectively blocked market competition.
The case will proceed to a remedies trial in April 2025, where authorities are weighing several significant options, including requiring Google to divest key assets such as the Android operating system, Chrome browser, or Google Play Store. Additional proposed remedies include mandatory data sharing with competitors and restrictions on exclusive contracts. The Justice Department is also considering measures to prevent Google from leveraging its search dominance in AI development.
Google plans to appeal the ruling, maintaining that its default search agreements are comparable to retail shelf-space arrangements and emphasizing the judge's acknowledgment of its superior product quality. This case represents one of several antitrust challenges facing Google, including separate actions over its advertising technology, app distribution, and Play Store practices. The outcome could result in the largest forced corporate breakup since AT&T's dissolution in 1984.
UK Competition Watchdog Launches Investigation into Google-Anthropic Partnership
Britain's Competition and Markets Authority (CMA) has initiated a formal investigation into Alphabet's strategic partnership with AI startup Anthropic, marking a significant escalation of regulatory scrutiny of Big Tech investments in AI companies. The probe focuses on Alphabet's substantial financial commitment, including an initial $500 million investment and a promised additional $1.5 billion, alongside the company's provision of Google Cloud services to Anthropic. The CMA has set December 19 as the deadline for its Phase 1 decision, which will determine whether a deeper investigation is warranted.
Both companies have defended their partnership, with Anthropic emphasizing its operational independence and ability to partner with multiple providers. An Anthropic spokesperson stressed that investor relationships do not compromise their corporate governance, while Alphabet maintained its commitment to an open AI ecosystem, stating that Anthropic retains the freedom to use multiple cloud providers. This investigation reflects growing regulatory concerns about potential market concentration in the AI sector, following similar scrutiny of partnerships between major tech companies and AI startups globally.
White House Issues Historic Mandate for AI Adoption in Defense and Intelligence Sectors
The Biden administration has issued a landmark national security memorandum directing the Pentagon and intelligence agencies to accelerate their adoption of artificial intelligence while establishing clear boundaries for its use. The directive, released Thursday, represents the nation's first comprehensive strategy for leveraging AI in national security, aiming to maintain U.S. technological supremacy while ensuring alignment with democratic values.
The memorandum establishes specific guardrails, prohibiting the use of AI for tracking Americans' free speech or circumventing nuclear weapons controls, while mandating agencies to monitor risks related to privacy, discrimination, and human rights. National Security Adviser Jake Sullivan emphasized the "breathtaking" pace of AI advancement and its potential impact across critical domains including nuclear physics, rocketry, and stealth technology.
The initiative reflects broader U.S. efforts to compete with China in the AI space, particularly in military applications where AI could enhance capabilities in areas such as satellite imagery analysis and autonomous drone operations. The directive also addresses supply chain security for critical AI components, notably high-end computer chips, while providing support for U.S. companies to protect their AI technology from foreign espionage.
Intel Prevails in Landmark EU Antitrust Case After 20-Year Legal Battle
The European Union's highest court has delivered a final ruling in favor of Intel, concluding a marathon antitrust dispute that began nearly two decades ago. The Court of Justice of the European Union dismissed the European Commission's appeal against a lower court's decision, effectively ending the case that originally resulted in a €1.06 billion ($1.14 billion) fine against the U.S. chipmaker for alleged anticompetitive practices.
The case centered on Intel's rebate program offered to major computer manufacturers including Dell, Hewlett-Packard, NEC, and Lenovo, which the Commission had initially deemed anticompetitive and designed to block rival Advanced Micro Devices. The court's decision came after an adviser to the court earlier this year stated that regulators had not properly performed an economic analysis of the rebates' effects, highlighting a crucial requirement for proving anticompetitive behavior in such cases.
The ruling sets a significant precedent for how regulators must approach cases involving rebates offered by dominant companies. While regulators generally oppose such practices due to potential anticompetitive effects, the court's decision emphasizes that enforcers must prove that discounts have actual anticompetitive effects before companies can be sanctioned, rather than merely presuming their harmful nature.
US Regulators Fine Goldman Sachs and Apple $89M Over Apple Card Violations
The U.S. Consumer Financial Protection Bureau (CFPB) has imposed an $89 million penalty on Goldman Sachs and Apple for significant violations in their joint credit card operations. The enforcement action addresses multiple consumer protection failures, including mishandling of transaction disputes, misleading practices regarding interest-free purchases, and improper processing of customer complaints that affected hundreds of thousands of cardholders.
Goldman Sachs will bear the majority of the penalty, paying $19.8 million in consumer redress and a $45 million fine, while Apple is required to pay $25 million. The CFPB's investigation revealed that Apple failed to forward tens of thousands of consumer disputes to Goldman Sachs, and that when disputes were transmitted, the bank failed to process them according to legal requirements, resulting in extended refund delays and damaged credit histories for customers.
The investigation also uncovered that customers were misled about interest-free payment options for Apple devices, with these options often only available when using Apple's Safari browser. As part of the settlement, Goldman Sachs faces restrictions on issuing new credit cards and must submit compliance plans 90 days in advance of any new credit card launches, while Apple must develop a plan to ensure compliance with consumer protection laws in its ongoing financial services operations.
Meta and Zuckerberg Win Dismissal of Child Safety Disclosure Lawsuit
A federal judge in San Francisco has dismissed a shareholder lawsuit against Meta Platforms and CEO Mark Zuckerberg that alleged misleading proxy statement disclosures regarding child safety measures on Facebook and Instagram. U.S. District Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate economic losses resulting from Meta's alleged inadequate disclosures about child protection strategies.
The court held that federal securities law did not require Meta to provide detailed information about sexually explicit content, child exploitation on its platforms, or rejected child protection strategies. Judge Breyer's dismissal with prejudice, which prevents refiling of the lawsuit, reinforced his earlier June ruling that many of Meta's statements about child safety commitment were "aspirational" and insufficient to support legal action.
Despite this victory, Meta continues to face significant legal challenges, including lawsuits from multiple state attorneys general alleging the company's role in child social media addiction. The company, along with other social media platforms like TikTok and Snapchat, is also defending against hundreds of lawsuits filed by children, parents, and school districts over social media addiction concerns.
Alcon Entertainment Files Lawsuit Over AI-Generated 'Blade Runner' Images in Tesla's Cybercab Launch
Alcon Entertainment has initiated legal action against Tesla and Warner Bros Discovery, alleging copyright infringement and false endorsement related to Tesla's recent Cybercab unveiling. The lawsuit claims that after being denied permission to use "Blade Runner 2049" imagery, Tesla proceeded to use AI-generated images that closely mimicked the film's distinctive visual style during its October 10 product launch event.
The production company, which owns rights to the Blade Runner franchise, argues that Tesla's unauthorized use of AI-generated imagery could create confusion among Alcon's brand partners, particularly as it prepares to launch "Blade Runner 2099" on Amazon Prime. Alcon emphasizes that it had invested hundreds of millions of dollars in building the Blade Runner 2049 brand, characterizing the alleged misappropriation as financially substantial.
The lawsuit specifically targets both Tesla and Warner Bros Discovery, the latter being Alcon's original distributor for "Blade Runner 2049." Alcon's complaint notably references concerns about association with Elon Musk's public behavior, stating that potential brand partners must consider his "highly politicized, capricious and arbitrary behavior" when contemplating Tesla partnerships.
Google Enhances AI Photo Editing Transparency with New Disclosure Features
Google is implementing new transparency measures for AI-edited photos in its Google Photos app, adding explicit disclosures for images modified using tools like Magic Editor, Magic Eraser, and Zoom Enhance. Starting next week, users will find a new "Edited with Google AI" indicator in the Details section of edited photos, expanding upon existing metadata tags.
The update comes in response to concerns about the proliferation of AI-modified content, particularly following the launch of the Pixel 9 phones with their advanced AI editing capabilities. While Google frames this as a step toward improved transparency, critics note that the disclosure remains relatively hidden from immediate view, as it requires users to actively check the Details section rather than providing visible in-frame watermarks.
The company's approach relies heavily on metadata and platform-level identification of AI-generated content, similar to Meta's practices on Facebook and Instagram. However, this solution faces limitations as not all platforms consistently display or transfer such metadata, potentially leaving viewers unable to distinguish between natural and AI-modified images during casual browsing. Google has indicated plans to expand AI image flagging to Search later this year, though questions remain about the effectiveness of metadata-based disclosure in preventing potential misuse or misunderstanding of synthetic content.
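To make that limitation concrete: the disclosure lives in the file's embedded metadata, so checking for it means inspecting the file rather than looking at the picture. Below is a rough Python sketch of such a check. It assumes the flag is recorded in the image's XMP packet as an IPTC DigitalSourceType value mentioning algorithmic media (an assumption based on the IPTC vocabulary, not Google's documentation), and it does a crude byte scan rather than proper XMP parsing; strip or re-encode the file and the signal is gone, which is exactly the weakness critics point to.

```python
# Rough sketch: look for an IPTC "algorithmic media" marker in an
# image's embedded XMP metadata. Assumes the disclosure is stored as
# a DigitalSourceType value containing "TrainedAlgorithmicMedia"
# (an assumption; the exact value Google writes may differ).
from pathlib import Path

def looks_ai_edited(path: str) -> bool:
    data = Path(path).read_bytes()
    # XMP is plain XML embedded in the file, so a byte search gives a
    # rough signal; a real tool would parse the XMP packet properly.
    return b"TrainedAlgorithmicMedia" in data

print(looks_ai_edited("photo.jpg"))  # "photo.jpg" is a placeholder path
```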
UK Unveils Ambitious Data Reform Bill with Focus on Economic Growth and Privacy Balance
The UK Department for Science, Innovation and Technology has introduced the Data (Use and Access) Bill, comprehensive legislation aimed at modernizing data protection rules while maintaining essential privacy safeguards. The new bill, projected to generate £10 billion in economic benefits through public sector efficiencies, represents a significant shift from previous post-Brexit reform proposals.
The legislation introduces several key measures, including streamlined data sharing across healthcare and law enforcement, expanded smart data schemes, and mandatory data retention requirements for online platforms in cases involving the deaths of minors. Notably, it includes provisions for online safety researchers to access platform data, aligning with the EU's Digital Services Act standards, while maintaining compatibility with EU data protection requirements ahead of the 2025 adequacy decision review.
However, the bill has drawn mixed reactions from experts and privacy advocates. While some praise its more measured approach to GDPR reform compared to previous proposals, organizations like the Open Rights Group warn about potential weaknesses in automated decision-making protections and ICO independence. The legislation also proposes significant changes to privacy notices and cookie consent regulations, including allowing first-party cookies for analytics without user consent and increasing fines for privacy violations to match GDPR levels.
Google Open Sources AI Text Watermarking Technology SynthID Text
Google has made its AI text watermarking technology, SynthID Text, generally available to developers and businesses through Hugging Face and the company's Responsible GenAI Toolkit. The system works by embedding detectable patterns in AI-generated text without compromising content quality or generation speed, marking a significant step toward transparency in AI content creation.
The technology functions by modifying the probability distribution of tokens (the word and sub-word units a language model generates) during the text generation process, creating a unique watermark pattern that can be detected even if the text is cropped, paraphrased, or modified. However, Google acknowledges certain limitations, particularly with short texts, translated content, and factual responses, where token distribution adjustments could affect accuracy.
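For developers, turning the watermark on looks like a small addition to an ordinary Hugging Face transformers generation call. Here is a minimal sketch, assuming a transformers release that includes SynthID support; the model choice, key values, and n-gram length are illustrative placeholders rather than recommended settings.

```python
# Minimal sketch: watermarked generation with SynthID Text via
# Hugging Face transformers (assumes a release that ships
# SynthIDTextWatermarkingConfig; all values below are illustrative).
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b"  # any causal LM from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is keyed: only a detector holding the same keys can
# later recognize the embedded pattern.
watermark_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,  # how much preceding context seeds each token's bias
)

inputs = tokenizer("Write a short note about watermarking.",
                   return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermark_config,
    do_sample=True,  # the watermark nudges sampling, so sampling must be on
    max_new_tokens=100,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Because the keys act as a shared secret, only parties holding the matching key set can detect the pattern, which is one reason the technology's value hinges on coordinated adoption rather than on the technique alone.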
This release comes at a critical time as synthetic content proliferation accelerates, with estimates suggesting that 90% of online content could be AI-generated by 2026. While China has already mandated AI content watermarking and California is considering similar legislation, the technology's effectiveness will largely depend on widespread adoption across the industry. Currently, other major players like OpenAI are also developing watermarking solutions, though implementation timelines vary.
In this fast-paced, ever-changing world, it can be challenging to stay updated with the latest legal news in the AI and tech sectors. I hope you found this digest helpful and were able to free up some time for the real joys of life.
Don't forget to subscribe to receive the weekly digest next Friday.
Anita