🚨AI&Tech Legal Digest - August 23, 2024
Hi there! 👋 Thanks for stopping by. Welcome to the AI&Tech Legal Digest 🙌
Here, I bring you the latest key legal news and trends in the AI and tech world. Stay updated with expert insights, state approaches, regulations, key recommendations, safeguards, and practical advice in the tech and AI field by clicking the "subscribe" button above ✅ Enjoy your read!
Data Breach Hits EU Parliament: Privacy Group Files GDPR Complaints
Austrian privacy advocacy group NOYB has filed two complaints with the European Data Protection Supervisor (EDPS) against the European Parliament, alleging GDPR violations following a significant data breach. The incident, revealed in May 2024, compromised sensitive information of approximately 8,000 job applicants for temporary positions, including parliamentary assistants and contractual agents. The breach affected the 'PEOPLE' external application system, exposing a wide range of personal data such as ID cards, passports, and documents revealing sexual orientation.
NOYB's complaints focus on the Parliament's mishandling of information leading to the breach and its refusal to delete data upon request. The origin of the breach, which may have persisted for months, remains unidentified. This incident follows a series of cybersecurity issues in EU institutions, raising concerns about the Parliament's security measures. The EDPS will now investigate the matter, potentially leading to corrective measures or, in case of significant violations, referral to the Court of Justice of the European Union.
The breach underscores the ongoing challenges faced by EU institutions in safeguarding sensitive data, particularly given their high-profile status as potential targets for malicious actors. As Lorea Mendiguren, a data protection lawyer at NOYB, pointed out, the Parliament has a particular obligation to ensure robust security measures. This incident serves as a stark reminder of the critical importance of data protection and cybersecurity in governmental institutions, especially those handling sensitive information of EU citizens and job applicants.
UK Halts App Store Probes: New Digital Competition Powers on the Horizon
The UK's Competition and Markets Authority (CMA) has temporarily closed its investigations into Apple and Google's app store practices without taking action, citing the imminent arrival of new digital competition legislation. Launched in 2021 and 2022 respectively, these probes targeted concerns over restricted consumer choice and higher prices due to stringent app distribution rules. However, the CMA views this closure as strategic, awaiting the implementation of enhanced regulatory powers to address app store concerns more effectively.
Will Hayter, CMA's executive director for digital markets, emphasized the critical importance of a fair and competitive app ecosystem for UK tech businesses and developers. The new legislation, passed in May, will empower the CMA to set specific requirements for companies with strategic market advantages in the digital sphere. This move reflects the UK's recognition that existing competition frameworks are ill-equipped to tackle the unique challenges posed by rapidly evolving digital markets dominated by a few powerful firms.
The CMA's decision also included rejecting Google's proposed changes to its billing system, deeming them insufficient to address competition concerns. As the UK prepares to implement its new digital market regulations, the tech industry awaits potential renewed scrutiny of app store practices under a more robust regulatory framework.
OpenAI Expands Media Partnerships: Condé Nast Joins AI Content Integration Wave
OpenAI has announced a multi-year partnership with Condé Nast, allowing the AI company to incorporate content from renowned publications like Vogue and The New Yorker into its products, including ChatGPT and the SearchGPT prototype. This deal follows similar agreements with other major media outlets such as Time magazine, Financial Times, and Axel Springer, highlighting OpenAI's ongoing efforts to secure quality content for AI model training.
The partnership aims to address the challenges news and digital media outlets face in monetizing content in a tech-dominated landscape. Brad Lightcap, OpenAI's COO, emphasized the company's commitment to maintaining accuracy, integrity, and respect for quality reporting as AI's role in news discovery grows. However, OpenAI still faces legal challenges, with some media organizations, including The New York Times, pursuing copyright infringement lawsuits.
This collaboration not only provides OpenAI with valuable training data but also offers media companies like Condé Nast a new revenue stream in the evolving digital ecosystem. As OpenAI continues to develop its AI-powered search capabilities, these partnerships could play a crucial role in shaping the future of information access and discovery, potentially challenging Google's long-standing dominance in the search market.
Authors Launch Class-Action Suit Against Anthropic Over AI Training Data
Three authors have filed a class-action lawsuit against AI company Anthropic in California federal court, alleging copyright infringement in the training of its chatbot, Claude. Writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson claim Anthropic used pirated versions of their books, along with hundreds of thousands of others, to teach Claude to respond to human prompts.
This legal action joins a growing wave of copyright disputes in the AI sector, with similar suits filed against OpenAI and Meta Platforms. It marks the second lawsuit against Anthropic, following a case brought by music publishers in 2023. The authors assert that Anthropic has "built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books," despite the company's backing from tech giants like Amazon and Google.
The lawsuit seeks unspecified monetary damages and an injunction to prevent Anthropic from further misuse of the authors' work. As AI companies face increasing scrutiny over their training data sources, this case could set important precedents for intellectual property rights in the age of generative AI. Anthropic acknowledges the lawsuit but declines further comment, citing pending litigation.
Dutch Copyright Group Blocks AI Training Dataset, Raising Global IP Concerns
BREIN, a Dutch copyright enforcement organization, has successfully taken down a large language dataset intended for AI model training, marking a significant development in the ongoing debate over intellectual property rights in AI. The dataset, which included unauthorized content from thousands of books, news sites, and Dutch-language subtitles from numerous films and TV series, was removed following a cease and desist order.
This action highlights the growing global tension between AI development and copyright protection. While the extent of the dataset's use by AI companies remains unclear, BREIN's preemptive strike aims to prevent future legal complications. The incident draws parallels to similar cases worldwide, including OpenAI's legal challenges in the U.S. and the Danish Rights Alliance's takedown of the "Books3" dataset.
As the EU prepares to implement its AI Act, requiring companies to disclose their training datasets, this case underscores the increasing scrutiny of AI training data sources. It also signals a potential shift in how AI companies may need to approach data acquisition and usage, balancing innovation with respect for intellectual property rights in the rapidly evolving AI landscape.
X Faces EU-Wide Scrutiny Over AI Training Data Practices
Austrian privacy advocacy group NOYB has filed complaints against X (formerly Twitter) with data protection authorities in nine EU countries, accusing the platform of illegally using users' personal data to train its AI systems without consent. This multi-pronged approach aims to intensify pressure on Ireland's Data Protection Commission (DPC), the lead EU regulator for many major tech companies.
The complaints follow recent developments where X agreed to temporarily halt AI training using EU users' data collected before consent options were available. However, NOYB, led by privacy activist Max Schrems, argues that the DPC's current focus on mitigation measures and X's cooperation doesn't adequately address the fundamental issue of data processing legality under GDPR.
This case highlights the growing tension between rapid AI development and EU privacy laws, echoing similar challenges faced by other tech giants like Meta. As regulatory scrutiny intensifies, the outcome could set important precedents for how social media platforms and AI companies handle user data in the EU, potentially reshaping the landscape of AI development and data privacy in the region.
OpenAI Challenges California's AI Safety Bill, Citing Innovation Concerns
OpenAI has voiced opposition to California's SB 1047, a bill aimed at imposing new safety requirements on AI companies, in a letter to State Senator Scott Wiener. The San Francisco-based AI leader argues that the legislation could hinder innovation and compromise US competitiveness in AI development, echoing concerns raised by other tech industry players.
SB 1047 would require AI companies to implement measures preventing "critical harm" from their large models, including potential misuse to create bioweapons or cause significant financial damage. OpenAI contends that such regulation should be addressed at the federal level, emphasizing the need for a unified approach to maintain America's AI edge against global competitors like China.
The bill has sparked a broader debate within the tech community, with critics, including former House Speaker Nancy Pelosi, arguing it could drive AI companies out of California. Despite recent amendments to address some concerns, opposition remains strong. Supporters, including Senator Wiener, maintain that the bill is a reasonable approach to foreseeable AI risks.
As the California State Assembly prepares to vote on SB 1047 this month, the outcome could significantly impact the future of AI development and regulation in the state, potentially setting precedents for national AI governance strategies.
Google's AI Search Sparks Data Dilemma for Publishers
Google's integration of AI-powered summaries in search results has created a contentious situation for website owners and publishers. The new AI Overviews feature displays concise answers at the top of search pages, raising concerns about traffic loss for content creators. The dilemma: because the summaries draw on the same crawling and indexing that powers ordinary search results, publishers who block Google's crawler to keep their content out of AI Overviews would also disappear from search, while those who allow it risk users getting their answers without ever clicking through to the source.
Google Faces Revived Class Action Lawsuit Over Chrome Data Collection
A federal appeals court has revived a class action lawsuit against Google, alleging unauthorized data collection through its Chrome browser. The suit, originally filed in 2020, claims Google gathered user information without consent, even when Chrome's sync feature was disabled.
This development reopens the debate on tech companies' data collection practices and the effectiveness of their privacy disclosures to users.
Shein Files Countersuit Against Rival Temu Amid Escalating Legal Battle
Fast-fashion giants Shein and Temu are embroiled in a new legal confrontation, with Shein filing a lawsuit against its competitor Temu. This action follows Temu's December lawsuit against Shein, intensifying the rivalry between the two budget e-commerce platforms.
This lawsuit adds another layer to the ongoing legal and competitive tensions between these rapidly growing e-commerce platforms.
In this fast-paced, ever-changing world, it can be challenging to stay updated with the latest legal news in the AI and tech sectors. I hope you found this digest helpful and were able to free up some time for the real joys of life.
Don't forget to subscribe to receive the weekly digest next Friday.
Anita