🚨AI&Tech Legal Digest || December 20, 2024

Hi there! 👋 Thanks for stopping by. Welcome to the AI&Tech Legal Digest 🙌

Here, I bring you the latest key legal news and trends in the AI and tech world. Stay updated with expert insights, state approaches, regulations, key recommendations, safeguards, and practical advice in the tech and AI field by clicking the "subscribe" button above ✅ Enjoy your read!        

House AI Task Force Charts Path Forward with New Report, Avoiding Immediate Legislative Action

The House artificial intelligence task force has released a comprehensive 253-page report outlining Congress's approach to AI development and safeguards, significantly expanding on the Senate's earlier 31-page roadmap. The bipartisan task force, led by Rep. Jay Obernolte (R-California), covers AI's intersection with labor, privacy, and national security in the report, but notably declines to endorse any specific legislation for the current Congress.

The task force signals a preference for addressing AI regulation sector by sector, rather than pursuing comprehensive legislation like the EU's Artificial Intelligence Act. Key areas identified for future attention include nonconsensual deepfakes, digital identity verification, data privacy protections, and civil rights safeguards. The group has begun coordination with the incoming administration and plans meetings with AI and crypto czar David Sacks.

Despite its extensive recommendations, the task force intentionally avoided endorsing specific bills after disagreements emerged over legislative language. Rep. Obernolte described the report as the "beginning of the conversation," with implementation left to future Congresses. Questions remain about whether the task force will continue to exist or if congressional committees will lead future legislative efforts.

Read more


UK ICO Criticizes Google's New Ad Tracking Policy, Calling Digital Fingerprinting 'Irresponsible'

Britain's Information Commissioner's Office (ICO) has strongly condemned Google's decision to allow advertisers to use digital fingerprinting for tracking users starting mid-February. The ICO's executive director for regulatory risk, Stephen Almond, labeled the move "irresponsible," highlighting concerns that this tracking method will be significantly harder for users to detect and block compared to traditional cookies.

The shift marks a notable reversal from Google's 2019 position, when the company opposed fingerprinting, stating it "subverts user choice and is wrong." Google now cites advances in privacy-enhancing technologies and the need to better serve advertisers on connected TV platforms as justification for the change. The technique combines various device signals into a unique identifier, yielding a form of tracking more persistent than cookies.
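
For readers curious about the mechanics, the sketch below shows the general idea in Python: hashing a set of device signals into a stable identifier. This is a minimal illustration using assumed signal names, not Google's actual implementation, whose signals and logic are not public.

```python
import hashlib
import json

def device_fingerprint(signals: dict) -> str:
    """Derive a stable identifier by hashing device/browser signals.

    Illustrative only: real fingerprinting combines many more signals
    (canvas rendering, audio stack, installed fonts, etc.).
    """
    # Serialize deterministically so the same device always hashes the same.
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical signals a tracker might read from a browser or connected TV.
fingerprint = device_fingerprint({
    "user_agent": "Mozilla/5.0 (SmartTV; Linux)",
    "screen": "3840x2160",
    "timezone": "Europe/London",
    "language": "en-GB",
})
print(fingerprint)
```

Because the identifier is recomputed from the device itself on each visit, clearing cookies does not reset it, which is the crux of the ICO's concern about detectability and user control.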

The ICO emphasizes that fingerprinting must comply with data protection laws, requiring transparent deployment and user consent. The regulator warns that the practice undermines consumer control over data collection and contradicts expectations for a privacy-driven internet. Google has responded by committing to further discussions with the ICO and maintaining that users will retain choice over personalized advertising.

Read more


UK Creative Industries Unite Against AI Copyright Exemption Plan, Demand Protection of Existing Rights

A broad coalition of British creative industries, including major publishers, musicians, and media organizations, has firmly opposed the Labour government's proposal to create a copyright exemption for AI training. The Creative Rights in AI Coalition (Crac), representing thousands of creators, rejected Tuesday's ministerial plan that would allow AI companies to train on published works unless rights holders explicitly opt out.

Technology Minister Chris Bryant defended the proposal in Parliament, suggesting that strict permission requirements could disadvantage British AI developers and impact the broader economy. However, the creative sector, supported by high-profile artists like Paul McCartney and Kate Bush, insists that AI companies should proactively seek permissions and establish licensing agreements for using copyrighted materials.

The debate intensified as Baroness Kidron likened the government's opt-out approach to asking shopkeepers to "opt out of shoplifters," while industry representatives argue for enforcing existing copyright laws rather than creating new exceptions. The proposal, now subject to a 10-week consultation, has drawn criticism from various quarters, including the Conservative chair of the Commons culture committee, who accused the government of having "fully drunk the Kool-Aid on AI."

Read more


German Regulator Orders World to Delete Biometric Data, Citing Privacy Violations

The Bavarian State Office for Data Protection Supervision (BayLDA) has concluded that World, Sam Altman's biometric identification project, violates EU data protection rules. Following a months-long investigation, the German watchdog ordered the company to initiate GDPR-compliant data deletion procedures, specifically targeting iris scan data collected through its 'Orb' scanning device.

World, formerly known as Worldcoin, has appealed the decision while seeking clarity on whether its Privacy Enhancing Technologies (PETs) meet EU standards for data anonymization. The company's chief privacy officer, Damien Kieran, argues that their current system no longer stores personal iris codes directly, instead using a cryptographic protocol that distributes coded data across third-party databases including universities and private entities.
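
World has not published the full details of this protocol, but distributing coded data across independent custodians is typically built on secret sharing. The Python sketch below shows a generic n-of-n XOR scheme, offered as an assumption about the kind of technique involved rather than World's actual system:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(data: bytes, n: int) -> list[bytes]:
    """n-of-n XOR secret sharing: each share alone is random noise;
    all n shares must be combined to recover the data."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    # Choose the final share so that XOR-ing all n shares yields the data.
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def combine_shares(shares: list[bytes]) -> bytes:
    """Recover the original data by XOR-ing every share together."""
    return reduce(xor_bytes, shares)

# Hypothetical example: an iris code split across three independent custodians.
iris_code = b"example-iris-template-bytes"
custodian_shares = split_secret(iris_code, 3)
assert combine_shares(custodian_shares) == iris_code
```

The point of such a design is that any single database holds only noise; reconstructing an iris code would require collusion among all custodians.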

The regulatory action adds to World's challenges in Europe, where Spain and Portugal have already implemented temporary bans over privacy concerns. Despite these setbacks, the company continues to operate in multiple countries including Argentina, Germany, Japan, and the US, with plans to expand into additional European markets such as Ireland, the UK, France, and Italy.

Read more


Meta Hit with €251M Fine Over Data Breach Affecting 29M Users Worldwide

Ireland's Data Protection Commission (DPC) has imposed a €251 million fine on Meta Platforms Ireland Limited following a significant data breach reported in 2018. The breach exposed personal information including names, email addresses, phone numbers, and timeline posts of approximately 29 million users globally, with 3 million affected users located in the EU and EEA region.

The watchdog determined that Meta violated GDPR regulations by failing to properly document the breach incidents and remedial actions taken. Additionally, the company was found non-compliant with obligations to ensure that only necessary personal data was processed by default. The breach originated from unauthorized third parties exploiting user tokens on Facebook, though Meta claims to have addressed the issue immediately upon discovery.

This penalty adds to Meta's mounting regulatory challenges in Europe, following a €91 million fine in September over password storage practices and a record €1.2 billion fine last year regarding US data transfers. Meta has announced its intention to appeal the decision, while maintaining that it took prompt action to fix the problem and proactively informed affected users and authorities.

Read more


EU Launches Investigation into TikTok Over Romanian Election Interference Claims

The European Commission has initiated a probe into TikTok under the Digital Services Act, examining whether the platform failed to prevent manipulation of its recommendation system and to properly label political content during Romania's recent presidential election. The investigation follows the unprecedented voiding of election results after pro-Russia candidate Calin Georgescu, who gained prominence through TikTok, secured a first-round victory.

Romanian security agencies attributed the incident to a Russian hybrid attack, noting TikTok's failure to properly label election-related content. The platform has since revealed it disrupted influence networks, including over 4,000 Turkish-operated accounts that promoted Georgescu and the nationalist Alliance for the Union of Romanians party. The aftermath led to police raids, the detention of a cryptocurrency-holding businessman, and the interception of armed mercenaries allegedly planning unrest in Bucharest.

This investigation adds to TikTok's mounting regulatory challenges, including a separate EU probe into the protection of minors and addictive design, plus an impending US ban unless ByteDance sells the platform. Under the Digital Services Act, TikTok could face fines of up to 6% of its global annual revenue if found in breach. The company maintains it is cooperating with authorities and has updated its Election Center to include official Electoral Commission information.

Read more


Turkish Regulator Hits Google with $75M Fine Over Ad Tech Market Dominance

Turkey's antitrust authority has imposed a 2.61 billion lira ($75 million) fine on Google for leveraging its market position to unfairly advantage its own supply-side platform in digital advertising. The ruling affects multiple Google entities, including Google International LLC, Google LLC, Google Ireland Ltd., and parent company Alphabet Inc.

The watchdog has given Google a six-month deadline to ensure third-party supply-side platforms (SSPs) receive equal treatment compared to its own automated ad space sales technology. Non-compliance will trigger additional daily penalties. This marks Google's second major penalty in Turkey, following a 482 million lira fine in June over its hotel search service.

The decision aligns with growing global scrutiny of Google's market practices, coming after a US court ruled Google's search engine a monopoly and amid European regulators' investigation into its advertising partnership with Meta. Google retains the right to appeal the Turkish authority's decision.

Read more


Watchdogs Order Worldcoin to Delete All Iris Scan Data

Spanish data protection authority AEPD has ordered Sam Altman's Worldcoin project to delete all iris scanning data collected since its inception, following a similar directive from Germany's Bavarian data protection watchdog (BayLDA). Both regulators determined the biometric data collection violates the EU's General Data Protection Regulation (GDPR).

The enforcement action reinforces Spain's earlier stance, where its High Court upheld a temporary ban on the iris-scanning venture in March. World (formerly Worldcoin), which maintains its European headquarters in Erlangen, Bavaria, had launched the initiative in 2019 with the aim of creating a global identity system.

The company offers cryptocurrency and digital IDs in exchange for iris scans, a practice that has sparked privacy concerns and regulatory scrutiny across multiple European jurisdictions.

Read more


Netherlands Expands Investment Screening Law to Include AI and Biotech

The Dutch government has announced plans to expand its investment screening law to cover additional technologies including artificial intelligence, biotechnology, nanotechnology, and advanced medical nuclear technologies. Economy Minister Dirk Beljaarts cited deteriorating international security and the threat of hybrid attacks as key reasons for the expansion, which aims to protect Dutch innovations and economic interests.

The amendment, expected to be implemented in the second half of 2025, builds upon the existing investment screening law introduced last year. Under the current framework, potential acquisitions of vital Dutch infrastructure, real estate, or technology must undergo review by the Investment Review Office for security implications, with mandatory waiting periods of eight weeks to six months.

This expansion follows the Netherlands' recent implementation of restrictions on semiconductor technology exports to China, introduced under U.S. pressure. The law requires proposed acquisitions to be reported to authorities and held in standstill while security implications are assessed.

Read more


Dutch Privacy Watchdog Issues €4.75M Fine to Netflix Over Data Transparency

The Dutch Data Protection Authority (DPA) has fined Netflix 4.75 million euros ($4.98 million) for failing to adequately inform customers about personal data usage between 2018 and 2020. The investigation, launched in 2019, revealed that Netflix's privacy statement lacked clear explanations about data collection practices and failed to provide sufficient information to customers who inquired about their personal data collection.

Netflix has contested the fine, which cites violations of the General Data Protection Regulation (GDPR). The company states it has cooperated with the Dutch authority throughout the five-year investigation period and has already implemented changes to enhance privacy information transparency.

The streaming service has since updated its privacy statement and improved its information provision practices. However, despite these remedial actions, Netflix maintains its objection to the regulatory decision.

Read more


Meta Reaches A$50M Settlement in Australian Cambridge Analytica Privacy Case

Meta Platforms has agreed to pay A$50 million ($31.85 million) to settle Australia's privacy lawsuit over the Cambridge Analytica scandal, marking the largest privacy-related settlement in Australian history. The case, initiated by the Office of the Australian Information Commissioner in 2020, alleged that 311,127 Australian Facebook users' personal data was exposed to potential disclosure to Cambridge Analytica.

The settlement concludes a lengthy legal battle, including a March 2023 High Court decision that allowed the privacy watchdog to continue its prosecution. The case centered on the "This is Your Digital Life" personality quiz app, part of the broader Cambridge Analytica scandal in which millions of Facebook users' data was collected without permission for political advertising.

Meta has settled the lawsuit on a no-admission basis, following similar regulatory actions in other jurisdictions. The British consulting firm Cambridge Analytica had previously used the collected data for political purposes, including Donald Trump's campaign and Brexit advocacy in the UK. The case had already resulted in fines from US and UK regulators in 2019.

Read more


EU Privacy Board Issues Key Guidance on AI Development Under GDPR

The European Data Protection Board (EDPB) has released a critical opinion addressing major compliance questions for AI model development under EU privacy law. The guidance tackles three key areas: AI model anonymity, the use of legitimate interests as a legal basis for data processing, and the potential for lawful deployment of models trained on unlawfully processed data.

The opinion emphasizes case-by-case assessment rather than blanket rules, particularly regarding model anonymity. The EDPB indicates that a model may be treated as anonymous only if it is "very unlikely" that individuals can be identified from its training data, pointing to technical measures such as data minimization and privacy-preserving techniques. On legitimate interests, the Board indicates this could be a viable legal basis for AI development without individual consent, subject to a strict three-step assessment.
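
As one concrete example of the kind of measure the Board has in mind, the toy Python pass below strips direct identifiers from text before it enters a training corpus. This is an illustrative assumption: production pipelines rely on trained PII-detection models rather than simple regexes.

```python
import re

# Toy patterns for direct identifiers; production systems use trained
# PII-detection models, not regexes alone.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def minimize(record: str) -> str:
    """Replace direct identifiers with placeholder tokens before the
    record enters an AI training corpus."""
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

print(minimize("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

Minimizing identifiers at ingestion is one way a developer could argue that identifying individuals from the resulting model is "very unlikely."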

Ireland's Data Protection Commission, set to lead GDPR oversight of OpenAI, welcomes the opinion as enabling "proactive, effective and consistent regulation" across the EU. This guidance comes amid ongoing investigations into AI platforms like ChatGPT, which face multiple GDPR complaints across Europe, with potential penalties of up to 4% of global annual turnover for non-compliance.

Read more


UK Government Proposes Opt-Out System for AI Training on Copyrighted Works

The UK government has launched a 10-week consultation on implementing an opt-out copyright regime for AI training, requiring rights holders to actively prevent their intellectual property from being used as AI training data. The proposal comes amid growing tensions between creative industries and AI companies over the use of copyrighted content in training generative AI models.

Under the proposed framework, rights holders would need to explicitly reserve their rights to control and be compensated for the use of their work in AI training. The government emphasizes that this approach would require increased transparency from AI developers about their training data sources and usage, along with the development of technical systems allowing creators to exercise their rights individually or collectively.

The consultation, running until February 25, 2025, aims to balance support for both creative industries and AI development. While the government frames this as a balanced approach, it acknowledges the need for practical and technical solutions to protect creators' interests. The proposal emerges as AI companies face increasing lawsuits over unlicensed IP use, while simultaneously striking deals to license certain types of content for training.

Read more


Google Updates Policy on AI Use in High-Risk Decision-Making

Google has revised its Generative AI Prohibited Use Policy to explicitly permit the use of its AI tools for automated decisions in high-risk domains, provided human supervision is maintained. The updated terms clarify that customers can deploy Google's AI in sectors like healthcare, employment, housing, insurance, and social welfare, marking a shift from what previously appeared to be a blanket prohibition.

This position contrasts with Google's major competitors' approaches. OpenAI maintains strict prohibitions on using its services for automated decisions in credit, employment, housing, and other sensitive areas, while Anthropic requires supervision by qualified professionals and mandatory disclosure of AI use in high-risk domains.

The policy update comes amid increasing regulatory scrutiny of AI-driven automated decision-making. The EU's AI Act imposes strict oversight on high-risk AI systems, while U.S. jurisdictions like Colorado and New York City have implemented specific requirements for AI transparency and bias auditing in sensitive applications. Critics and studies continue to highlight concerns about AI perpetuating historical discrimination in automated decision processes.

Read more


YouTube Launches AI Training Opt-In Feature for Content Creators

YouTube has introduced a new feature allowing creators to specifically authorize third-party companies to use their content for AI model training. Through the YouTube Studio dashboard, creators can now select from 18 major tech companies including OpenAI, Microsoft, Meta, and Apple, or choose to permit all third-party companies to train on their content.

The platform emphasizes that the default setting prohibits third-party AI training, requiring creators to actively opt in to share their content. While Google will continue training its own AI models on YouTube content under existing agreements, the new controls specifically target third-party access. This feature is available to creators with administrator access to YouTube Studio Content Manager.

This development follows creators' concerns about unauthorized use of their content for AI training, particularly after the emergence of AI video technologies like OpenAI's Sora. YouTube plans to eventually facilitate direct video downloads for authorized companies, marking this as a first step toward potential creator compensation for AI training data. The announcement coincides with Google DeepMind's release of Veo 2, its new video-generating AI model.

Read more


In this fast-paced, ever-changing world, it can be challenging to stay updated with the latest legal news in the AI and tech sectors. I hope you found this digest helpful and were able to free up some time for the real joys of life.

Don't forget to subscribe to receive the weekly digest next Friday.

Anita
