🚨AI&Tech Legal Digest - September 6, 2024
AI&Tech Legal Digest by Anita Yaryna

Hi there! 👋 Thanks for stopping by. Welcome to the AI&Tech Legal Digest 🙌

Here, I bring you the latest key legal news and trends in the AI and tech world. Stay updated with expert insights, state approaches, regulations, key recommendations, safeguards, and practical advice in the tech and AI field by clicking the "subscribe" button above ✅ Enjoy your read!        

Historic AI Treaty Set to Be Signed by US, UK, and EU Nations

The Council of Europe has announced that the first legally binding international AI treaty will be open for signing on Thursday. This landmark agreement, known as the AI Convention, will bring together European Union members, the United States, and Britain in a collective effort to address AI risks while promoting responsible innovation. Adopted in May after negotiations involving 57 countries, the treaty focuses on protecting human rights in the context of AI systems.

While distinct from the EU's recently enacted AI Act, this convention represents a significant step towards global AI governance. British Justice Minister Shabana Mahmood emphasized its importance in safeguarding fundamental values like human rights and the rule of law in the face of advancing technologies. The treaty allows signatories flexibility in implementing its provisions through legislative, administrative, or other measures.

However, some experts, including Francesca Fanucci from the European Center for Not-for-Profit Law, have raised concerns about the treaty's broad language and potential enforceability issues. As the AI landscape continues to evolve rapidly, this convention marks a crucial starting point for international cooperation on AI regulation, though its practical impact remains to be seen.


X Secures Partial Victory in California Content Moderation Law Challenge

Elon Musk's social media platform X has secured a partial win in its appeal against California's content moderation transparency law. A three-judge panel of the 9th U.S. Circuit Court of Appeals in San Francisco overturned a lower court's decision, partially blocking the enforcement of a law requiring social media companies to publicly disclose their policies on combating disinformation, harassment, hate speech, and extremism.

The appeals court ruled that the law's requirements were "more extensive than necessary" to achieve the state's goal of transparency in content moderation practices. This decision marks a crucial development in the ongoing debate over states' authority to regulate social media companies and their content moderation policies. The case now returns to the lower court to determine if the content moderation provisions can be separated from other aspects of the law.

This ruling highlights the complex legal landscape surrounding social media regulation and free speech concerns, echoing similar challenges faced by content moderation laws in Texas and Florida. As tech companies and lawmakers continue to grapple with these issues, this case may set an important precedent for future legislation aimed at increasing transparency in social media operations.


Google's Ad Tech Empire Faces Antitrust Scrutiny in Landmark Trial

A pivotal antitrust trial targeting Google's digital advertising business is set to begin next week, marking the second major legal challenge to the tech giant's market dominance. The U.S. Department of Justice, along with a coalition of states, will argue that Google has illegally monopolized the digital ad market, potentially harming news publishers and advertisers alike.

This case, part of the Biden administration's broader efforts to curb Big Tech's power, follows a recent victory for the DOJ in a separate lawsuit concerning Google's search engine monopoly. The trial will scrutinize Google's less visible but highly lucrative ad technology tools, which contributed to over 75% of the company's $307.4 billion revenue last year.

At stake is the potential breakup of Google's ad tech business, with prosecutors alleging the company controls up to 91% of certain ad tech markets. The trial will also highlight the impact on journalism, with testimony expected from major news organizations. As the digital advertising landscape continues to evolve, this case could redefine the rules of engagement for tech giants in the advertising ecosystem.


Irish Data Regulator Resolves Dispute with X Over AI Data Usage

Ireland's Data Protection Commission (DPC) has announced the termination of legal proceedings against social media platform X, formerly known as Twitter, following a significant agreement on data usage. The dispute, which centered on the platform's use of European Union users' personal data for AI training purposes, has reached a resolution that reinforces data protection principles in the AI development landscape.

X has committed to permanently limiting its use of personal data collected from EU users for AI training and refinement. This agreement effectively extends the temporary measures put in place in August, when the DPC initially sought to restrict X's data processing activities. The resolution marks a crucial development in the ongoing dialogue between tech companies and privacy regulators, particularly in the context of AI development and data protection in the European Union.

This outcome not only highlights the increasing scrutiny of AI training practices but also sets a precedent for how social media platforms may need to approach data usage in AI development moving forward, balancing innovation with stringent EU data protection standards.


EU Watchdog Slaps Clearview AI with $33M Fine for Illegal Image Scraping

Clearview AI, a facial recognition service provider, has been hit with a hefty $33 million fine by the Dutch Data Protection Authority (DPA) for allegedly building an illegal database of 30 billion images scraped from the internet without user consent. This ruling underscores the growing tension between AI-powered facial recognition technologies and stringent European privacy regulations.

The DPA asserts that Clearview's practices violate GDPR rules, which are designed to protect EU citizens' privacy. Despite Clearview's claims of aiding law enforcement and benefiting society, Dutch regulators argue that the company's methods are highly intrusive and unlawful. Clearview, however, contends that it operates outside EU jurisdiction and deems the decision unenforceable.

This case highlights the global challenges in regulating AI and data privacy, potentially setting a precedent for holding tech company executives personally liable for GDPR violations. As AI development races forward, this ruling signals intensifying scrutiny of data practices and could impact how tech companies, including social media giants, approach AI development and data usage in relation to EU regulations.


Stability AI Unveils Groundbreaking Stable Diffusion Model with Ethical Focus

Stability AI has launched Stable Diffusion, a cutting-edge text-to-image AI model, marking a significant milestone in ethical AI development. This public release, following an initial rollout to researchers, incorporates crucial improvements based on beta testing and community feedback, aiming to balance innovation with responsible AI practices.

The model, released under a Creative ML OpenRAIL-M license, allows for both commercial and non-commercial use while emphasizing user responsibility for ethical and legal application. Notably, Stability AI has implemented an AI-based Safety Classifier to filter potentially undesired outputs, with adjustable parameters open to community input.

Stable Diffusion's launch highlights the evolving landscape of AI ethics and accessibility in image generation. As Stability AI plans to release optimized versions and variants, this development could significantly impact the future of creative AI applications, setting a new standard for balancing technological advancement with ethical considerations in the AI industry.


X Agrees to Halt EU Data Usage for Grok AI Training

Elon Musk's X Corp. has conceded to European Union regulators' demands, agreeing to stop processing personal information from EU users to train its AI chatbot, Grok. This development marks a significant victory for data protection in the evolving landscape of AI development and user privacy.

Ireland's Data Protection Commission (DPC) announced that X has committed to permanently deleting EU users' personal data collected from public posts between May 7 and August 1, 2024. This resolution follows the DPC's unprecedented move in August to file a court request to halt X's data processing for AI training, citing potential violations of EU data protection laws.

The case highlights the growing tension between rapid AI advancement and stringent EU privacy regulations. As the first action of its kind by a lead EU agency against an online platform, it sets a precedent for future regulatory approaches to AI development using public data. The DPC's call for an EU-wide discussion on balancing data protection with AI model training underscores the need for consistent, proactive regulation in this rapidly evolving field.


YouTube Unveils AI Detection Tools to Safeguard Creator Content

YouTube has announced the development of groundbreaking AI detection tools aimed at protecting creators' content and likeness from unauthorized AI-generated replications. This significant move expands YouTube's existing Content ID system to include new synthetic-singing identification technology, capable of detecting AI-simulated voices in music. Additionally, the platform is working on tools to identify AI-generated facial simulations, addressing growing concerns about deepfakes and misrepresentation.

In a crucial step towards addressing AI model training issues, YouTube is also exploring ways to give creators more control over how their content is used for AI training on the platform. This initiative responds to longstanding concerns from creators about unauthorized use of their material by major tech companies for AI development.

While specific details about creator compensation for AI-generated content remain undisclosed, YouTube's ongoing collaboration with Universal Music Group signals progress in this area. The platform plans to pilot its expanded Content ID system for synthetic music detection early next year, marking a significant step in balancing AI innovation with creator rights protection in the digital content ecosystem.


In this fast-paced, ever-changing world, it can be challenging to stay updated with the latest legal news in the AI and tech sectors. I hope you found this digest helpful and were able to free up some time for the real joys of life.

Don't forget to subscribe to receive the weekly digest next Friday.

Anita