The first AI copyright win is here — but it’s limited in scope
A federal district court judge in Delaware issued the first major ruling on whether using copyrighted materials to train artificial intelligence systems constitutes copyright infringement.
On Feb. 11, Judge Stephanos Bibas granted summary judgment to Thomson Reuters, which makes the legal research service Westlaw, against a company named Ross Intelligence. The judge found that Ross infringed Reuters’ copyrights by using Westlaw headnotes — essentially case summaries — to train its own legal research AI.
There are fair uses of copyrighted materials under federal law, but they generally need to be “transformative” in nature. Evidently, Ross flew too close to the sun in using Westlaw summaries to train what could be a commercial rival.
For Matthew Sag, an Emory University law professor who studies artificial intelligence and machine learning, the ruling came as something of a surprise. Sag criticized the judge’s opinion for its lack of explanation and said he believed its reach would be limited.
“It seems to me that the most important factor in the court’s decision was the fact that the defendant trained its model on Westlaw’s data in order to produce something that would perform almost exactly the same function as Westlaw,” Sag said. “That makes it quite different to a generative AI model trained on half of the Internet that has a broad range of capabilities and is not designed to specifically replace any one input or any one source of inputs.”
Robert Brauneis, a law professor and co-director of the Intellectual Property Program at the George Washington University Law School, said the ruling weakens the fair-use argument for generative AI developers, who could be seen as competing directly with the artists they’re allegedly copying. “Generative AI developers are using the copyrighted works of writers, artists, and musicians to build a service that would compete with those artists, writers, and musicians,” he said.
Major litigation over generative AI is still working its way through the courts — notably in the New York Times Company’s lawsuit against OpenAI, and class action suits by artists and writers against nearly every major AI firm. “These cases are still relatively early, and there is a lot of civil procedure to get through — fights over discovery, class action certification, venue — before we get to interesting questions of copyright law,” Sag said. “We are still a long way from a definitive judicial resolution of the basic copyright issue.”
An image of a firefly from Adobe Firefly.
Adobe’s Firefly is impressive and promises it’s copyright-safe
A floppy-eared, brown-eyed beagle turns her head. A sunbeam shines through the driver’s side window. The dog is outfitted in the finest wide-brimmed sun hat, which fits perfectly atop her little head.
If this hat-wearing dog weren’t a clue, I’m describing an AI video. There are other hints too: If you look closely, the dog is sitting snugly between two black-leather seats, which are way too close together. Outside, cornfields and mountains start to blur, and the road contorts behind the car.
Despite these problems, this is still one of the better text-to-video generation models I’ve encountered. And it’s not from a major AI startup, but rather from Adobe, the company behind Photoshop.
Adobe first released its AI model, Firefly, for image generation in March 2023 and followed it up this month with a video generator, which is still in beta. (You can try out the program for free, but we paid $10 after quickly hitting a limit on how many videos we could generate.)
Firefly’s selling point isn’t just that it makes high-quality video clips or that it integrates with the rest of the Adobe Creative Cloud. Adobe also promises that its AI tools are all extremely copyright-safe. “As part of Adobe’s effort to design Firefly to be commercially safe, we are training our initial commercial Firefly model on Adobe Stock images, openly licensed content, and public domain content where copyright has expired,” the company writes on its website.
In the past, Adobe has also offered to pay the legal bills of any enterprise user of Firefly’s image model that is sued for copyright violations — “as a proof point that we stand behind the commercial safety and readiness of these features,” Adobe’s Claude Alexandre said in 2023. (It’s unclear if any users have taken the company up on the offer.)
eMarketer’s Gadjo Sevilla said that Adobe has a clear selling point amid a fresh crop of video tools from OpenAI, ByteDance, and Luma: its copyright promises. “Major brands like Dentsu, Gatorade, and Stagwell are already testing Firefly, signaling wider enterprise adoption,” Sevilla said. “Making IP-safe AI available in industry-standard tools can help Firefly, and by extension Adobe, gain widespread adoption in copyright-friendly AI image generation.”
But Adobe’s track record isn’t spotless. The company issued a mea culpa last year after AI-generated images from rival Midjourney were found in Firefly’s training set, according to Bloomberg; the images had likely been submitted to the Adobe Stock program and slipped past content moderation guardrails.
Firefly’s video model is still new, so public testing will show how well it’s received and what exactly users get it to spit out. For our trial, we asked for “an extreme close-up of a flower” and selected settings for an aerial shot and an extreme close-up.
We also asked Firefly to show us President Donald Trump waving to a crowd. It wouldn’t show us Trump because of content rules around politics but gave us some other guy.
And, of course, we asked to see if Mickey Mouse — who is at least partly in the public domain — could ride a bicycle. At least on that front, it’s copyright-safe. You’re welcome, Disney.
When compared to OpenAI’s Sora video generator, Firefly takes longer (about 30 seconds vs. 15 for Sora) and is not quite as polished. But if I get into trouble using Adobe’s products, well, at least a quick call to their general counsel’s office should solve my problems.
Security cameras representing surveillance.
OpenAI digs up a Chinese surveillance tool
On Friday, OpenAI announced that it had uncovered a Chinese AI surveillance tool. The tool, which OpenAI called Peer Review, was developed to gather real-time data on anti-Chinese posts on social media.
The program wasn’t built on OpenAI software but rather on Meta’s open-source Llama model; OpenAI discovered it because the developers used the company’s tools to “debug” their code, which tripped its sensors.
OpenAI also found another project, nicknamed Sponsored Discontent, that used OpenAI tech to generate English-language social media posts that criticized Chinese dissidents. This group was also translating its messages into Spanish and distributing them across social media platforms targeting people in Latin America with messages critical of the United States. Lastly, OpenAI’s research team said it found a Cambodian “pig butchering” operation, a type of romance scam targeting vulnerable men and getting them to invest significant amounts of money in various schemes.
With the federal government instituting cuts to AI safety, law enforcement, and national security efforts, the onus for discovering such AI scams and operations will increasingly fall on private companies like OpenAI to self-regulate and self-report what they find.
Then Republican presidential candidate Donald Trump gestures and declares "You're fired!" at a rally in New Hampshire in 2015.
Trump plans firings at NIST, tasked with overseeing AI
Sweeping cuts are expected to come to the US National Institute of Standards and Technology, or NIST, the federal lab housed within the Department of Commerce. NIST oversees, among other things, chips and artificial intelligence technology. The Trump administration is reportedly preparing to terminate as many as 500 of NIST’s probationary employees.
It’s unclear when the firings will hit, but they come mere weeks after Trump repealed Biden’s sweeping 2023 executive order on AI. In that order, the Biden administration had entrusted NIST with managing semiconductor manufacturing funds and establishing safety standards for AI development and use.
It also oversees the US Artificial Intelligence Safety Institute, the initiative in charge of testing advanced AI systems for safety and security, as well as setting standards for the safe development of AI. Since the institute is still nascent — established in 2023 — it could be especially vulnerable to across-the-board cuts to probationary staff.
Capitol Hill, Washington, D.C.
Silicon Valley and Washington push back against Europe
On Feb. 11, US Vice President JD Vance told attendees at the AI Action Summit in Paris, France, that Europe should pursue regulations that don’t “strangle” the AI industry.
That broadside came after Meta and Google publicly criticized Europe’s new code of practice for general-purpose AI models, part of the EU’s AI Act, earlier this month. Meta’s Joel Kaplan said the rules impose “unworkable and technically infeasible requirements” on developers, while Google’s Kent Walker called them a “step in the wrong direction.”
The criticism from Washington and Silicon Valley may be having an impact. The European Commission recently withdrew its planned AI Liability Directive, which was designed to make tech companies pay for harm caused by their AI systems. European Commission official Henna Virkkunen said that the Commission is softening its rules not because of pressure from US officials, but rather to spur innovation and investment in Europe.
But these days, Washington and Silicon Valley are often speaking with the same voice.
Assorted medication tablets and capsules
Hard Numbers: Google’s superbug breakthrough, End-to-end solutions, OpenAI’s big figure, Humane sells, Apple builds in Texas, AI bankers?
3.3 billion: A startup called Together AI, which gives people access to AI computing, raised $305 million in a new funding round led by General Catalyst and Saudi Arabia’s Prosperity7 Ventures on Thursday — now it’s valued at $3.3 billion. The company bills itself as an “end-to-end” AI provider, allowing users to access open-source AI models and computing power from data centers.
400 million: OpenAI disclosed Thursday it has 400 million weekly active users as of February, according to COO Brad Lightcap. That’s up a whopping 33% from 300 million users in December. It’s unclear what has caused this jump, though the ChatGPT maker has added more government, academic, and enterprise clients to its roster in recent months.
116 million: The ill-fated AI wearable company Humane, which made an AI-powered pin, shut down and sold its assets to HP for $116 million last Tuesday. The much-hyped pins cost $699 but were panned by critics.
500 billion: Apple said Monday that it will invest $500 billion in the US over the next five years, including on an AI server factory in Texas. The iPhone maker will build a 250,000-square-foot facility in Houston, scheduled to open in 2026, to support its Apple Intelligence AI platform.
4,000: DBS, the largest bank in Singapore, said Tuesday that it expects to cut 4,000 jobs over the next three years as artificial intelligence grows more powerful. The bank clarified that the cuts would affect temporary and contract workers but didn’t say which roles would be replaced by AI.
France puts the AI in laissez-faire
France positioned itself as a global leader in artificial intelligence at last week’s AI Action Summit in Paris, but the gathering revealed a country more focused on attracting investment than leading Europe's approach to artificial intelligence regulation.
The summit, which drew leaders and technology executives from around the world on Feb. 10-11, showcased France’s shift away from Europe’s traditionally strict tech regulation. French President Emmanuel Macron announced $113 billion in domestic AI investment while calling for simpler rules and faster development — a stark contrast to the EU’s landmark AI Act, which is gradually taking effect across the continent.
Esprit d’innovation
This pivot toward a business-friendly approach has been building since late 2023, when France tried unsuccessfully to water down provisions in the EU’s AI Act to help domestic firms like Mistral AI, the $6 billion Paris-based startup behind the chatbot Le Chat.
“France sees an opportunity to improve its sluggish economy via the development and promotion of domestic AI services and products,” said Mark Scott, senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “Where France does stand apart from others is its lip service to the need for some AI rules, but only in ways that, inevitably, support French companies to compete on the global stage.”
Nuclear power play
France does have unique advantages in its AI: plentiful nuclear power, tons of foreign investment, and established research centers from Silicon Valley tech giants Alphabet and Meta. The country plans to dedicate up to 10 gigawatts of nuclear power to a domestic AI computing facility by 2030 and struck deals this month with both the United Arab Emirates and the Canadian energy company Brookfield.
About 70% of France’s electricity comes from nuclear — a clean energy source that’s become critical to the long-term vision of AI companies like Amazon, Google, and Microsoft.
France vs. the EU
But critics say France’s self-promotion undermines broader European efforts. “While the previous European Commission focused on oversight and regulation, the new cohort appears to follow an entirely different strategy,” said Mia Hoffman, a research fellow at Georgetown University’s Center for Security and Emerging Technology. She warned that EU leaders under the second Ursula von der Leyen-led Commission, which began in September 2024, are “buying into the regulation vs. innovation narrative that dominates technology policy debates in the US.”
The summit itself reflected these tensions. “It looked more like a self-promotion campaign by France to attract talent, infrastructure, and investments, rather than a high-level international summit,” said Jessica Galissaire of the French think tank Renaissance Numérique. She argued that AI leadership “should be an objective for the EU and not member states taken individually.”
This France-first approach marks a significant departure from a more united European tech policy, suggesting France may be more interested in competing with the US and China as a player on the world stage than in strengthening Europe’s collective position in AI development.
Hard Numbers: AI-generated bank runs, Europe wants to supercharge innovation, Do you trust AI?, Dell’s big deal, South Korea’s GPU hoard
51.6 billion: Europe will invest $51.6 billion in artificial intelligence, European Commission President Ursula von der Leyen said last week. That’ll add to the $157 billion already committed by Europe’s private sector under the AI Champions Initiative launched at the AI Action Summit in Paris last week. The goal is to “supercharge” innovation across the continent, she said.
32: Just 32% of Americans say they trust artificial intelligence, according to the annual Edelman Trust Barometer published by the public relations firm Edelman on Thursday. By contrast, 72% of people in China said they trust AI. Meanwhile, only 44% of Americans said they are comfortable with businesses using AI.
5 billion: Dell shares rose 4% on Friday after press reports indicated it was closing a $5 billion deal to sell AI servers to Elon Musk’s xAI. Dell stock has soared 39% over the past year on increased demand for AI.
10,000: South Korea said Monday it will buy 10,000 graphics processors for its national computing center. The country is one of the few not restricted from buying these chips from American companies. It’s unclear whom South Korea will buy from, but Nvidia dominates the market, with AMD and Intel far behind.