AI's Reliable Resources
Reliable sources. That’s what people want when they ask an online oracle for advice. Whether it’s a search engine or an AI chatbot, when you ask a computer a question, you expect the answer to be, as the label on the box implies, computed. That is, factual.
AI needs a steady diet of facts and data to give accurate answers, but as we reported in Issue 1 of this newsletter, how (or if) AI companies pay for the raw data they learn from is a contentious issue. Several news organizations have asserted that AI companies should not use their stories as training data.
The push-and-pull continues. Recently, the Financial Times struck a content licensing deal with OpenAI (see story 3 below).
This and other news from the intersection of AI and communications is in this month’s edition of the AI Review. Enjoy!
Note: In this issue, we begin marking stories from pay-to-read publications with a “($)” after the link.
Sign up now to get this newsletter in your inbox every month!
1. The New York Times: Don’t Be Fooled by AI. Katy Perry Didn’t Attend the Met. ($)
While the annual Met Gala always brings with it an air of fantasy and fashion-statement bewilderment, viewers were left more bewildered than usual after photos circulated from the event’s red carpet. AI-generated images proliferated on social media, tricking viewers into thinking Katy Perry and Rihanna stood on the steps of the Metropolitan Museum of Art. While some spotted the fakes, many were fooled. Katy Perry’s own mother texted her daughter one of the AI images.
Takeaway: Many of the largest lifestyle media outlets and brands posted live details of the event. While seemingly none of the social media managers fell victim to the AI images (or if they did, their errors were swiftly deleted), it is a good reminder to all brands to verify any images before you share them.
If AI can create images for your campaigns, will you ever use stock photography again? With generative AI, you can create all-new images from extremely specific prompts, instead of searching a stock library and hoping for a decent match. This dilemma worries photographers who earn income from licensing their images through stock photo sites like Getty Images. The stock photo companies are even offering their own generative AI image creation services.
“At one point I was getting as much as $2,000 for the use of a photo, and that went down to 2 cents,” a stock photographer said about the industry’s transition from film to digital. AI could depress prices even further.
Takeaway: Generative AI could make creative expression accessible to more people and businesses than ever before by allowing the creation of highly specific images at a low cost. But how will artists make a living? And when you truly need a bona fide photographer, will you still be able to find one?
3. The Financial Times: The Financial Times and OpenAI strike content licensing deal ($)
The Financial Times and OpenAI have agreed to allow the business publication’s content to train OpenAI’s artificial intelligence products. According to the announcement, the deal will create a new revenue stream for the outlet while delivering FT reporting and analysis to ChatGPT users as summaries.
Takeaway: While The New York Times’ high-profile suit against OpenAI works its way through the courts, the FT’s arrangement offers a look at what happens when both parties agree to cooperate on AI’s use of reported content. Will fewer readers see original FT content? Possibly. The Financial Times also now has an incentive to ensure its content is friendly to generative AI: a shift from “SEO” (search engine optimization) to “AIO” (AI optimization), perhaps.
4. Big Technology: Elon Musk’s Plan for AI News
The owner of X (formerly Twitter) and its emerging AI, Grok, wants you to chat with the news. Elon Musk’s plan for news on X is to scour social media reactions to stories (on X, of course) and then present AI-generated summaries of the cited and referenced stories based primarily on those reactions, not the news itself. You’ll then be able to interact with topics via chat.
Takeaway: Using social content (which X hosts) instead of copyrighted articles to feed the AI algorithms potentially frees the product from the legal battles over copyrights that face other AI companies. But it’s an unusual way to present the news and will be a new way for people to consume it. Those looking to influence media perception on social platforms may have a challenging job of tracking social sentiment and managing social outreach – if they want to stay ahead of the Grok news-reading bots.
“With the passing of the EU AI act, the scariest thing about AI is now, unequivocally, AI regulation itself,” says AI strategist Kjell Carlsson. The EU AI Act is the world’s first major piece of legislation regarding AI development and deployment. And even though the final rules are not yet determined, companies operating in the EU or serving EU citizens will have to adjust to new transparency and risk requirements.
Takeaway: Right now, it’s all about governance, documentation, transparency and guidelines. Depending on a company’s existing practices, this might mean a slight refinement or a big overhaul of documentation processes. In any case, this change will likely affect all employees and should be accompanied by a robust internal communications strategy.
MIT Technology Review: OpenAI and Google are Launching Supercharged AI Assistants. Here’s How You Can Try Them Out.
The best way to understand how rapidly AI is advancing is to try the latest tools yourself. Recently, both OpenAI and Google announced new versions of their products, powered by ever-more-advanced AI models.
OpenAI’s new GPT-4o model (the “o” is for “omni”) responds much faster than plain old GPT-4. And the speed makes all the difference. Using it is like talking or texting with a (usually but not always) smart friend whose full attention is on you. Soon to come: changes to the smartphone app that allow it to see the world in real time, in addition to listening to it.
Google announced a slew of AI-powered products, including a competitor to GPT-4o called Gemini Live, which is coming soon. It has also rolled out AI-enhanced Google search results, called AI Overviews, to all users. That will change the way everyone uses search, and likely the fundamental economics of publishing and advertising.
28% of journalists currently use generative AI for work, according to Muck Rack’s State of Journalism Report 2024. Brainstorming and research assistance are among the most common uses.
The report also reveals that nearly 60% of newsrooms have no AI use case policy (yet).
With U.S. national elections determining the balance of power this November, the concern over AI muddying the electoral waters is omnipresent within the halls of power. The main worry is adversaries’ use of the technology to spread disinformation and affect the election’s outcome. The FBI, DHS and intelligence agencies’ alarm bells are ringing and only getting louder.
Generative AI’s rapid proliferation is a serious threat to free and fair elections domestically and around the globe. Whether it is a local race in New Delhi or who will occupy 1600 Pennsylvania Ave., protections against the creation of deceptive disinformation in the form of deepfake videos, photos and voice recordings are lacking or nonexistent. Policymakers around the globe are trying to get the necessary protections in place. But the rapid advancement of AI, by good and bad actors, far outpaces the legislative process. The efforts in Europe (EU AI Act) and the patchwork legislation passed at the state level will not go into effect until after this year’s election.
Michael Huppe is CEO of SoundExchange, a company that helps music creators and artists get paid for their work. In Forbes, he lays out why he believes it’s critical for the music industry to adopt principles that ensure the viability of creating music as a profession. His proposed framework boils down to the “three Cs of music and AI: consent, credit and compensation.” It’s a conceptually solid way of looking at how creators should be part of the AI economy, not merely fodder for AI learning algorithms. Huppe’s thinking can apply to any field where creativity and AI mix together.
This month on the Allison blog, The Stream, a perspective on how we think about AI as a tool to reinforce uniquely human traits like empathy, creativity and passion. Read now: Uniquely Human, Reinforced by AI.
Are you curious to learn more? Allison has tools to assist with your AI projects. Check out Allison AI, an integrated suite of products and consulting services for our clients and agency partners. Developed by our global task force of senior counselors and technology experts, Allison AI can help your company identify and responsibly infuse AI capabilities into your workstreams.
To learn more, say hello to the Allison AI team at AI@allisonworldwide.com.
Allison’s AI Review is brought to you by contributors Brian Kaveney, Sophie Königsberger, Zac Rivera, Eva Murphy Ryan, Jacob Nahin and Rafe Needleman.