The Stories, People, and Data Behind the AI Revolution

Welcome to the first edition of the AI Review! Compiled by Allison’s team of AI experts, this monthly digest tracks the global stories, trends and people driving today’s AI revolution. 

After all, every one of us will be caught up in the sweeping changes AI brings to our professional and personal lives. It is a powerful technology, one that we can all use to amplify our skills, personal connections and unique human talents.  

So, whether you are new to AI or a seasoned expert, join the Allison AI team of communications experts and AI counselors as we explore this new world together. 

1. The Hill: Fake Biden Robocall ‘Tip of the Iceberg’ for AI Election Misinformation  

Before the New Hampshire primary kicked off on January 23rd, a digitally generated phone message circulated that sounded like President Biden asking residents not to vote. A month after the incident, the FCC outlawed robocalls that contain voices generated by AI.  

Impact: AI robocalls are just one way AI is being used in the U.S. presidential election, and we are still in the early stages. AI will continue to shape how the electorate is educated and marketed to. The Biden robocall case highlighted the immediate need for updated legislation to deal with deepfake threats.  

The Takeaway: While the U.S. government may be slow to adopt some rules and regulations, businesses can act more quickly to protect their brand and goodwill by creating evolving codes of conduct for AI use and enacting them rapidly.  

2. The Verge: AI Copyright Lawsuits Could Make the Whole AI Industry Go Extinct

In December 2023, The New York Times sued OpenAI, maker of ChatGPT, claiming copyright infringement. The Times said ChatGPT was trained, in part, on its stories. OpenAI said the lawsuit misrepresents facts and that the media outlet “tricked” ChatGPT into producing incriminating results.  

Impact: This suit, and others like it, is “a potential extinction-level event for the modern AI industry,” according to The Verge podcast. The stakes are also high for media companies. Large Language Models (LLMs) are trained on human-written text, and the more useful they become, the less traffic may flow to the stories they pull their knowledge from. AI thus threatens content creators, as well as aggregators like search engines. 

Despite the ongoing litigation, OpenAI still hopes it can come to an agreement with the NYT, as it did with Politico and Business Insider. 

3. In the Black: Generative AI in Business: How to Navigate the Ethics

There’s a fine line between leveraging the benefits of generative AI and maintaining realistic skepticism about what it means for business. In an important interview, Simon Longstaff, executive director of The Ethics Centre, and Professor Matt Kuperholz, data scientist with Deakin University’s Centre for AI and the Future of Business, discuss the big questions to ask when transforming your business with generative AI.  

The Takeaway: Questions of governance and guidelines have to be resolved quickly so AI can develop apace. As Kuperholz put it in the interview: “We don’t put brakes into a race car so we can go slowly. We put brakes into a race car so we can go really quickly – safely.” 

4. IDC: The Impact of Generative AI on the European Future of Work

In Europe, 20% of organizations are already heavily invested in generative AI. Another 58% are looking into it. Workers will see big changes: Many companies hope to mitigate labor shortages and raise efficiency by using generative AI to augment worker skills.  

What we’re watching: 2024 will show which AI use cases lead to improved productivity and more innovation, and which ones do not. Today’s workers (and managers) will face the great challenge of re-skilling for their jobs and re-thinking their careers.

German-speaking readers who want to dig deeper might also want to read this analysis by Handelsblatt.  

5. The Wall Street Journal: AI Has a Trust Problem. Can Blockchain Help?

Thanks to cryptocurrency, blockchain technology has a bit of a black eye. But the blockchain and its “immutable ledger” of transactions is foundational tech that could be used to address one of AI’s most vexing issues: trust.  

FICO’s Chief Analytics Officer Scott Zoldi is on the record about how the company uses the blockchain to track the process of building and training AI algorithms. It is a timely and urgent task as regulators put pressure on businesses to build trust into their models.  
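
The mechanics behind that approach are simple enough to sketch. Below is a minimal, illustrative Python example of the core idea: an append-only, hash-chained log of model-development events, where each entry seals in the hash of the entry before it, so no past step can be quietly rewritten. The class name, event labels and sample records here are our own assumptions for illustration; FICO’s production system runs on an actual blockchain, not an in-memory list.

```python
import hashlib
import json
import time

def _hash_entry(entry: dict) -> str:
    """Deterministically hash a ledger entry with SHA-256."""
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class ModelAuditLedger:
    """Append-only, hash-chained log of model-development events.

    Illustrative sketch only: each record embeds the hash of its
    predecessor, so tampering with any past entry invalidates every
    hash that follows it.
    """

    def __init__(self):
        self.entries = []

    def record(self, event: str, details: dict) -> dict:
        # Link this entry to the previous one (or a zero hash at genesis).
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "event": event,          # e.g. "dataset_approved", "model_trained"
            "details": details,
            "prev_hash": prev_hash,
        }
        entry["hash"] = _hash_entry(entry)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-compute every hash and check the chain links."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash or _hash_entry(body) != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Usage: log each governance milestone, then verify the chain later.
# The dataset and model names are hypothetical.
ledger = ModelAuditLedger()
ledger.record("dataset_approved", {"dataset": "transactions_v3", "reviewer": "risk-team"})
ledger.record("model_trained", {"model": "fraud-score-v7", "auc": 0.91})
assert ledger.verify()
```

Because every entry’s hash covers its predecessor’s, altering any historical step (say, swapping out a training dataset after the fact) breaks verification for everything downstream, which is exactly the auditability regulators are asking for.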

Impact: It’s critical for companies to show how they are moving the AI trust discussion forward. Using the blockchain for AI governance is a nifty “against the grain” use case; stories like this highlight an issue every business should have a message on.  

55: The percentage of workers who have used unapproved generative AI tools at work, according to a recent Salesforce study of 14,000 workers across 14 countries. 40% of respondents even admitted to using generative AI tools that their employers have banned.  

The study, which will have IT departments reaching for their heartburn tablets, revealed that “not only do workplace users tap into unapproved generative AI tools at work, they do so while still recognizing that the ethical and safe use of generative AI means adopting company-approved programs.” 

Whose fault is this? According to the study, 7 in 10 employees have never completed or received training on how to use generative AI safely and ethically.  

Action item: It’s not enough to just write policies. Businesses need ongoing AI training programs… stat. 

Read more from Salesforce: ‘The Promises and Pitfalls of AI at Work.’ (November 15, 2023)  

The EU AI Act: What it Means for Europe and the World  

In December 2023, the European Union (EU) reached agreement on the Artificial Intelligence Act (AIA), putting the EU at the forefront of AI standard-setting and regulation. The AIA regulates Large Language Models (LLMs) to force more transparency about training data. And it prohibits the riskiest applications: large-scale biometric identification in public places, applications that could manipulate or otherwise harm users, and applications that could limit people’s liberty based on social behavior or other personal criteria. 

That’s one way to regulate: the AIA takes a risk-based approach, restricting high-risk applications rather than banning whole technologies. Still, the act will impact all providers of AI systems offered in the EU and will have far-reaching consequences globally. A groundswell of criticism has arisen over the competitive disadvantages for local and regional companies, even as the AIA’s framers enter the next round of discussions to finalize technical details. The regulations could take effect within two years. 

Taylor Swift vs. The Fakes. The influence of Taylor Swift knows no bounds. While industry and legislators debated the challenge of deepfakes, it was only when Taylor Swift herself became a victim of fake AI-generated images that the powerful leapt into action. The White House called the trend “alarming,” and companies such as OpenAI and Microsoft rushed to strengthen anti-deepfake safety systems.

In January, AI-generated X-rated images of Swift were viewed more than 45 million times on X (formerly Twitter). The platform went so far as to block all searches of her name, calling it “a temporary action done with abundance of caution.” Less than two weeks later, European Union negotiators struck a deal on a bill that would criminalize the sharing of such content across the EU by mid-2027. Now that was Swift.  

Hundreds of AI events are scheduled this year. Our Washington, D.C.-based Public Policy team is especially interested in these upcoming meetings: 

February 29: The Bridge: The Future of AI Policy 

With the AI policy landscape largely uncharted, we’re keen to hear directly from policymakers on possible paths forward. At this event, Punchbowl News founder and CEO Anna Palmer and senior congressional reporter Andrew Desiderio will sit down with Sens. Mark Warner (D-Va.) and Todd Young (R-Ind.) to discuss news of the day and AI policy. Location: Johns Hopkins University Bloomberg Center, 555 Pennsylvania Avenue NW, Washington, D.C.  

March 4 – March 6: Artificial Intelligence Masters 2024  

With AI and machine learning (ML) touching every aspect of our lives and work, it is vital to understand the technical and legal issues these fields raise. This conference brings together top industry thought leaders to discuss those specific technical and legal areas. Location: 20098 Ashbrook Place, Suite 100, Ashburn, VA 20147 

Allison AI is an integrated suite of products and consulting services for clients and agency partners looking to leverage AI securely and responsibly for their business needs.

Allison AI’s external and internal services currently span three categories: Advisory and Consulting, Training, and Products. These services include end-to-end solutions from our AI Task Force and Policy Development team to a Suite of AI Products built on a secure and private system.  

If you have any questions or would like to learn more, say hello to the Allison AI team at AI@allison.worldwide.com. Sign up HERE to receive our updates.  
