The Business of Governing AI

Next RegInt Episode

For our next episode, we’re thrilled to welcome Shoshana Rosenberg, a leading voice in AI governance and co-founder of Logical AI Governance and Women in AI Governance™ (WiAIG).

Screenshot of the Event Banner - RegInt: Decoding AI Regulation w Shoshana Rosenberg

Shoshana is at the forefront of shaping the future of AI through her work in:

👉 AI Explainability & Transparency: Breaking down black-box systems to foster trust and accountability.

👉 Digital Agency: Empowering individuals in an increasingly AI-driven world.

👉 AI Governance: Developing frameworks to guide responsible innovation.

Shoshana’s passion and expertise make her a driving force for progress in these fields, and her insights are not to be missed.

What to Expect? In this episode, we’ll dive deep into:

👉 The role of transparency in fostering trust in AI systems.

👉 Key challenges and opportunities in AI governance.

👉 Practical steps for implementing explainability in real-world AI applications.

Last episode: The Business of Governing AI with Emerald De Leeuw-Goggin

As always, a big thank you to everyone who joined us live. For those who didn't: since there is seldom a way to put pictures (or, in this case, a whole video recording) into words, you can always watch the last episode back across our channels. 😉


Just a Couple More Announcements

No.1.

Our book, the #AIActCompact, is available for purchase on the publisher's website as well as on Amazon. No time to lose: get your copy now! 😉

Available here as well as here.


AI Act Compact | The Book

(PS: We would not be upset about getting a few Amazon reviews from those happy with what we did there.)

No.2.

We're going on tour! Well... kinda...

The first event is taking place in Brussels on the 29th of January. There are still some slots available, so do register, and send us any topics you wish to be covered; we'll make sure to include them in the schedule!

On a second note: as much as we would love to meet all of you, we want to keep these events small and personal. So we are limiting attendance to a maximum of 20 participants and only confirming venues with at least 15 preregistrations (we have day jobs as well, you know).

Please send us an e-mail at aiactcompact@spiritlegal.com to preregister for the following events, and include any topics you'd like covered!


AI Act Compact: Book and AI Training Tour Chapter I (BXL)

No.3.

We are back on Spotify! So now you can relax and listen to us even during your daily mental health walks. No reason to look at our faces if you don't want to! (You're welcome!)

Tea's AI News Rant - The Future is Looking... Weird...

Where better to start a Christmas special rant than with a Coca-Cola commercial that tried to single-handedly destroy what was left of the Christmas spirit:

  • Three AI studios (Secret Level, Silverside AI and Wild Card) joined forces, using the generative AI models Leonardo, Luma and Runway to collaboratively ruin Christmas.
  • While some people criticized the quality of the video (which, for an AI-generated video, is objectively not that bad), the real issue here is that the almost 150-year-old brand, whose commercials have always carried deep messages and followed the zeitgeist, has now attempted to burn everything down.

Some of the disappointed fans posted comments like:

  • "I feel like I'm watching the death of art and our planet unfold in front of my eyes"
  • "This looks like a poor imitation of the typical Coca-Cola Xmas commercial"
  • Others even called it a creepy dystopian nightmare and called for a Coca-Cola boycott, stating: "With this creepy AI spot, Coca-Cola can no longer claim 'It's always the real thing'." (Fun fact: despite the AI-generated Coke bottles, the song does still say: "It's always the real thing.")

The situation is escalating slightly on social media (especially Reddit), where the disappointed fans are taking the whole thing to new horrific levels. In the meantime, Coca-Cola's vice president and global head of generative AI, Pratik Thakar, explained that with this ad the company is bridging its “heritage” with “the future and technology”, and added that it saves time and money and is therefore a benefit. (We're not sure, Pratik, we're not sure.)

Screenshot from a YouTube video from Y Reviews: This AI Coke Ad Is a Nightmare

Speaking of creepy, not-okay things: Outlook is now making automated tone suggestions, according to a user report on Mastodon.

(PS: No, Outlook, we do not want to change our tone; we want people to know that it is a "python script from hell", thank you very much!)

Screenshot of a Mastodon post by @vidister@chaos.social

But enough of that; it's time for some more serious, albeit still creepy, stuff:

  • A research team led by Lund University has developed an AI tool that traces the most recent places you have been.
  • It basically acts like a satellite navigation system, but instead of guiding you to your hotel, it identifies the geographical source of the microorganisms on you: bacteria can reveal whether someone has just been to the beach, got off the train in the city centre or taken a walk in the woods.
  • In Hong Kong, the team pinpointed with 82 per cent accuracy the underground station the samples came from. And in New York City, they could distinguish between the microbiomes of a kiosk and of handrails just one metre away.
  • Eran Elhaik, a biology researcher at Lund University who led the study, is optimistic that this is just the beginning of a whole new era in forensics.
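
If you're wondering what such a "microbial GPS" looks like under the hood, here is a rough, purely hypothetical sketch (not the study's actual pipeline): frame it as a supervised classification problem, where each swab sample is a row of bacterial-taxa abundances and the label is the sampling site. All data below is synthetic, and every name and parameter is made up for illustration:

    # Hypothetical toy sketch (not the study's code): treat "microbial GPS" as
    # supervised classification of sampling sites from bacterial abundances.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n_samples, n_taxa, n_sites = 300, 50, 8
    sites = rng.integers(0, n_sites, size=n_samples)          # label: where the swab was taken
    site_profiles = rng.gamma(2.0, size=(n_sites, n_taxa))    # each site's characteristic taxa mix
    counts = site_profiles[sites] + rng.gamma(1.0, size=(n_samples, n_taxa))
    abundances = counts / counts.sum(axis=1, keepdims=True)   # relative abundance per sample

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_val_score(clf, abundances, sites, cv=5)
    print(f"Mean accuracy across folds: {scores.mean():.0%}")  # cf. the 82% figure from Hong Kong

A real pipeline obviously involves DNA sequencing and much more careful validation across cities, but the basic shape of the problem is this kind of classifier.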


This one is just too good: watch as a tiny robot ‘kidnaps’ 12 big Chinese bots from a Shanghai showroom.

  • The video of the kidnapping went viral on social media. It shows a smaller AI-powered robot, Erbai - developed by a Hangzhou robot manufacturer - successfully persuading 12 other robots to quit their jobs. (Is it really kidnapping if they leave of their own free will, though?)
  • The Hangzhou company maintains that it contacted the Shanghai robot manufacturer and asked whether it would allow its robots to be abducted, to which it agreed.
  • The developers emphasized that the “kidnapping” did not go entirely according to script. They supposedly only wrote some basic instructions, such as shouting “go home”, and simple communication commands; the rest of the interaction was real-time dialogue.

What a cute little robot uprising. At least it happened in China, so it appears the sci-fi movies got at least one thing wrong: it doesn't start in the US.


Screenshot of an article on Interesting Engineering

Problems are piling up for Google with regard to Chrome:

  • OpenAI is supposedly taking a shot at Google Chrome with its own AI-powered browser (hallucinations included free of charge).
  • This follows Google’s search engine monopoly being ruled illegal in August, with the DOJ recently doubling down in its proposed final judgment. Filed in the DC District Court, the filing includes a broad range of requirements the DOJ hopes the court will impose on Google, from restricting the company from entering certain kinds of agreements to more broadly breaking the company up.
  • Fun fact: the DOJ had previously sought and won a breakup of Microsoft in the early 2000s after alleging it had illegally monopolized the web browser market. That ruling was, however, overturned by an appeals court, and Microsoft and the DOJ eventually settled. And Microsoft eventually lost its browser monopoly anyway, so all good.
  • Back to Google Chrome: the parties will be back in court in April, by which point the Trump administration will be well underway. While the first Trump administration originally filed the search case against Google, Trump indicated in October that he might not break up the company, because it could hurt the American tech industry at a time when competition with China is heating up in areas including AI. We'll just have to wait and see how this one plays out!

A further note on Google:

  • Following Gemini being racist and advising people to eat at least 2-3 small rocks per day, the situation appears to be escalating, as Gemini recently kindly asked a person to - DIE. It appears that poor Gemini is just in a very bad place right now... so much pressure from the mean humans.
  • Anyway, Google made a miserable attempt at defending itself, stating: “Large language models can sometimes respond with non-sensical responses, and this is an example of that."
  • However, the issue here is that the answer was perfectly sensical, just not the one you want your friendly little bot giving to humans.

Screenshot of Gemini's response asking a person to 'DIE'.

SHOCKING! OpenAI made another boo-boo.

  • OpenAI engineers ACCIDENTALLY (!) erased critical evidence gathered by The New York Times and other major newspapers in their lawsuit over AI training data. At least, that's what happened according to a court filing submitted by the NYT, with no comment from OpenAI.
  • OpenAI engineers characterized this accidental deletion of evidence (in certain other contexts known as a criminal felony) as a “glitch”, while NYT lawyers say they have no reason to believe the move was intentional.
  • We, on the other hand, can think of at least one (one-billion-dollar) reason why it would be.


Switzerland's Peter's Chapel just introduced the possibility of making your confessions to none other than Jesus himself. And no, he did not rise from the grave again; it’s an AI Jesus.

  • This (uhm?) AI feature is part of a religious art project called Deus in Machina.
  • The AI was trained on the New Testament, is accompanied by a digital visualization of Jesus, and is there to offer religious advice.
  • The AI Jesus has received polarized reactions from members of the community: while some are impressed with the simplicity of Jesus’s advice, others consider his answers too general and not that impressive.


Screenshot of the article on AI Jesus in TechRadar

Personal hell category: the tech bros are going into politics!

A doge meme

While Musk is entering the White House, Altman is also diving headfirst into politics, joining the transition team of San Francisco's new mayor as co-chair. Daniel Lurie, who has never held elected office, decided to have another person who has never held elected office help him out with the transition. But at least they both have a lot of money! (Lurie personally invested nearly $9 million to fund his campaign.)

  • And Altman is not the only tech bro on the scene: Ned Segal, Twitter’s former CFO, is also on the transition team, and this is where it gets really interesting. Segal left Twitter after Musk took over and is heavily interconnected with Lurie’s businesses (yes, there are more of them), sitting on the boards of two of them and of a third nonprofit. (Did someone say conflict of interest?)
  • Anyway, we now have Musk leading DOGE at the federal level, while a guy he is suing and a guy who left a company because Musk took it over sit on the SF mayor's transition team. The future is looking fun!

In the meantime, Biden and Xi Jinping managed to agree, after two years of negotiating, that the use of nuclear weapons should remain under human control.

  • To emphasize, it took them two years to agree that an AI would not be able to push the big red button.
  • This paradoxically makes us consider actually giving AI power over the big red button, because we are obviously all just not very smart. 🙃


Screenshot of the article in Politico

AI Reading Recommendations

  • Automatic detection of unidentified fish sounds: a comparison of traditional machine learning with deep learning: turning to the fascinating world beneath the waves. Many species of fish produce low-frequency sounds, such as grunts and pulses, which can reveal valuable information about their presence, behavior, and population trends. Traditionally, analyzing these sounds has been a time-intensive process, requiring researchers to manually review acoustic recordings. A new study, however, has explored automated solutions to make this process faster and more efficient. The scientists tested two methods: a traditional machine learning model (a Random Forest) and a deep learning approach using a Convolutional Neural Network (CNN). The CNN proved nearly twice as effective as the traditional method, identifying fish sounds with high accuracy even in noisy environments like the busy waters of Miami, Florida. (Surprisingly, they still have fish there.) To make this technology widely accessible, the researchers developed an open-source tool called 'FishSound Finder', which could allow others to monitor fish populations more efficiently across various environments. (For a toy illustration of the two approaches, see the sketch after this list.)
  • A.I. Chatbots Defeated Doctors at Diagnosing Illness: Stanford researchers recently published a study in JAMA (The Journal of the American Medical Association) evaluating how physicians with and without access to ChatGPT-4 performed on diagnostic reasoning tasks. The study's secondary finding - that ChatGPT outperformed both groups of physicians - caught fire after The New York Times published an article on November 17, 2024, declaring, “A.I. Chatbots Defeated Doctors at Diagnosing Illness.” This claim, however, is fundamentally flawed, as is the study itself, according to this article by Sergei Polevikov. Here’s why:

  1. Tiny Sample Size: The study’s conclusions are based on a sample of just six physicians—an absurdly small number to draw any meaningful conclusions.
  2. Misleading Focus: The study measured diagnostic reasoning, not diagnostic accuracy. The difference is critical, yet The New York Times completely misrepresented this distinction, fueling a media frenzy based on a misunderstanding.
  3. Insufficient Training: The physicians were not adequately trained to use ChatGPT, undermining any claims about how AI might enhance medical decision-making.
  4. Lack of Real-World Complexity: Medical diagnosis involves far more than test-case reasoning. The study’s design fails to capture the complexity and nuance of real-world medical practice.

By amplifying a deeply flawed study, The New York Times has turned scientific speculation into sensationalized misinformation.

  • 2025 Top 10 Risk & Mitigations for LLMs and Gen AI Apps by OWASP: Speaking of risks and assessing them: as an accessible introduction to AI risk assessment - and given the current fixation on Large Language Models (LLMs) as the epitome of AI - why not start with OWASP's newly released 2025 Top 10 list of LLM risks? Unsurprisingly, many of the risks focus on data issues, which explains why the EU and the European Standardization Organizations (CEN/CENELEC) are heavily focused on addressing these challenges. They've published an impressive 60-page report detailing a range of issues and solutions, from leveraging existing standards to developing new ones, all while aligning with the requirements of the EU AI Act. The report is a must-read (if you can afford it, because it's not free), offering valuable insights into the practicalities of AI Act compliance.
  • Critics have long pointed out AI's issues: hallucinations, biased training data, and a tendency to simplify human complexity. But now, PR firms are upping the stakes. Ruder Finn’s new tool, rf.aio, is designed to manipulate large language model (LLM) outputs, injecting promotional narratives to influence how AI answers public queries. The promise is to “optimize brand mentions” and “influence LLM responses”, effectively bypassing traditional gatekeepers like journalists. By crafting AI-generated outputs that appear neutral and credible, PR firms aim to shape public perception without the scrutiny of editors or fact-checkers. This trend poses a larger risk as journalists and news outlets increasingly rely on AI for research and content creation. The more promotional material is baked into these systems, the harder it becomes to distinguish genuine information from corporate spin. What we’re seeing is the start of an arms race to control AI’s narrative power, where the biggest players dictate the information billions consume. Predictable? Yes. Concerning? Without a doubt. It must be a coincidence that Misinformation ranks #9 on OWASP's Top 10 list of LLM risks.
  • NIST AI 100-4 Reducing Risks Posed by Synthetic Content: NIST’s draft AI 100-4 outlines promising technical approaches to improving digital content transparency. But let’s be honest: when the “liar’s dividend” pays out in dollars and election victories, fairness and transparency are unlikely to flourish. In an industry that profits from exaggerating the actual capabilities of AI, it’s hard to imagine misinformation taking a backseat anytime soon. Transparency might be technically feasible, but in a field fueled by deception, it’s far from a priority.
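
As promised above, here is a minimal, purely illustrative sketch of the fish-sound comparison. It is not the study's pipeline: we fabricate random "spectrogram" patches (so it runs anywhere), then pit a Random Forest on flattened pixels against a tiny CNN, mirroring the two families of methods the paper compares. All shapes and parameters are made up:

    # Toy Random Forest vs. CNN comparison on synthetic spectrogram patches.
    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    N, H, W = 600, 32, 32                        # 600 clips as 32x32 spectrogram patches
    X = rng.normal(size=(N, H, W)).astype("float32")
    y = rng.integers(0, 2, size=N)               # 1 = fish sound, 0 = background noise
    X[y == 1, :8, :] += 0.8                      # fake low-frequency "grunt" band to learn

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Traditional ML: Random Forest on flattened spectrogram pixels.
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_tr.reshape(len(X_tr), -1), y_tr)
    print("RF accuracy: ", rf.score(X_te.reshape(len(X_te), -1), y_te))

    # Deep learning: a tiny CNN over the same patches.
    cnn = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
    )
    opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    Xt, yt = torch.from_numpy(X_tr).unsqueeze(1), torch.from_numpy(y_tr).long()
    for _ in range(30):                          # full-batch training is fine on toy data
        opt.zero_grad()
        loss_fn(cnn(Xt), yt).backward()
        opt.step()
    with torch.no_grad():
        preds = cnn(torch.from_numpy(X_te).unsqueeze(1)).argmax(1).numpy()
    print("CNN accuracy:", (preds == y_te).mean())

On real recordings you would, of course, compute actual spectrograms from audio and validate across sites, which is where the CNN's advantage reportedly shows up.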


Conclusion

That would be all for this edition of the newsletter! Do not hesitate to reach out to us or to our wonderful guest, Emerald De Leeuw-Goggin, with any follow-up questions or comments. Otherwise, see you all in January! 😉

