AI Tastes Like Chicken

There’s a good chance every company you work with in advertising and marketing recently announced their Artificial Intelligence (AI) products, intentions or both. Maybe, like me, you have questions like, “Didn’t they have AI before? Why is your product better with the new-new AI? Where does your AI come from? Is your AI organic or free-range?”


Let’s look at Microsoft, for example. Companies can bake AI into their Microsoft Office subscriptions for the low price of $30 a month per user. That is, unless, like many companies, yours doesn’t allow it because of security, compliance and/or risk concerns. Also, Bing already offers AI tools for free, and Azure’s packages regularly include it, so why do I need to pay $360 annually for the Office add-on again?


Another example is adding a dash of AI to Salesforce. Wait, isn’t that Einstein? Or is it different? Oh... it's Einstein GPT! I see what you did there.


AI stuff in Facebook - YES! Wait - isn’t that the performance optimization algorithm? Not this time - it’s AI ads! But wasn’t there already a mostly-automated ad builder for dynamic copy and images?


Adobe, The Trade Desk, Google, DSPs, SSPs, MTAs, Models, Coding, Journeys, and so on, all have AI in the mix and all share similar stories. AI has been there for years under the label of machine learning or an algorithm or a custom model or automation. Now it’s all rebranded as AI, adding confusion to an already challenging topic. 


Because of the AI noise, there’s a question in need of an advertising-aligned answer: If everyone has AI, then what’s so special about these rebranded, AI-flavored products and features when it comes to fueling creativity and originality? And since most of it is about text and language, I figured I’d focus on those models first.


What I found in my research is that, after playing around with text-based AI for a while, people discovered the answer to be “not much”: the output did little to inspire creativity because it was bland. Very bland. Ian Bogost of The Atlantic says it succinctly: “the bot’s output, while fluent and persuasive as text, is consistently uninteresting as prose. It’s formulaic in structure, style, and content.” That’s been my experience as well for marketing ideas and creative-writing use cases. Everything AI-flavored looks and tastes the same, providing the absolute minimum in creative marketing nutrition. Simply put:


AI is just like a piece of chicken that everything tastes like. 


And that’s far from interesting or good enough for advertising or marketing to use as a foundation for, or source of, creativity. So let’s open a cookbook to the chicken section and see how to best apply AI to our marketing dishes. We can improve our use of AI by learning the lessons of cooking tasty chicken and making sure every AI dish doesn’t look or taste the same. Here are some ingredients and recipes to consider when making your AI-crusted chicken dish at the office, so your conference-room meeting party won’t be served the same bland, boiled, basic chicken:


Chicken and the Rainbow Plate

A piece of gray, overcooked and bland advertising or content can’t be the only source of protein consumers are asked to choke down from brands. To avoid this scenario, every AI tool needs a rainbow plate of content as training data, or it loses all nutritional value. More specifically, AI needs original, human-created content for the ongoing training of language and image models. If the world becomes a bunch of bland, AI-generated content, the models break down over time like a snake eating its own tail. From Insider: “the math shows that ‘within a few generations, text becomes garbage,’ one of the authors, Prof. Ross Anderson of the University of Cambridge, wrote in a blog about the findings. Images, too, lose intelligibility, they said.”
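For the technically curious, that degradation is easy to see in miniature. The sketch below is purely my own toy illustration (plain Python with numpy assumed installed), not the researchers’ actual experiment: it repeatedly fits a trivial model to data the previous model generated, and over enough generations the variety in the data tends to drift and shrink.

```python
# Toy illustration of "model collapse": each generation is trained only on
# the previous generation's synthetic output. A sketch of the idea, not the
# cited study's methodology. Assumes numpy is installed.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human-created" data with real variety.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 51):
    # "Train" a trivial model: estimate the mean and spread of the current data.
    mu, sigma = data.mean(), data.std()
    # Produce the next generation's training set from the model itself.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: spread of training data = {data.std():.3f}")
```

Run it a few times and the spread rarely survives intact; a little variety leaks out with every synthetic generation, which is why the rainbow plate of human-created content matters.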


Chicken Benefits from a Flavor Catalyst

Chicken by itself is bland, unoriginal and everything tastes just like it. A great-tasting piece of chicken needs something to activate and enhance the flavor. The way you cook it (BBQ/choice of wood), what you cook it in (water or oil; pan, grill or fryer) and the sauce you make for it (lemon, wine, garlic, tomato) all matter. AI similarly needs spice, given that its output is bland. The output needs abstract, human input to become original, compelling and meaningful to the reader or audience. As such, AI needs a human-designed starting point. I agree very much with Jared Newman’s take (Fast Company) that “the most useful generative AI tools are less fixated on churning out new text, and instead helping people make sense of what’s already out there.”


Cooking Chicken When It Isn’t Chicken

The language and image models prominent in tools like ChatGPT and Midjourney are too new to be trusted blindly. Unfortunately, the makers of these tools are not putting a trustworthy blanket around AI, nor in many cases even offering warnings that all facts should be checked for accuracy, every time. It’s noteworthy and emblematic that ChatGPT’s accuracy is getting worse. It’s also likely that makers of custom AI models are not making customers aware the model may have a shelf life of days or weeks due to AI-to-AI training. Transparency in how AI is trained is also crucial to the expected performance of the model. It’s worth repeating VentureBeat’s finding that “use of model-generated content in training causes irreversible defects in the resulting models.” As such, it’s on us to ask questions about test learnings, sourcing pools and training methodologies before buying AI solutions, to force at least some transparency. Otherwise you might be buying watery tofu when you thought it was real chicken.


Chicken Needs a Marketing FDA

If undercooked, chicken can make people sick, and in some cases salmonella can be deadly. AI isn’t all that different. Per the World Health Organization, there are very real human and societal health consequences to improper use of AI. Even in the corporate world, if employees are using AI without oversight, you won’t catch marketing health risks before they’re released into the wild. At the very least, put some safety guidelines in place to guardrail how AI is used, applied and released within companies. For the AI itself, three initial areas to focus on are:

  1. Know/Require the sourcing of training data, its age, and credit & consent practices
  2. Include liability terms that hold companies accountable for the products they build on top of AI or access via APIs
  3. Apply a health-first process that puts facts & trustworthiness above all else, especially cost efficiencies, since a lawsuit wipes out the savings anyway


Chicken Can Become More Than Chicken

Imagine a time when chicken will become a movement. Sound crazy? Two words: Popeyes, 2019. Yes, the chicken sandwich that became a phenomenon was from Popeyes, leading to both great flavor and even greater lore. For the brand, publicity alone delivered $65 million in media value in two weeks. But the stories - they’re legendary. From selling out months’ worth of inventory in two weeks, to a heated Twitter battle with chicken & burger chains alike, to a $7,000 eBay sandwich posting, to a run on stores so powerful you took your life into your own hands attempting to get a chicken sandwich. So while brands imagine a world where AI divines the next cultural phenomenon, it won’t happen on its own. AI as a tool is only as good as the humans who use it to creative ends. The only way chicken becomes more than chicken is when you make AI work for you and your cause, not the other way around.


Don’t be a Chicken. Make AI Work For You

A recipe-style approach is a good starting point. Here’s one I’ve found very useful for establishing an AI working model for yourself and your teams.


Step 1 - The initial idea is always human. Prompts are ideas, even as questions

Step 2 - AI accelerates the starting point (outline, image)

Step 3 - Humans refine & expand the output to a more-than-working state

Step 4 - AI refines the human output, getting it to a point of editing & fact checking

Step 5 - Humans sharpen & approve the final crunchy, delicious, chicken sandwich output


As an example, I use this approach to customize the introduction and summary sections of my resume. Usually it starts by dumping the job description into ChatGPT with a prompt to create a resume intro section. Then I rewrite it, incorporating the topics and themes AI designated as important and adding my own tone, tenor and prose. Next, I’ll ask ChatGPT to refine it using the new paragraph and a link to the job description. If it’s too long, I’ll ask it to remove 10-15 words. After a round or two of human editing and a dash of my secret spice mix, it’s good to go.
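If your team wants to wire those steps into something repeatable, here’s a minimal sketch in Python using OpenAI’s official SDK. The model name, prompts and file names are placeholders I made up for illustration, not a recommendation of any particular product; swap in whatever tool your company has actually approved.

```python
# Sketch of the five-step recipe with OpenAI's Python SDK (openai >= 1.0).
# Assumes OPENAI_API_KEY is set in the environment; the model name, prompts
# and file paths below are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One chat-completion call: the AI half of the loop."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1 (human): the idea and the raw material come from you.
job_description = open("job_description.txt").read()

# Step 2 (AI): accelerate the starting point.
draft = ask(f"Draft a short resume introduction tailored to this job:\n{job_description}")
print(draft)  # take this away, rewrite it in your own voice, save it as my_rewrite.txt

# Step 3 (human): your rewrite happens outside this script, between runs.
my_rewrite = open("my_rewrite.txt").read()

# Step 4 (AI): refine the human output and trim it if it runs long.
refined = ask(
    "Tighten this resume introduction without changing its voice; "
    f"if it's too long, remove 10-15 words:\n{my_rewrite}"
)
print(refined)
# Step 5 (human): sharpen, fact-check and approve before anything ships.
```

The point isn’t the particular API; it’s that the script only ever touches steps 2 and 4, with humans bookending the work.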


What would have taken hours is now a 45-minute exercise to market yourself for a specific role or situation. But the use cases don’t stop there. Here are three business applications for the easy-bake workshop oven:

  • Brainstorming to market someone else’s product is another area that can be made more agile. Workshops go from preparation to polished outcome in a day, versus a typical multi-day timeline. 
  • RFP/RFI responses can be created by retraining on historical proposals, reducing the time, cost and anguish of herding cats.
  • Meeting summaries with action-item assignments and an executive summary are delivered within an hour after the meeting, using AI to synthesize the transcript and a human to edit, fact-check and deliver it. No more aggregating team feedback or staying up late to get it out before the next day.
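As a concrete example of that last bullet, the sketch below carries the same caveats as the earlier one: OpenAI’s Python SDK, a placeholder model name, and a transcript file you’d supply yourself. It only drafts the summary; a human still edits, fact-checks and sends the final note.

```python
# Sketch: turn a meeting transcript into a draft executive summary plus action items.
# Same caveats as the earlier sketch: model name and file path are placeholders.
from openai import OpenAI

client = OpenAI()
transcript = open("meeting_transcript.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{
        "role": "user",
        "content": (
            "From the transcript below, draft a one-paragraph executive summary "
            "and a bulleted list of action items with suggested owners.\n\n"
            + transcript
        ),
    }],
)

print(response.choices[0].message.content)  # a human edits, fact-checks, and sends it
```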


Starting, finishing and speed to delivery: that’s where AI fits. It’s never in the middle. And if it’s a rote or data-oriented task, AI might not be the best automation option.


Summary of Findings & POV

AI has been built by a select few humans and, as with most tech races of late, hasn’t been designed well enough to benefit all humans. The watch-outs are many, especially as scientists use biased models, coupled with risk-laden and misinformed data, to train AI that builds itself. The risks grow the longer we go without legal oversight, accountability and trustworthiness requirements. Companies like Microsoft are disregarding public safety by eagerly pushing AI in an effort to gain competitive advantage (aka profits), which further compounds business and societal risks. Therefore, to mitigate these public and private risks, marketing & advertising must fill the policy gap with protocols of our own.


As stewards of brands and their messaging, it’s more important than ever that our industry take a trustworthy, stepped approach to incorporating AI into our practices and products. The last thing brands need is to poison guests with undercooked chicken data, bore visitors with bland experiences, or serve fake chicken instead of the real thing without telling customers. And though AI cannot create a new recipe, it can free up time to spend creating the next great chicken dish, much like an industrial-sized dishwasher frees you from cleaning pots & pans.


So here’s a challenge: Make an AI chicken dish so delicious, it creates a positive marketing movement. Let’s use AI in our company kitchens to become a Top Chef of AI marketing.

Danielle Alimecco Lee

Business Growth | Strategic Partnerships | Enterprise | Team Leadership


Love this Jay! Great round up of bits I've been thinking about in a concise expanded view. I appreciate the tactical use cases and recommendation of the iterative editing process. For me, I continue to question AI - question the products and the outputs. It is very much a human guided process and a tool (not a replacement for human thought). We can't leave it to 'speak for itself'.

Joe DeVita

Managing Partner at Moving Traffic Media


Great clucking read!

Adam Rattner

Chief Growth Officer | Integrated Marketing | Media Experiences


And if it tastes like chicken . . . we need to be able to tell if it's been ethically sourced 🐣 . Clients need to have full transparency to every use of AI across the supply chain and the data sets that they have been trained on. Judicious use of AI will spur new levels of creativity. But multiple layers of AI driven ad tech will inevitably lead to intrusive and manipulative consumer experiences, eroded brand safety, and reinforced biases unless we have the proper frameworks to monitor and regulate. Cluckin' good read Jay Krihak🐔.
