Developing AI as a creative agency, the right way
Depending on who you speak to, AI is either the next industrial revolution, or the end of the world – without too much middle ground.
Forgive me for jumping on the “AI bandwagon” on LinkedIn, but I think the conversation is an important one, and it’s something we have direct, daily experience of, both from a development point of view and as active users.
So, while I’m sure “Just Another AI Post” (shoutout to the WordPress lot who get that terrible reference) will get lost in the noise, I thought a practical example of everyday AI development and use, from a company that does both, for ourselves and our clients, would be a worthwhile addition.
From the Development POV, when we’re asked to integrate, build on, create or train an AI, we ask ourselves a few simple questions.
1. “Does this exist already, and can we use it?” – This is a common enough general development question, and one that any developer should always ask, regardless of the task – after all, why re-invent the wheel? There are already thousands of different AI apps, models and integrations that cover a whole host of problems, and often enough there will be one that does what’s needed with no (or very little) actual dev time needed.
2. “If it exists, how does it handle data?” – This is the single most important question any AI user should be asking, especially businesses, and even more so if those businesses (like ours) handle often sensitive user and corporate data. It’s frankly terrifying how little thought users and other companies seem to give this question.
Apart from the fact that you should always ask this of any service that you give data to, it’s important to remember that AIs are not truly intelligent. LLMs (Large Language Models) are, at their core, fancy pattern-matchers that are always growing and absorbing new information – including anything you give them. This means that not only does the third-party owner of that AI have access to the information, but potentially so does every other user of that AI.
This probably doesn’t matter if you’re using a chatbot assistant to answer simple questions about your website, for example. But what about asking for answers about a user’s medical history, or for a summary of a sensitive corporate document? Once submitted, it’s entirely possible for that data to resurface for another user who asks something innocuous like “Generate me an example of a company’s end-of-year accounts” – a request that could, in all possibility, return an example referencing information other companies submitted for summarising.
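One practical safeguard, whichever tool you end up using, is to strip obvious personal data before a prompt ever leaves your systems. Below is a minimal, illustrative Python sketch of the idea – the patterns, placeholder labels and function name are our own for the example, not part of any particular AI vendor’s SDK, and real redaction would want a proper data-loss-prevention tool rather than a handful of regexes:

```python
import re

# Illustrative patterns only -- real-world PII detection is far
# messier than this and deserves a dedicated DLP library.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]*\w"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the account notes for jane.doe@example.com, tel 0161 496 0000."
safe_prompt = redact(prompt)
print(safe_prompt)
# The redacted prompt is what gets sent to the third-party model;
# the original never leaves your infrastructure.
```

The point isn’t the specific patterns – it’s that the scrubbing happens on your side of the fence, before any third party (and, by extension, any other user of their model) can see the data.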
3. “Is it a moral/ethical use?” – Another very important question, but one that can be hard to answer, as it comes down largely to user perception. Obviously, an AI designed to do something like racially profile users for marketing purposes is an absolute no… but what about an AI designed to optimise business workflows so that fewer staff are needed, resulting in job losses and redundancies? A pretty common occurrence.
On the surface, optimisation is a normal business operation, one that has been around as long as businesses have. But we’re no longer talking about improving a process such that one or two employees may be made redundant, but instead we’re seeing whole creative teams being replaced by text-to-image models, or whole translation teams being replaced by AI.
Such “optimisations” for their own sake can only be a bad thing. For one, AI is not (yet, anyway) capable of original thought, and can only create the semblance of original content. But perhaps more importantly, AI cannot match a human’s understanding of a brand, or the nuance of how a conversation in, say, Japanese would occur in English – where the context might be the same, but the route to that context may not be a direct translation at all.
The best analogy I can think of here is that whole team replacement with AI is akin to replacing your Ox and Plow team with a tractor but asking one of your Milkmaids to operate it (Yes, I grew up on a farm). They might be able to after a fashion, but all that experience of the job will be lost. Whereas, if you train the Milkmaid to run your new AI powered Rotary milking parlour (Actually a thing), and your Ox team handler to drive your new tractor, you might be able to have fewer staff, but you’ve upskilled the team, made it more efficient and most importantly, kept the experience.
We believe that the most ethical, and commercially sustainable approach to using AI is very much as an aid to work, not a replacement for employees. There will inevitably be job losses to AI, where the use of it can reduce the workload for a team, but the eventual shift is more likely to be from “doer” to “operator”, much like it has been in agriculture and manufacturing as technology has advanced over the last several hundred years.
It is our responsibility to our staff and our clients to ensure that this transition happens (and let’s face it, it is happening, whatever some may wish) such that they feel empowered to use new tools and become the operators of this technology, not pushed out of their industry and left behind. There is opportunity here for better-paid jobs, more efficient business, and a better work/life balance – if we handle this transition with care, taking a human-first approach.
Well, that got a little long – You can check out Part Two Using AI as a creative agency, the right way – here.