Artificial Intelligence and Printer Professionals (but not only): Two Tips For a Good Start

I know. Yet again, seeing the term artificial intelligence might evoke reactions ranging from a shrug to nausea, depending on how exposed we've been to the topic. But something is happening, and it's happening now. Regardless of our interest, disgust, fear, or curiosity, artificial intelligence is advancing rapidly. If you feel lost and struggle to understand how this revolution might (indeed, will) impact you, welcome to this large club, of which I am also a member.

The positive side is that this moment is perfect for carving out some time and grasping some fundamental concepts, no matter what the future holds or how artificial intelligence will evolve in our field. These insights are based on my experience, trying to go beyond the surface of slogans and understand how a professional with a business today can begin to take the proper steps in contextualising this technological innovation. I assure you that it can propel any organisation forward if well understood and used. So, let's begin.


Once Upon a Time

Anyone who has been in digital printing for a while and looks back will see how things that seemed absolute and unshakeable have changed, evolved, or disappeared. From hand-painted signs to large-format digital printers, from letterpress to digital offset, from milling to 3D printing, from pen plotters to inkjet, from mass-produced screen printing to single, personalised products with digital technologies.

Consider how the way we communicate our business has evolved and created new needs. First came word of mouth, posters, and radio or TV (depending on the budget). Then came email, newsletters, websites, and the need to be found: SEO, social media, paid online advertising, keywords, images, videos, YouTube, reels, TikTok, and content. All things that didn't exist before and were difficult to foresee.

Sudden U-turns have changed our way of working. As economist Oren Harari said, the invention of the light bulb did not come from continuous improvements to candles. At some point, there is a sharp turn, and we can avoid being caught unprepared by it.


Fasten Your Seatbelts

For artificial intelligence, I would focus on two things.

First, understand, at least broadly, what's behind artificial intelligence on a technical level so you can quickly evaluate the various proposals that will come your way. Realise that artificial intelligence is a vast field, encompassing different methodologies that help machines perform operations that simplify and assist human work.

Machines can be trained using Machine Learning procedures, which become more sophisticated when they use architectures based on neural networks. These networks are mathematical systems loosely inspired by our neurons and their connections, giving more or less weight to specific inputs depending on the desired output.
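To make that idea concrete, here is a minimal sketch (in Python, with invented numbers) of a single artificial "neuron": a weighted sum of inputs squashed into a fixed range. Real networks chain millions of these together.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weigh each input, sum them up,
    then squash the result into the 0..1 range with a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# The weights decide how much each input matters for the desired output:
# here the first input pushes the result up, the second pulls it down.
output = neuron([0.5, 0.8], weights=[2.0, -1.0], bias=0.1)
```

Training a network is essentially the process of adjusting those weights, automatically, until the outputs match the desired ones.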

The evolution of neural networks, the hardware support for performing enormous calculations with large amounts of data, and increasingly sophisticated algorithms (think of algorithms as a set of instructions for our AI system to perform specific tasks) have all led to a subset of Machine Learning called Generative AI.

Take a breath now; this premise was necessary to reach a point that I consider fundamental for us: understanding what is happening today.


Generative AI and Beyond

Generative AI, as the name suggests, generates new things that didn't exist before. While a classic AI-driven system, like Netflix's recommendation system for the next movie you might like, performs just that task, generative AI creates new content in its domain. This happens simply by asking it. Incredible, right? But as with people, you need to know how to ask. We'll come back to this in a moment.

Generative AI has another subset called LLMs, Large Language Models. These AI systems focus on text. They predict, word by word, the most plausible way to compose a response to the request we've cheerfully entered into our ChatGPT.

I'll simplify this concept (and apologies to the more technical readers). If I ask a person, "The pencil is...," they will likely respond with "on the table" or "on the desk." These are statistically probable answers. Less likely is the answer "underwater." It could be, but statistically, it's very improbable.
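The pencil example can be sketched in a few lines of Python. The counts below are invented purely for illustration; the point is that the model favours statistically frequent continuations.

```python
# Hypothetical counts of what followed "The pencil is..." in some
# training text; the numbers are made up for illustration.
continuations = {"on the table": 450, "on the desk": 380,
                 "in the drawer": 150, "underwater": 2}
total = sum(continuations.values())
probabilities = {phrase: count / total
                 for phrase, count in continuations.items()}

# A language model picks a statistically likely continuation, not a rare one.
most_likely = max(probabilities, key=probabilities.get)
```

"Underwater" is still possible here, just vanishingly improbable, which is exactly how the model treats it.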

How did we teach LLMs to respond correctly to our requests? By feeding them tons of digitally available content, training the LLM models with Machine Learning techniques, providing questions and answers, and refining the responses when they were inadequate. I've simplified this horribly, but that's the gist of it.

Text models seek the most probable response to our request. How? By vectorising the text: just as RIP software turns a file into numbers the printer can use, the model turns words into numbers. These vectorised words are arranged in a multi-dimensional space within the model. In the previous example, pencil, table, and desk sit close together, while underwater sits farther away, making it less likely to be used in a response.
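Here is a toy sketch of that geometric idea. The coordinates are invented, and real models use hundreds or thousands of dimensions rather than two, but "closeness" works the same way.

```python
import math

# Toy two-dimensional "word vectors"; the coordinates are invented,
# and real embeddings have hundreds or thousands of dimensions.
vectors = {
    "pencil":     (2.0, 1.0),
    "table":      (2.2, 1.1),
    "desk":       (2.1, 0.9),
    "underwater": (8.0, 7.0),
}

def distance(word_a, word_b):
    """Euclidean distance between two word vectors:
    smaller distance = more closely related words."""
    return math.dist(vectors[word_a], vectors[word_b])
```

With these numbers, `distance("pencil", "table")` is small and `distance("pencil", "underwater")` is large, which is why the model reaches for the nearby words first.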

ChatGPT, Gemini, Claude, and Copilot are based on LLMs. GPT stands for Generative Pre-trained Transformer: a system that generates (generative) and has been trained and refined with vast amounts of data (pre-trained), based on a very sophisticated neural network architecture called the Transformer.

These models, which seem human, are statistical models trained on human intelligence products that artificially mimic our behaviour. These are the things we need to know.

Understanding how these systems work, where they come from, and how they are composed will allow us to know how they will be used within our software and demystify the various terminologies used by suppliers and consultants. We will also understand their limits. For instance, systems programmed to predict will always answer, even if it's invented (this is called hallucination). Their goal is always to perform their task, which in this case is to provide an answer. Rather than a problem, it's paradoxically their nature.

So?

LLMs are becoming less generalist and more specialist. What changes? The data set they are trained with. Eventually, you will be offered AI-based systems, likely LLMs. These LLMs will primarily work with your data.

Imagine connecting your document section, knowledge base, web analytics, or CRM and leveraging the power of these currently fragmented data across different platforms or databases (or Excel files). Imagine connecting these systems, exploring market trends, and extracting, with a simple question, a trend or a way to promote yourself in a market or to clients who have already made purchases. Imagine a chatbot that responds 24/7 using your data and approach to generate interest and contacts.

Additionally, robotic systems will be increasingly implemented in production (either due to a labour shortage or because employees can do more creative things while robots handle repetitive tasks). They will execute instructions based on natural language commands.

Therefore, it is essential to know the technology (at least broadly), how it works, its inherent limitations, and especially how to communicate with it. And here's the second point. Let's see it together.

Talking to AI

Let's take a step back. We have seen that LLMs are a subset of generative AI. But generative AI doesn't just include LLMs focused on text; it also includes systems for generating images, audio, video, speech, and programming code.

Platforms like DALL-E, Midjourney, or Firefly for images, Runway, Pika, HeyGen, or Sora for video, and Stable Audio or Suno for audio (plus Whisper for transcription) offer various possibilities for generating what we need.

What do these tools have in common with LLMs? We can ask for what we need. There is a language that allows us to interface with them: we program these systems with our natural language.

No wonder one of the most popular platforms is called ChatGPT, where chatting (talking) is visually represented by a text box. The explosion in AI's diffusion began when OpenAI made it possible for everyone to speak to the platform naturally (an approach sometimes called natural language programming).

The systems are based on this. In jargon, the instruction we give is called a prompt. For the more seasoned, the prompt was where commands were entered on old MS-DOS systems. Today, the prompt is a window, and knowing how to ask our AI for what we want is fundamental.

Not for nothing, a new speciality called prompt design or prompt engineering has developed. It involves appropriately contextualising the request and iterating multiple times to refine it and get results that make a difference. Again, we program the machine with language, following a few simple rules:


  • Provide context, telling the AI who to impersonate;
  • Define the task it must perform;
  • Give instructions on how to do it and how to reproduce it (tone, approach, sentiment);
  • Define the audience the result will "speak" to;
  • Define the output format (language, length, formatting);
  • If necessary, set limits (don't mention x or y);
  • Provide examples so the machine understands what to do (this technique is called few-shot learning).
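As a sketch, the rules above might combine into a single prompt like this. The company, product, and every detail of the wording are invented for illustration.

```python
# An illustrative prompt assembled from the rules above; all details are made up.
prompt = """You are a marketing specialist at a digital printing company.
Task: write a short promotional post about our new large-format printer.
Tone: friendly and professional, with a touch of enthusiasm.
Audience: sign-makers and print buyers, not technical experts.
Output: English, maximum 80 words, plain text, no hashtags.
Limits: do not mention prices or competitors.
Example of the style we like: "Big ideas deserve big prints."
"""
```

Each line maps to one rule: role, task, tone, audience, output format, limits, and a one-shot example.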

You can also ask your LLM to review the prompt and suggest improvements. On generative AI platforms for images and video, besides text, you can enter parameters to obtain specific outputs (for example, for images, you can specify the focal length, type of camera, or film).

Remember that LLM systems, primarily built for text, are now defined as multi-modal or omni-modal. This means that in addition to textual input and output, they can also accept images or other types of files (spreadsheets, PDFs) and return output in images, tables, and more. They often combine different generative modes. Other functions will likely be supported soon as this topic is evolving.

However, a ton of material is available, and there are better places to delve into prompt techniques. I mentioned the above to show how practically important it is to learn to write our requests well.

This is the second point, then. Along with technology, we must master the dialogue.

In Conclusion

As you can see, the topic is vast and, therefore, complex, at least initially. I've simplified some concepts, omitted some details, and overlooked ethical implications, but they were not essential for the article's purpose.

By mastering the technology (at a high level, of course, without becoming programmers) and the way to communicate with the platforms, we are already taking a big step forward. Then everything will seem magically more straightforward, and we will be able to handle supplier or agency proposals, understand AI integrations into our existing software, and follow evolutions without getting lost.

How to do it?

Primarily in two ways: take courses, and carve out space to deepen your knowledge. AI encompasses not only technology but also ethics, philosophy, and finance. This shows how it is a phenomenon reshaping our contemporary world at a very rapid pace.

Budget for a good course, preferably in person, for you and your staff. Many agencies offer them, or you can hire a consultant to come to your company.

Ask to delve into generative AI, LLMs, and prompt techniques, as well as the various activities you can achieve with good prompts: brainstorming, content creation, data analysis, social media calendars, bulk post creation, new ideas for clients, competitor analysis, writing guides and articles, reviewing your texts, generating images and video clips, translations, and videos in other languages.

There is no limit. Because of this, we must learn to guide it and make it our own. You won't regret it!
