Essential AI Map for Print Service Providers and Creatives
Introduction
Artificial Intelligence (AI) is among the most talked-about topics today across many fields. However, I often notice that everyone, regardless of background, is expected to instantly understand the acronyms, technical terminology, and contexts associated with AI.
I never take anything for granted, especially when discussing something relatively new that evolves exceptionally quickly. AI is still a somewhat nebulous concept for many—not due to ignorance, but simply because they lack time or opportunities to delve deeper into it.
Just consider the dozens of innovations that have emerged in recent months in the world of AI models—new platforms for images, videos, or high-performance smaller models.
I've asked myself how a printer or creative professional can keep up with all this, especially when we're focused daily on creating graphic projects, ensuring colour quality, configuring printers, and achieving the final result. In short, we concentrate on what helps us grow professionally and personally, while trying to catch up on what's happening around us.
However, AI is not only a tumultuous present but also the near future. It's here to stay, and we need to be able to contextualise every new development, whether it's a new standalone platform for creating images or a feature integrated into our graphic or project-management software. Understanding AI will also help us evaluate proposals from suppliers and consultants about using our data for business applications.
This article aims to explain what we need to know about AI, its components, how they relate, and their connections.
When we think of AI, we can imagine it as a series of nested subsets: AI encompasses increasingly specific disciplines. We'll use this analogy while acknowledging that we'll simplify some concepts, each of which anyone can explore further.
The AI Map and Its Main Components
Artificial Intelligence (AI)
Let's start from the beginning with the concept of Artificial Intelligence. As we already know, AI is a discipline within computer science that focuses on creating systems capable of simulating human behaviour, such as making decisions, suggesting solutions, recognising images, or understanding natural language. AI aims to enable machines to perform tasks that would typically require human intelligence. AI is a broad umbrella that covers various other concepts, or if you prefer, think of it as classical physics that we study in school, which encompasses other subjects like thermodynamics, magnetism, or mechanics.
The Difference Between Traditional AI and Machine Learning
Before exploring Machine Learning, which is the first step within AI, it's essential to understand the difference between traditional AI systems—used almost exclusively before the advent of Generative AI—and those based on Machine Learning, i.e., systems that learn independently.
In traditional AI systems, also known as rule-based or conditional systems, 'if-then' rules (if this happens, then do this) are used, where every possible scenario is pre-programmed by an expert (hence also called expert systems). This approach leverages knowledge and experience in a particular field (known as a knowledge base) and provides solutions based on a series of sequential responses. For instance, such systems have been used in medical diagnostics, product configuration, maintenance procedures and fault identification.
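To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of what a rule-based approach looks like: every symptom-to-solution pairing has to be written out by hand, and anything outside the knowledge base simply has no answer. The faults and remedies below are invented examples, not a real diagnostic table.

```python
# A minimal sketch of a rule-based ("if-then") expert system for fault identification.
# Every scenario must be pre-programmed by an expert, which is why these systems are hard to scale.

def diagnose_print_fault(symptom: str) -> str:
    # The "knowledge base": hand-written rules mapping a symptom to a suggested action.
    rules = {
        "banding": "Check the print heads and run a nozzle cleaning cycle.",
        "colour shift": "Re-profile the media and verify the ICC profile in use.",
        "head strike": "Check the media thickness setting and platen height.",
    }
    # If the symptom is not covered by a rule, the system has no answer of its own.
    return rules.get(symptom, "Unknown fault: escalate to a human expert.")

print(diagnose_print_fault("banding"))
print(diagnose_print_fault("ghosting"))  # not in the knowledge base, so no real help
```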
The problem with these systems is that they must be revised and reprogrammed every time they need to adapt to a new scenario. For complex situations with many variables, this is a real limitation: considerable resources are required without gaining any flexibility. Hence the need for more effective alternatives that can handle new or unexpected situations autonomously.
We mentioned traditional AI because, although less common than in the past, conditional models are still used where data volumes are not large, complexity is manageable, or there are cost constraints on projects.
Machine Learning (ML)
Within the umbrella of AI, we find Machine Learning. ML is a sub-discipline of AI that focuses on algorithms and models that allow machines to 'learn' from data. In other words, instead of explicitly programming every step, ML systems identify patterns and make predictions or decisions based on past data.
An example of how this works is a music app suggesting new songs based on previous listening habits or traffic predictions based on historical data. Here, Machine Learning techniques are being utilised.
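As a toy illustration of "learning from past data", the sketch below uses scikit-learn to fit a simple model on a handful of invented historical print jobs and then estimate a new one. The numbers are made up purely for illustration; a real system would use far more data and features.

```python
# A minimal sketch of Machine Learning: the model infers a pattern from past examples
# instead of being programmed with explicit rules.
from sklearn.linear_model import LinearRegression

# Past jobs: [number of prints, number of colours] -> hours the job took (invented data)
X = [[100, 1], [500, 4], [1000, 4], [2000, 6], [5000, 6]]
y = [1.0, 3.5, 5.0, 9.0, 20.0]

model = LinearRegression().fit(X, y)   # the system "learns" the relationship from the data
print(model.predict([[1500, 4]]))      # estimate for a new, unseen job
```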
ML has been revolutionary in its field, particularly with the adoption of increasingly sophisticated neural networks, which have opened up further, more specialised applications, as we will see below.
Deep Learning (DL)
Within Machine Learning, we find Deep Learning, a verticalisation of Machine Learning applications. Deep Learning utilises highly sophisticated artificial neural networks with multiple layers (hence the name Deep), structures inspired by the human brain and implemented through mathematical models to analyse large amounts of data and make precise predictions.
DL, with the vast amount of available data and the reduction in data processing costs, has enabled many recent innovations in AI, such as image recognition or natural language comprehension, which are too complex to be managed through standard ML techniques. When you upload a photo to a social network, the system automatically recognises faces in the images, and that's Deep Learning in action.
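The sketch below, using the Keras API, shows what "multiple layers" means in practice: a small network with an input layer, two hidden layers, and an output layer. The layer sizes are arbitrary examples; real Deep Learning models for tasks like face recognition are far larger.

```python
# A minimal sketch of a multi-layer ("deep") neural network using Keras.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(784,)),              # e.g. a 28x28 pixel image, flattened
    keras.layers.Dense(128, activation="relu"),    # first hidden layer
    keras.layers.Dense(64, activation="relu"),     # second hidden layer
    keras.layers.Dense(10, activation="softmax"),  # output: one score per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()  # prints the layers and the number of trainable parameters ("model size")
```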
Generative AI
A subset of Deep Learning that has gained immense popularity is Generative AI. This subfield of AI uses Deep Learning models to create new content, such as images, text, audio, video, or code, from existing data.
Generative AI does not just make predictions but can generate something completely new based on user commands, often provided as 'prompts' in natural language.
Within Generative AI, we find the most popular applications today, from ChatGPT to MidJourney, Claude to DALL-E, Sora, and Suno, covering everything from text and images to audio, video, and code.
Generative AI utilises highly sophisticated algorithms built on neural architectures like Transformers, which rely internally on many layers and millions of parameters to deliver the expected results.
Today, the trend is towards models with billions of parameters because, broadly speaking, the higher the number, the better the model's performance (depending on the specific task). These models are trained on enormous amounts of data to identify and recreate patterns without human intervention, and you interact with them through prompts, i.e., requests in natural language, as we discussed in the previous article.
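In practice, "interacting through prompts" can happen in a chat window or in a few lines of code. The hedged sketch below follows the OpenAI Python SDK (v1.x) purely as an example; the model name is illustrative, other providers expose similar but not identical interfaces, and an API key is assumed to be configured.

```python
# A hedged sketch of sending a natural-language prompt to a hosted generative model.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whatever your provider offers
    messages=[{"role": "user",
               "content": "Suggest three taglines for a packaging mock-up service."}],
)
print(response.choices[0].message.content)
```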
It's important to remember that while AI offers very powerful tools, human creativity and the direction of creative choices based on client needs remain central and irreplaceable. AI is an amplifier of creative and productive capabilities, not a substitute for human professionalism; it simply lets professionals experiment more quickly and at affordable costs.
LLM - Large Language Models
We have seen that Generative AI is divided into different types of applications. Text-based AI is powered by LLMs, or Large Language Models, a specific type of Generative AI focused mainly on processing and generating text. LLMs, such as GPT-4, are models trained on enormous amounts of text to understand and generate natural language fluently. These models can write articles, answer questions, translate texts, create social media posts, generate ideas for brainstorming, develop marketing and sales strategies, and more.
The most famous LLM-based tools include ChatGPT by OpenAI, Gemini by Google, Claude by Anthropic, Le Chat by Mistral, Perplexity, Grok by xAI, and Copilot by Microsoft. It's important to note that these models are now trained to work not only with text but also with images and programming code (as well as various types of files, such as PDFs and Excel spreadsheets). That's why they're called multimodal models. For instance, ChatGPT includes DALL-E for image generation.
Today, however, there is also a growing trend towards systems based on, and trained with, your own data, which is not shared externally but remains owned by the company. In this case, we refer to SLMs, or Small Language Models, which mainly process company-specific data such as data from the CRM, CMS, or ERP, knowledge bases on SharePoint, or simple internal files (e.g., product or service sheets). While these models are less powerful than LLMs in terms of parameters, they are highly efficient. They can be used where computational resources are limited or where you want to keep your data private. SLMs are seeing significant growth thanks to their ability to run on less powerful devices while delivering excellent performance on specific tasks.
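As an illustration of keeping things local, the sketch below uses the Hugging Face transformers library to run a small, openly available model on your own machine, so prompts and company data never leave it. The model name is only an example of a small instruction-tuned model, not a recommendation, and downloading it on first run is assumed.

```python
# A hedged sketch of running a small language model (SLM) locally with Hugging Face transformers.
from transformers import pipeline

# Example of a small, openly available model; substitute whichever local model you prefer.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

result = generator(
    "Write a short product description for a recycled PVC banner material.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])  # the prompt never leaves your machine
```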
Another thing to know about LLMs and SLMs is that there are open-source models, such as Llama by Meta, which users can download, adapt, and build upon, and which improve through community experience and sharing. There are also closed-source models, like ChatGPT or Gemini, whose code and training are controlled by the company that owns them.
A typical example of using LLMs is an advanced chatbot that can work on a database created and managed by the company to control the responses. The chatbot can be based on open or closed-source technology.
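A heavily simplified sketch of that idea: retrieve the most relevant entry from a small, invented company knowledge base, then build a prompt that grounds the model's answer in it. The final call to an LLM or SLM is left as a placeholder, since it depends on the (open or closed source) technology you choose.

```python
# A simplified sketch of a chatbot grounded in company data: retrieve the most relevant
# internal document for a question, then pass it to a language model as context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example of an internal knowledge base.
knowledge_base = [
    "Our standard turnaround for roll-up banners is 48 hours.",
    "We print on 450 gsm PVC and 510 gsm blockout banner material.",
    "Files should be supplied as PDF/X-4 with 3 mm bleed.",
]

question = "What file format should I send for a banner?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)
query_vector = vectorizer.transform([question])
best_doc = knowledge_base[cosine_similarity(query_vector, doc_vectors).argmax()]

prompt = f"Answer using only this company information:\n{best_doc}\n\nQuestion: {question}"
# At this point, `prompt` would be sent to your chosen LLM or SLM to phrase the reply.
print(prompt)
```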
DM - Diffusion Models
Diffusion models are architectures for generating images, videos, and audio. They are extremely important for anyone working in graphics or creativity because most generative AI software for images uses this approach. Image creation happens through a request made in natural language (a prompt), just as with LLMs, with which diffusion models share many common solutions.
The software generates the requested image through specific operations (diffusion and denoising). Parameters can be added for particular reproductions (e.g., camera type or focal length) or to work on image details.
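For those who prefer code to a user interface, the hedged sketch below generates an image from a prompt with the Hugging Face diffusers library. The model name, step count, and guidance value are example settings, and a GPU is assumed; hosted tools expose the same ideas (prompt, negative prompt, detail controls) through their own interfaces.

```python
# A hedged sketch of text-to-image generation with a diffusion model via Hugging Face diffusers.
import torch
from diffusers import StableDiffusionPipeline

# Example model name; any compatible diffusion checkpoint could be used instead.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "studio photo of a folding-carton packaging mock-up, 50mm lens, soft light",
    negative_prompt="blurry, distorted text",
    num_inference_steps=30,   # how many denoising steps to run
    guidance_scale=7.5,       # how strongly to follow the prompt
).images[0]
image.save("mockup_concept.png")
```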
Stable Diffusion, MidJourney, Imagen, and Adobe Firefly are all tools that use diffusion models for image creation. Remember, these models are now essential for anyone working in graphics or digital printing: they offer new ways to explore and innovate graphic solutions, create mock-ups and prototypes of graphic applications (e.g., packaging proposals or interior graphics), and produce captivating client presentations in a short time.
Conclusion
Artificial Intelligence is a vast and complex field, but not necessarily obscure. It's just a matter of knowing that it comprises layers encompassing increasingly specific and powerful concepts, as we've seen above.
From general AI to LLMs, each concept plays a fundamental role in making modern technologies more intelligent and more capable of helping us in our daily tasks. Based on what we see today, our primary focus as PSPs and creatives should be Generative AI.
Understanding how specific applications fit into the wider AI picture can help clear up confusion and doubts, and allows for more informed discussions with potential suppliers when considering system purchases or how to use these tools effectively.
As mentioned, each specific topic can be explored further, but just as you don't need to be a mechanical expert to drive a car, understanding the fundamentals of AI can be sufficient to navigate its terminology and applications easily.
Glossary
Artificial Intelligence (AI)
A discipline within computer science that focuses on creating systems capable of simulating human behaviour, such as making decisions, recognising images, or understanding natural language.
Machine Learning (ML)
A sub-discipline of AI that focuses on algorithms and models that allow machines to 'learn' from data. Rather than being programmed for every possible scenario, ML systems identify patterns and make predictions based on past data.
Deep Learning (DL)
A specialisation within Machine Learning that uses highly sophisticated artificial neural networks inspired by the human brain to analyse large amounts of data and make accurate predictions. It is the foundation of many recent innovations in AI.
Generative AI (GenAI)
A branch of AI that uses Deep Learning models to create new content, such as images, text, audio, video, or code, from existing data. Users interact with these systems using 'prompts' in natural language.
Large Language Models (LLM)
AI models specialised in handling and generating natural language. These models, such as GPT-4, are trained on enormous amounts of text and can perform various tasks, from writing articles to translating texts.
Small Language Models (SLM)
Lighter and more focused versions of LLMs that can be trained on specific company data or limited resources. They are useful when you want to maintain control over your data or when computational resources are limited.
Diffusion Models (DM)
Architectures mainly used for generating images, videos, and audio. These models operate through a process of 'diffusion' and 'denoising' to create high-quality visual and auditory content based on textual descriptions or other input information.