The Imitation Game and the Future of Creativity, early notes

Apart from this journalist's limited analytical capacity to define a symbolic stance and its capacity to represent or express the state of the art in a specific symbolic language such as music, within a highly redundant domain like a "local" pop music ritualisation (the Sanremo Song Festival),

we are facing a wider and deeper human fragility (and, at the same time, our exceptional power!):

fashion mechanisms and fashion cycles, imitation games, identitarian rituals and groups are part of the solidarity structure of humankind, a representational and referential reductionism and a collective (mind) necessity we can never underline enough.


What is Imitation?

Imitation, from a logical perspective, can be understood as the process or act of copying or reproducing the properties, behaviors, or actions of a model. In the context of a popular music piece, imitation involves replicating elements of that piece, such as its melody, rhythm, harmony, or style, either exactly or with variations. The logical structure of imitation in this context can be analyzed through several key components:

Model or Original: This is the entity being imitated—in this case, the popular music piece. It serves as the reference point or standard for the imitation process.

Imitator or Copy: The entity that performs the act of imitation. This could be another music piece, an artist, or software that generates music, attempting to replicate or embody elements of the original piece.

Attributes to be Imitated: These are the specific qualities, features, or aspects of the original that are subject to imitation. In music, these could include the following dimensions:

Melody: The tune or main thematic element of the piece.

Rhythm: The pattern of beats or the temporal structure of the music.

Harmony: The combination of simultaneously sounded musical notes to produce chords and chord progressions.

Lyrics: The words of the song, if any, and their stylistic delivery.

Production Style: The particular way in which the music is produced, including instrumentation, sound effects, and mixing techniques.

Fidelity of Imitation: This refers to the degree of accuracy or closeness to the original that the imitation achieves. Fidelity can range from exact replication to loose interpretation, depending on the intent and context of the imitation.

Purpose of Imitation: The rationale or motivation behind the imitation. This can vary widely, from homage and study to parody, innovation (by incorporating elements into new works), or even plagiarism.

Transformation and Innovation: Often, imitation is not just a passive replication but also involves transformation, adaptation, or innovation. This can lead to new creative works that, while rooted in the original, offer something new and distinct.


In a logical analysis of imitation, it's important to distinguish between mere copying and creative imitation. The former involves a direct and often unthinking replication of the original without significant alteration. In contrast, creative imitation acknowledges the source while transforming it in a way that adds new value or perspective, thereby contributing to the evolution of the art form.

Applying this framework to a popular music piece, one could analyze how other artists or works have imitated that piece by identifying specific attributes that have been copied or adapted, evaluating the fidelity of the imitation, and examining the purpose and outcomes of such imitations in terms of their contribution to musical innovation or discourse.
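Purely as an illustrative sketch (not something proposed in the original notes), the components above can be encoded in a small data structure; all names, the Purpose categories and the 0.8 fidelity threshold are assumptions chosen for the example:

    from dataclasses import dataclass, field
    from enum import Enum


    class Purpose(Enum):
        """Possible motivations behind an act of imitation (see the list above)."""
        HOMAGE = "homage"
        STUDY = "study"
        PARODY = "parody"
        INNOVATION = "innovation"
        PLAGIARISM = "plagiarism"


    @dataclass
    class ImitationAnalysis:
        """One imitation relation between an original piece and a derived work."""
        original: str                                    # the model or reference piece
        imitator: str                                    # the copying artist, piece, or system
        attributes: list = field(default_factory=list)   # e.g. melody, rhythm, harmony, lyrics, production
        fidelity: float = 0.0                            # 0.0 = loose interpretation, 1.0 = exact replication
        purpose: Purpose = Purpose.STUDY

        def is_creative_imitation(self, threshold: float = 0.8) -> bool:
            """Rough heuristic: below the fidelity threshold and not plagiarism,
            the copy transforms its source rather than merely replicating it."""
            return self.fidelity < threshold and self.purpose is not Purpose.PLAGIARISM


    # Example: an acoustic cover that keeps melody and lyrics but changes the production style.
    cover = ImitationAnalysis(
        original="Original pop song",
        imitator="Acoustic cover",
        attributes=["melody", "lyrics"],
        fidelity=0.6,
        purpose=Purpose.HOMAGE,
    )
    print(cover.is_creative_imitation())  # True: transformation rather than mere copying

Such a toy model obviously cannot capture the cultural weight of the framework; it only makes the distinction between mere copying and creative imitation operational enough to reason about.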


And now, the old mantra: what is the human factor we so often refer to? What is this journalist afraid of, when reassured, and reassuring his readers, about the commitment of a bot - designed by humans, by the way - to respecting human skills and capacities? What are the meaningful insights and skills of a human formulating a symbolic discourse?

First of all, the capacity to refer to a tradition of practices that led to the very point of conceiving and implementing that very robot!

Plus, the capacity to constantly introduce innovation, a varying degree of creative imitation, based on a socially embedded awareness of the allowed degree of replication versus innovation: an innovation agenda that carries strong phenomenological and functional expectations within the reference society - which, today, is the whole world. This capacity for novelty and radical innovation reaches far beyond what even the current bot designers could imagine, let alone the bots themselves. As philosophers and epistemologists involved in AI often repeat, it is one thing to quote and interpolate by copy-pasting - the amazingly intelligent and efficient librarians and tutors that LLMs are - and quite another for AI to extrapolate and introduce relevant novelty into the generative discourse.

An epochal question lies underneath: how many humans do we know who can introduce this overall level of novelty into their daily (data/practice) flows? This is the real issue here.


Back to the imitation game: why bots are so good at it and what we expect from them

The ability of Large Language Models (LLMs) like me to imitate human-standardized behavior involves several factors that extend beyond merely having access to an abundance of data on average human performances. Let's explore these factors through a theory-of-knowledge lens:

Abundance of Data: Indeed, the primary strength of LLMs comes from training on vast datasets composed of human-generated text. This data encompasses a wide range of human knowledge, discourse, and interaction patterns, allowing the model to learn and replicate the average or common patterns of human behavior, language, and thought processes. However, the ability to imitate such behaviors is not solely attributable to the volume of data but also to the quality and diversity of the data, which teaches the model about the variability in human behavior and language.

Conservative Capacity of Current AI: LLMs, by their design, are conservative in their approach to generating responses. They tend to prefer statistically common or average outputs over novel or untested ones. This conservatism is not a limitation per se but a feature that ensures reliability and coherence in the model's outputs. It reflects a current understanding of AI's ability to deal with robust features of language and behavior—prioritizing accuracy and safety (minimizing harm or misinformation) over innovation in many cases.
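To make this "conservative" preference concrete, here is a minimal, hypothetical sketch of temperature sampling from a model's output logits; the toy logits and vocabulary are invented for the example, but the mechanism (low temperature concentrating probability on the statistically most common continuation) is the standard one:

    import numpy as np

    def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
        """Sample a token index from model logits.

        Low temperature sharpens the distribution towards the most common
        continuation (the conservative behaviour described above); a higher
        temperature flattens it and admits rarer, riskier choices."""
        scaled = logits / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())      # numerically stable softmax
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    # Toy logits: token 0 is the most frequent continuation seen in training data.
    logits = np.array([4.0, 2.0, 1.0, 0.5])
    conservative = [sample_next_token(logits, temperature=0.2) for _ in range(1000)]
    adventurous = [sample_next_token(logits, temperature=1.5) for _ in range(1000)]
    print(sum(t == 0 for t in conservative) / 1000)  # ~1.0: almost always the common token
    print(sum(t == 0 for t in adventurous) / 1000)   # noticeably lower: more variety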

Human Biases and Expectations: The training data for LLMs inevitably includes biases present in human-generated content. These biases can shape the model's understanding of what constitutes "normal" or "standard" behavior, potentially reinforcing stereotypes or commonalities at the expense of diversity and innovation. Moreover, humans may not always require or appreciate extrapolation or novel insights from AI, particularly in contexts where familiarity and predictability are valued over novelty. This can lead to a feedback loop where AI models are trained to cater to existing expectations and biases, rather than challenging them or promoting meta-learning.

Limitations in Current AI's Meta-Learning and Extrapolation Capabilities: While LLMs demonstrate remarkable abilities in pattern recognition, language understanding, and even certain forms of reasoning, their capacity for meta-learning—learning how to learn—and extrapolation beyond the data they have been trained on is limited. Meta-learning and genuine innovation require an understanding of underlying principles and the ability to apply those principles in novel contexts, a challenge that current AI models can struggle with. This limitation is partly due to the nature of the models' training, which optimizes for performance on seen data or tasks closely related to that data, rather than for the ability to generate fundamentally new insights or solutions.

Evolving Capabilities through Advanced Techniques: There is ongoing research aimed at enhancing the ability of AI to perform meta-learning, extrapolate beyond its training data, and handle more robust features of human behavior and language in innovative ways. Techniques such as few-shot learning, transfer learning, and reinforcement learning from human feedback are being explored to improve the adaptability and creativity of AI models, moving them towards a more nuanced understanding and imitation of human behavior that can go beyond mere replication of average performances.
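As one small, hedged illustration of the first of these techniques: few-shot learning in practice often amounts to placing a handful of worked examples in the prompt, with no weight updates at all. The task, the example reviews and the complete() placeholder below are all invented for the sketch and do not refer to any specific API:

    # A minimal few-shot prompting sketch: the model is steered towards a task
    # by a couple of worked examples included in the prompt itself.

    FEW_SHOT_EXAMPLES = [
        ("I loved this record, played it all week.", "positive"),
        ("Flat production and lazy songwriting.", "negative"),
    ]

    def build_few_shot_prompt(new_review: str) -> str:
        """Assemble a prompt containing the worked examples plus the new case."""
        lines = ["Classify the sentiment of each music review."]
        for review, label in FEW_SHOT_EXAMPLES:
            lines.append(f"Review: {review}\nSentiment: {label}")
        lines.append(f"Review: {new_review}\nSentiment:")
        return "\n\n".join(lines)

    def complete(prompt: str) -> str:
        """Placeholder for whatever text-generation API or model is available."""
        raise NotImplementedError

    # Usage (once complete() is wired to a real model):
    # print(complete(build_few_shot_prompt("Surprisingly bold arrangement, I keep coming back to it.")))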


The capacity of LLMs to imitate human-standardized behavior is multifaceted, influenced by the abundance and nature of the training data, the conservative design of current AI models, human biases and expectations, and the inherent limitations of AI in meta-learning and extrapolation.

Advancements in AI research and development are gradually addressing these issues, aiming to create models that can more effectively understand, innovate, and extrapolate in human-like ways.



On the other hand, the capacity for extrapolation in AI models, including Multimodal Models (MMLs) and AI systems more broadly, varies significantly across different types of tasks, models, and domains. Extrapolation, in this context, refers to an AI's ability to apply learned knowledge or patterns to new, unseen situations or data points that fall outside the distribution of its training data. Let's explore the state of this capacity in various AI domains:

1. Large Language Models (LLMs)

As said, large language models (like GPT-4) have demonstrated impressive capabilities in generating human-like text, answering questions, and even performing some forms of reasoning and problem-solving. However, their capacity for true extrapolation is limited. While they can generate plausible-sounding responses based on patterns seen during training, their ability to generate novel insights or to understand and apply new concepts discovered post-training is constrained. They tend to perform best on tasks closely related to their training data and struggle with truly novel scenarios or with reasoning that requires a deep understanding of causal relationships.

2. Multimodal Models (MMLs)

Multimodal models, which process and understand multiple types of data (e.g., text, images, audio), have made significant progress in areas like image and text generation, object recognition, and more sophisticated tasks like visual question answering. These models can sometimes appear to extrapolate by applying learned concepts from one modality to another (e.g., generating a relevant image from a text description). However, their extrapolation is often based on recognizing and applying patterns across modalities rather than on a deep, conceptual understanding of the content. They excel in tasks that involve correlation and pattern recognition across modalities but face challenges in tasks requiring deep, domain-specific reasoning or creativity beyond the scope of their training data.

3. Reinforcement Learning (RL) Models

RL models, which learn to make decisions by optimizing for a reward signal, have shown some of the most clear-cut capacities for extrapolation, particularly in closed environments with well-defined rules, such as games (e.g., AlphaGo, OpenAI Five). By exploring and exploiting the space of possible actions, these models can discover strategies and solutions that may not have been explicitly present in their training data. However, their ability to extrapolate in more open-ended, real-world scenarios is still a subject of ongoing research and development.
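To keep the claim about exploration concrete, here is a deliberately tiny, hypothetical sketch of tabular Q-learning on a toy corridor; the environment, reward and hyperparameters are invented for the example, but it shows the limited sense in which an RL agent "extrapolates": the winning strategy is never given as data, it is discovered by trial and error within a closed, well-defined world:

    import random

    # Toy corridor: states 0..6, reward only at the right end (state 6).
    N_STATES, GOAL = 7, 6
    ACTIONS = (-1, +1)                           # step left or step right
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.9, 0.2        # learning rate, discount, exploration rate

    def greedy(s: int) -> int:
        """Best-known action in state s, breaking ties at random."""
        best = max(q[(s, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if q[(s, a)] == best])

    for _ in range(2000):                        # training episodes
        s = 3                                    # start in the middle of the corridor
        for _ in range(100):                     # cap episode length
            a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
            s_next = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s_next == GOAL else 0.0
            target = reward + gamma * max(q[(s_next, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s_next
            if s == GOAL:
                break

    # After training, the greedy policy heads right from every non-goal state.
    print({s: greedy(s) for s in range(GOAL)})

Outside such closed worlds, where states, actions and rewards are not cleanly defined, this discovery mechanism is exactly what becomes hard to apply, which is the open research question mentioned above.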

4. Predictive and Generative Models

Predictive models in areas like weather forecasting, financial modeling, and generative models in drug discovery and material science are designed to extrapolate from known data points to make predictions about the future or unseen conditions. The success of these models varies widely depending on the complexity of the domain and the quality of the data. In some cases, they have demonstrated remarkable accuracy and innovation, but they often rely heavily on the underlying structure and regularities of the domain.


Challenges and Imminent Future Directions

The main challenges in improving the extrapolation capabilities of AI models include:

Data Limitations: AI models are often limited by the diversity and quality of their training data. They struggle with situations that are underrepresented in or absent from the data. Handmade AI can offer highly specialised datasets and bring in human/machine interaction perspectives, with a strong data augmentation and expansion approach and a novelty measure that becomes part of the training data.

Causal Reasoning: Many AI models are adept at identifying correlations but lack the ability to understand or reason about causality, which is crucial for true extrapolation (see the sketch after this list). Tracing the causality of correct outputs, of mistakes, and of blends of the two will help make explicit the higher-level meta-learning and the granular causal logical chains.

Generalization: Moving from specific instances to general principles remains a challenge for AI, particularly in complex or abstract domains. Abstract datasets are still an open topic, together with their advanced transfer learning methodologies.
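As a small, invented illustration of the causal reasoning gap referred to above (the data and coefficients are fabricated for the sketch, not drawn from any study): a purely correlational model happily learns a plays-to-sales relationship that vanishes the moment one intervenes on plays without touching the hidden confounder.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hidden common cause: marketing spend drives both radio plays and sales.
    marketing = rng.normal(size=10_000)
    plays = 2.0 * marketing + rng.normal(size=10_000)   # plays <- marketing
    sales = 3.0 * marketing + rng.normal(size=10_000)   # sales <- marketing (not <- plays)

    # A correlational model sees a strong plays->sales slope (about 1.2 here) ...
    observed_slope = np.polyfit(plays, sales, 1)[0]
    print(f"observed plays->sales slope: {observed_slope:.2f}")

    # ... yet the true causal effect of intervening on plays alone, do(plays + 1),
    # is zero in this toy world: forcing extra plays does not move sales at all.
    # A model trained only on correlations extrapolates the wrong answer here.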


Research in AI is actively addressing these challenges through approaches like causal modeling, few-shot and zero-shot learning (learning from very few or no examples), and the development of more sophisticated architectures that can better understand and manipulate abstract concepts. The goal is to create models that can genuinely understand and apply logical knowledge in novel ways, significantly enhancing their capacity for extrapolation.


Imitation, Education

When it comes to formulating symbolic stances, it is sufficiently clear that imitation at the lowest range of its (ever-changing) definition is copying, and that it is simply not possible to copy an item one-to-one, at least not outside a specific licensing scheme or the conditions in which reference to the original creator is unknown or impossible.

This happens a lot in educational processes. We could say that popular media cultures are still undergoing a heavy educational or liberation process: a great deal of copying and low-level imitation happens in popular culture phenomena; sclerosis and adoption reductionism are embedded in communication and cognitive patterns; technically and industrially, the solidarity and wide synchronisation factors keep the novelty factor very, very low. It is a reason of state, and breaking these rules is perceived as "putting in danger" the wider "sentimental" community - be it a TV show audience or a hardliner hip-hop listener.

For sure this goes along with demographic pressure and the need for instant identification. No judgement, just quantitative analysis.


Copyright: what now, in the age of generalist copy-paste AI

A fundamental right of our multi-century legal culture is copyright: it asks where the boundaries of imitation lie, and introduces a certain degree of protection so that innovation can build on prior art without having its potential inhibited. Novelty and innovation are always possible, but the degree of innovation required to step outside the "IP infringement boundaries" can paradoxically conflict with the social demand for intense imitation, and be mirrored by technology developments that serve reductionist, copy-paste approaches - such as current generalist generative AI - fostering, and being fed back by, its (statistically conservative) social application schemes.

Is this what makes us human? Obviously, as previously underlined, not exhaustively, nor in educational terms.

But then, as a strong synthesis of this discourse: how can you be surprised that most human behaviors are highly replicable by robots?

How do you escape the trap? Bots intended by humans to simplify and accelerate interaction with, and simulation of, objective, complex and dangerous states of reality are mainly used for their current high systematic imitation capacity, for all the reasons reported above.

Bots, and human culture in general, intended for and devoted to simulating and predicting the unknown, are used for their unlimited, super-efficient and conservative imitation capacity, in social environments that promote strong, fashion-inspired imitation - a practice that is then legally condemned.

Innovation is indeed what makes us more adaptive and resilient; but it is also what makes humans uncertain and insecure, because innovation always takes the high-risk existential road and by definition leads to uncertain paths.


Young Cultures, New (Synthetic) Generations, next memetic determination

Let’s take the exemplary case of young cultures, with which popular cultures and their phenomena are always and cyclically associated before becoming widely exploited, systematised industrial phenomena.

It's true that chaos often means "young": is chaos a trigger, a preparation scheme? A hectic training context from which to emerge through selection and personal synthesis? It's true that while you're growing up it's fantastic to be exposed to, even stunned by, the amount of stimuli and the potential of the open scenarios ahead of you.

But is it really true that you have enough time to disambiguate and shape your way out of the media forest you are exposed to? Initiation time, educational time: how many years can they last? Is this uncertain "formation time" a burden for human societies around the globe? (We definitely feel the "education parenthesis" has not been so heavily discussed and problematised since the 1960s.)

[The attack on the Supernova rave on October 7th, 2023, was particularly cruel and disgusting, merciless, and apparently generational, "peer to peer": was it expressing hatred for a specific time of suspension that the Israeli youngsters were exhibiting? A time of innocence that in other societies and cultures (the Palestinian ones specifically) is not affordable?]

Is it, finally, a very urgent necessity to have strategies for youngsters so that they can promptly adapt and react to the accelerating, proliferating media complexity surrounding them? *

Instead of (investing too much time in) imitating past, existing patterns and past logical and symbolic cycles - practices that AI is already well trained to execute - why not educate young humans to compose, inter-compose, experiment and meta-compose, radically and creatively correlating blocks of consolidated knowledge, and to transpose metaphorically, and so perform their (experimental) way out of multitudes of stimuli, (increasing) data quantities and expanding complexity models?

In the meantime, #acceleration, imitative #proliferation, reductionist fashion-able AI and traditional algorithmic content generation modalities, and assistive, automated, efficient decision-making are scaling up enormously.


Back to the beginning, AI Tooling

Tools like SUNO - well done, first of all! They are clearly related to internet music/audio data scraping and copy-paste, with deep imitation and #ipinfringement a constant risk, though also a deeply wired and understandable cognitive assumption, absolutely matched by their marketing tests, their specific "AI prompting" and the generated content they propose.

The very music festival I mentioned, the Sanremo Italian Song Festival, is full of imitative clichés, very low in novelty-making and low in cognitive demand - dress and fashion design apart.

This said, I'm not for a moment diminishing the relevant topic of the evolving relationship between human capabilities and AI advancements, particularly in the context of how automation and AI might reshape the value of different human skills in the workforce and society. The concept of "novelty introduction" as a skill set—essentially, the ability to generate new ideas, innovate, or create original content—can be impacted by AI, but the outcomes are nuanced and depend on how we integrate AI tools into our activities.

  • Impact on Low-Level "Novelty Introduction"

Marginalization Risks: There is a valid concern that individuals whose roles are focused on tasks that can be easily automated or replicated by AI—including some forms of basic novelty introduction, like generating variations on existing templates or ideas—might find their skills less in demand. As AI becomes more capable of producing novel content within certain parameters, tasks that require basic creativity but not deep insight or high levels of originality could be increasingly performed by AI.

Complementarity and Augmentation: However, rather than simply marginalizing human effort, AI can also complement and augment human creativity. For instance, AI can handle more routine or data-intensive aspects of creative work, freeing humans to focus on higher-level conceptualization, strategy, and innovation. This synergy can enhance productivity and creativity, enabling humans to achieve outcomes that were previously unattainable.


  • Promoting Extrapolation and Abstraction

Encouraging skills in extrapolation and abstraction addresses a crucial area where AI still lags behind human capabilities: the ability to generalize from limited information, to think abstractly about problems, and to apply knowledge in novel and diverse contexts. Education and training that emphasize these skills can prepare individuals to excel in areas where human judgment, creativity, and strategic thinking are paramount.

Adaptive and Strategic Thinking: By focusing on extrapolation and abstraction, education can foster adaptive and strategic thinkers who can navigate complex, uncertain environments—skills that are invaluable in a rapidly changing world and less susceptible to automation.

Innovation and Creativity: These skills are also at the heart of innovation and creativity, enabling individuals to conceive of entirely new solutions, ideas, or artistic expressions that AI cannot replicate, given its current limitations in understanding context, emotion, and the subtleties of human experience.

Interdisciplinary Learning: Encouraging an interdisciplinary approach to learning (and to human/machine co-learning practices) can further enhance the ability to abstract and extrapolate by exposing individuals to a wide range of ideas, problems, and methods of inquiry. This approach fosters the kind of flexible thinking and problem-solving that is likely to remain a uniquely human advantage for the foreseeable future.


A possible conclusion: while there's a high risk that AI could marginalize roles focused on low-level novelty introduction,

there's also a significant opportunity for AI to push humans towards higher-level cognitive functions.

By emphasizing education and training in extrapolation, abstraction, and interdisciplinary learning, we can prepare individuals to work alongside AI, leveraging its capabilities to enhance human creativity and innovation rather than replace it.

This perspective not only mitigates the risks of marginalization but also opens up new avenues for human development and achievement in the AI era.


* New generations and the educational burden

There are emerging indications that youngsters problematize and question the long education process they have to go through; in Western societies there is a strong push to accelerate and pragmatize the schooling process, especially in secondary schools. This is most probably a consequence of the overall tech acceleration, and of the simple fact that AI has deeply impacted youngsters' imaginary and pragmatic landscape before they have been shaped by pre-existing learning models, whether or not these prove efficient in equipping them for imminent life challenges.

  • In general, concerns regarding the length and effectiveness of education systems, particularly in light of technological advancements and societal changes, are multifaceted. While there is a broad spectrum of issues affecting educational systems worldwide, specific discussions on accelerating the educational process, especially in secondary schools, are intertwined with larger educational challenges and innovations. The World Bank highlights several persistent problems within education systems, particularly in low- and middle-income countries, that have been exacerbated by the COVID-19 pandemic. These include low-quality instruction, an inadequate supply of teachers, and a failure to implement evidence-based or pro-equity policies. The pandemic significantly impacted student learning, potentially increasing "learning poverty" and adversely affecting this generation's future earnings.
  • Additionally, there is a noted inadequacy in early childhood care and education (ECCE), which is foundational for lifelong learning and emotional well-being. Historically, education has evolved significantly over the last century, reflecting societal, economic, and technological changes. For instance, the introduction of mandatory education laws, the provision of free public education, and reforms aimed at ensuring inclusivity and equal opportunity for all students regardless of race, gender, or socioeconomic status mark significant milestones. Moreover, the emphasis on STEM education and initiatives like the No Child Left Behind Act and Common Core standards highlight ongoing efforts to adapt education systems in Western societies to meet contemporary needs and challenges. While these historical insights and current challenges do not directly address the acceleration of secondary or university education, they contextualize broader discussions about educational reform.
  • The push for educational acceleration may stem from a recognition of the mismatch between traditional educational timelines and the pace of technological and societal change. The emphasis on integrating neuroscience into education policies, reversing learning losses with innovative models like the Escuela Nueva Learning Circles or Pratham's Teaching at the Right Level, and leveraging private sector contributions to education technology suggests a move towards more personalized, efficient, and effective educational approaches.
  • In conclusion, while explicit calls for accelerating the educational process in secondary schools, and for a deeper tech and experimental orientation within it, may not be the sole focus, the broader educational discourse includes significant consideration of how to make education more relevant, effective, and adaptable to the needs of a rapidly changing world. This includes addressing the quality of education, equity and access, and the integration of technology and evidence-based practices to improve learning outcomes.


