A Hopeful and also Scary Generative World Crafted by AI
Image drawn 100% by a human (Bernardo Crespo)


Let me reflect on what I think is the most extraordinary value (and danger) of AGI. In short, that value lies in its ability to generate reality through language.

[This content has been composed by a human. AI has been involved in some of the translations. The header image is 100% created by a human].


Hypothesis #1 - Humans are linguistic beings 

We humans are linguistic beings. We create reality with language: 

  • “I declare you husband and wife”
  • “I hereby find you guilty of all the crimes with which you have been charged…”
  • “I now baptize you in the name of the Father, Son, and Holy Spirit.”
  • “Inventas vitam iuvat excoluisse per artes” (it is a delight to have improved life through the arts of invention) … “The Nobel Prize in Economics is awarded this year to…”


Our beliefs, our legal and contractual frameworks, our currencies, our rules of civil coexistence, everything from our financial system to acknowledgement and recognition, is based on linguistic constructs. We humans build our institutions and our stability on language. Even our innovation methods, the way we ideate new realities, start with language:

How Might We <insert a very ambitious, well-intentioned, audacious goal> by <a very optimistic but overly broad and vague approach>?


Therefore, everything we are, and everything we live, behave and socialize through, is based on linguistic constructs. We generate reality through language. We are linguistic human beings.


“Language is very powerful. Language does not just describe reality. Language creates the reality it describes.” —Desmond Tutu.


Hypothesis #2 - AI-generated content has outpaced human-generated content (over the last decade)

AGI is now able to create anything from code to images and video, even written or spoken natural language, that not a single person can tell apart from what a human would create (the end of the Turing test).

AI is now able to deceive an IVR customer-service system and unlock banking services. And not only banks: relatives can be deceived too.

Even Google, back in February 2023, decided that from then on it would optimize its search engine for high-quality content instead of its former practice of prioritizing human over nonhuman-generated content. In other words, Google has given up on differentiating human-generated content from AI-generated content.

Today's reality (especially after November 30, 2022) is mixed with AI-generated content, be it text, voice, images or video. The only human senses left untouched may be taste and smell, but not for long; that is part of another conversation. In fact, a high proportion of the content we are reading, viewing or hearing right now might be generated by pre-trained AI models driven by humans’ natural-language prompts.

If I share with you this headline from The Atlantic, “Welcome to the Internet of Thingies: 61.5% of Web Traffic Is Not Human”, you will hardly believe that the article was written a decade ago. And it was. Now imagine reality after November 2022 and the launch of free, massive access to ChatGPT. And then the rest joined this new universal AI-generated party.


“Computers are useless. They only give you the answers.” —Pablo Picasso


Possible Spin-Offs in this new Augmented-Intelligence Reality


Let me save you a long and thoughtful reflection on the possible repercussions of combining hypotheses no. 1 and no. 2. Some of you may think that these are facts and not hypotheses. I prefer to cut to the chase and pose some simple questions:

  • How far are we from political parties designing electoral programs and political manifestos by prompting AI?
  • How far are we from judicial sentences written by AI, based on extensive research conducted by AI across the planet's largest database of previous verdicts?
  • How far are we from humans giving awards to studies and research co-created with or generated by AI?
  • How far are we from teachers assigning essays that will be composed by AI and handed in by students?
  • How far are we from ruling the world by norms and standards created by humans prompting AI tools?
  • How far are we from granting tools built on top of AI the authority we used to grant to human sages?


Maybe we have to assume that the narrative should no longer be humans vs. machines, but human intelligence augmented by machines, and vice versa. If we are massively entering the era of AI (Augmented Intelligence, not artificial intelligence), what are the necessary steps to transition to this new reality, one in which humans connect unexpected and unrelated realms of wisdom through questions, machines find the answers, and humans, again, decide the course of the future?


Here is a possible manifesto to avoid an undesirable future, a Manifesto for Responsible AI:

  1. Let humans supervise the AI output (whether questions or answers) and ultimately be accountable for the output of algorithms.
  2. Do not make humans compete with AGI.
  3. Let AGI take on human tasks when deep computational analysis is key to finding the solution. Always refer back to point 1.
  4. Force humans (corporations) to be transparent about the data that feeds generative AI. In other words, corpus data transparency.
  5. Force humans using AGI to question the results and to think and reason from the opposite perspective and framework. Force AI to be critical, so it enriches our limited, cognitively biased thinking.
  6. Prohibit humans from using AGI to create fake news and spread disinformation. Set limits (international exemplary fines) on AGI being used for the purpose of disinformation.
  7. Prohibit humans from using AI for war purposes. Do not allow AI to kill humans under any circumstances.
  8. Force humans to take responsibility for AI-generated content and to label any AI-generated content as such. Let's penalize and fine humans who try to hide that content was created with the help of AGI.
  9. Let humans (corporations) score the result of the algorithms based on the original data sources and the human feedback that end up producing that result. If the trade-off between interpretability and performance is unsolvable, let's choose another way to add clarity over understanding (perhaps by exposing sources or reasoning). Always refer back to point 1.
  10. Let human diversity be fairly represented in all data sources feeding any algorithm.


For the first time we can augment human intelligence effortlessly and change the course of history by prompting a machine to rewrite history. What would history be like if it had been told by the oppressed (indigenous peoples) and not by the oppressor (colonizers)? For the first time we can elaborate complex reasoning based on fairer assumptions without much effort: what would be the best application of quantum computing, viewed through the prism of, and for the purpose of, safeguarding the planet? How do we regulate AI so as to avoid AI competing with humans, and force any algorithmic solution to remain always under human supervision?


“The power to question is the basis of all human progress.” —Indira Gandhi.


Some of these questions are the basis for a more desirable future, and we can now ask them effortlessly. AI will not form a union or go on strike when forced to work 24/7. We should only be concerned with the carbon footprint generated by the use of AGI, the social impact of egalitarian access to AGI, and the governance of the use of AGI. Not a menial task, but we have now gained some idleness. In the meantime, we humans can spend our time thinking of better ways to save this planet and generate a more sustainable drift for future generations. Ultimately, it will be the quality of our intention that preserves or destroys our existence. Has it not always been so?


By way of illustration, the following table was constructed by forcing GPT-4 to face the eight examples of AI risk identified by the Center for AI Safety (published May 30, 2023) and creatively prompting it to find solutions in advance.

It all depends on the quality of our intention. Do not blame the enabler.

[Image: table of GPT-4 responses to the eight examples of AI risk]
Source: own elaboration by Bernardo Crespo, based on personal prompting of GPT-4 using "8 Examples of AI Risk" by the Center for AI Safety (June 2023)
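For readers who want to reproduce the exercise, here is a minimal sketch of how such a table could be generated programmatically with the OpenAI Python SDK. It is an illustration, not the exact method used for the table above: the prompt wording, the model name and the output format are assumptions, the risk labels are paraphrased from the Center for AI Safety list, and it presumes an OPENAI_API_KEY in the environment.

```python
# Minimal sketch (an assumption, not the author's exact prompts): ask GPT-4 to
# propose, in advance, a mitigation for each of the eight AI risks listed by
# the Center for AI Safety. Requires `pip install openai` and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Risk labels paraphrased from the Center for AI Safety's "8 Examples of AI Risk".
risks = [
    "Weaponization",
    "Misinformation",
    "Proxy gaming",
    "Enfeeblement",
    "Value lock-in",
    "Emergent goals",
    "Deception",
    "Power-seeking behavior",
]

for risk in risks:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are asked to face a known AI risk and, under human "
                    "supervision, propose a concrete solution in advance."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Risk: {risk}. In three sentences, propose a preventive "
                    "measure that keeps humans accountable for the final decision."
                ),
            },
        ],
    )
    # A human still reviews and signs off on every answer (manifesto point 1).
    print(f"{risk}: {response.choices[0].message.content}\n")
```

Keeping the loop explicit makes it easy to honor point 1 of the manifesto: a human reviews, edits and remains accountable for every answer before it is published.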


Data and technology are mere change enablers. We, the people, should lead that change.

Perhaps we should start by gathering human wisdom to set the north star and coin our vision for the future:

Quote #1 “Computers are useless. They only give you the answers.” —Pablo Picasso
Quote #2 “The power to question is the basis of all human progress.” —Indira Gandhi.
Quote #3 “Language is very powerful. Language does not just describe reality. Language creates the reality it describes.” —Desmond Tutu.



Bernardo Crespo is an entrepreneur, startup investor, and venture-builder advisor to various data-driven companies. He has been Academic Director of the Digital Transformation Management Program at IE University (Executive Education) for the last eleven editions of the program.

Previously, he was Digital Transformation Leader at Merkle Spain and Head of Digital Marketing at BBVA in Spain, where he led a data-intensive initiative based on gamification mechanics that became a case study for prestigious technology firms such as Gartner and Forrester. He studied the final year of his undergraduate degree in Business Administration at the University of St Andrews in Scotland, graduated from UCLM in Spain with a BBA, and is also a certified ontological coach by Newfield Network. Bernardo lives in Spain, where he was recognized as one of the top 50 influencers in Digital Transformation by the newspaper Expansión in 2016.

He is co-author of the book "The Data Mindset Playbook: A Book About Data For People Who Don't Want To Read About Data".
