A rose is a rose, then is AI an AI?
DALL-E3, A rose is a rose, then is AI an AI? (Duc Haba)

"A rose is a rose by any other name would smell as sweet" is a quote from William Shakespeare's play, Romeo and Juliet. In this quote, Juliet conveys that you should not judge Romeo based on his family name or social status, as these labels do not define a person's worth. In modern terms, a person's name, label, or title does not determine their true character or value.

In that poetic sense, "AI" (artificial intelligence) is our modern-day "rose." AI is a broad term encompassing all of the intelligent machines and technologies transforming our world, businesses, social media, and personal lives for the better.

During a keynote presentation at the Google DevFest event, Sundar Pichai, Google's CEO, said "AI" more than 40 times. So, are all AIs the same?

To an expert, AI has many classifications, algorithms, and nuances. Similarly, for a barista, "coffee" is not just a cup of coffee. There are Espresso, Doppio, Ristretto, Macchiato, Cortado, Cappuccino, Latte, and Frappe, to name a few.

Fun Fact: Did you know you can't order a traditional American coffee drink, "a good old cup of Joe," in France or Italy? Instead, ask for an "Americano" (or a "long black" in Australia and New Zealand): a shot of espresso diluted with hot water and served in a large cup. :-)

So, do you want to be an AI aficionado and use terms like Machine Learning (ML), rule-based systems, Linear Regression, Artificial Neural Network (ANN), Deep Learning, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Natural Language Processing (NLP), Large Language Model (LLM), Generative AI (GenAI), and Artificial General Intelligence (AGI)? They are all AI.

The definition and classification of AI were taken from a section in the third lesson of the "AI Solution Architect" course by ELVTR and Duc Haba. I will provide more details in a later section.

Are you ready to become an AI aficionado?

We will kickstart with the welcome message.

Welcome


Welcome, new friends and fellow readers, to the latest article in the Demystify AI series. This article will unveil the mystery of AI children, labels, and misused names, much like the use of "the rose" or "coffee" in the section above.

Joking aside, there is no universally accepted definition of artificial intelligence, meaning it is not wrong to claim that your app, system, or widget is AI-powered, regardless of whether the technology is intelligent or not. However, specific sub-branches of AI, such as machine learning, deep learning, artificial neural networks, and large language models, have precise formal definitions.

By early 2024, writers and speakers commonly used "AI" as a shortcut for Generative AI (GenAI), a type of large language model (LLM) built on the Transformer algorithm, like GPT-4. That is acceptable for the masses, but not for aficionados like us. It will not do. I would say: "Using GenAI is an overkill approach for image classification; I use a CNN instead," or, to use the barista analogy, "I would like an inverted macchiato with a touch of honey."

Fun fact: I created the title image using GenAI, DALL-E version 3.0, with minimal prompt engineering to the input text of "Create an image for A rose is a rose, then is AI an AI?"

My goal for this article is to make you an AI aficionado in less than 10 minutes.

This article is all about AI, but in particular, we will discuss and define the following:

  • Artificial intelligence (AI)
  • Machine Learning (ML)
  • Artificial Neural Networks (ANN)
  • Large Language Model (LLM)
  • Artificial General Intelligence (AGI)
  • The conclusion

We start from the beginning. What is AI?

Artificial intelligence (AI)


AI is a machine that can perform tasks requiring human intelligence, from basic decision-making to complex problem-solving and learning. By that definition, the idea of AI entered our collective consciousness long before the computer age.

In the 1840s, Ada Lovelace and her contemporaries laid the foundation for computer algorithms, moving them from myth to reality. A century later, in 1943, McCulloch and Pitts proposed the first mathematical model of an artificial neuron, and in the 1950s, Alan Turing introduced his "Turing Test" and the Dartmouth Conference coined the official term "Artificial Intelligence." Thousands of computer scientists have since contributed to the development of AI and brought the concept of artificial neural networks (ANN) to life.

AI has become an integral part of our lives today, driving everything from forecasting social media trends to medical diagnosis. While the creation of new intelligence raises ethical questions for today's and future applications, we cannot deny that AI's potential to keep transforming the world is profound. Although the goal of Artificial General Intelligence (AGI), general human-like intelligence, remains out of reach, AI's journey has progressed further than most people imagined.

As we dive into the heart of the AI world, Figure 1.1 guides and helps us understand the terminology so that you can become an AI aficionado.

Figure 1.1, AI Classification (Duc Haba)

The first focus lens is ML. Let's peer through it.

Machine Learning (ML)


ML (the blue circle in Figure 1.1) is a subset of AI (the green circle in Figure 1.1). ML refers to the ability of computers to learn from data and improve their performance over time without being explicitly programmed. The algorithms will identify patterns, correlations, and other relationships within the data and then use that information to make predictions or decisions.

Therein lies the first magic of AI. Humans do not explicitly write the step-by-step code; the ML algorithm learns how to derive the optimal solution on its own.

Here are some examples.

  • A simple rule-based checker program: AI? Yes. ML? No.
  • A complex SQL query that yields insights and predictions: AI? Yes. ML? No.
  • A complex decision-tree logic to diagnose your car: AI? Yes. ML? No.

When a programmer writes code with specific instructions that mimic human thinking, the program is AI, as the above examples show. However, it is not ML, because the machine does not learn on its own how to solve the problem.

If the above explanation still sounds subjective, consider this simpler rule of thumb: if a program does not employ one of the ten commonly recognized ML algorithms, it is not classified as ML. The ten ML algorithms are as follows.

  1. Apriori Algorithm (Unsupervised Learning only)
  2. Deep Learning, also known as Artificial Neural Networks (ANN)
  3. Decision Trees (Supervised Learning only)
  4. K-Means Clustering Algorithm (Unsupervised Learning only)
  5. K-Nearest Neighbors (KNN) (Supervised Learning only)
  6. Linear Regression
  7. Logistic Regression
  8. Naive Bayes Classifier Algorithm (Supervised Learning only)
  9. Random Forests
  10. Support Vector Machine (SVM) Algorithm (Supervised Learning only)
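
To see what "learning from data" means in practice, here is a minimal, self-contained sketch of number 6 on the list, linear regression, in plain Python with made-up data. The program is never told the rule y = 2x + 1; it recovers the slope and intercept from the examples alone:

```python
# Ordinary least squares on 1-D data: learn a line from examples,
# instead of a human hard-coding the rule.
def fit_line(xs, ys):
    """Learn slope and intercept from (x, y) examples via least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data generated by the hidden rule y = 2x + 1; the code never sees the rule.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # -> 2.0 1.0
```

Trivial as it looks, this is the whole ML pattern in miniature: data goes in, a model comes out, and the model can then predict y for an x it has never seen.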

The three main categories of ML algorithms are supervised, unsupervised, and reinforcement learning. In supervised learning, the algorithm is trained on a labeled dataset, where the correct answers are provided alongside the input data. The algorithm then uses this information to predict or classify new, unseen data.
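
As a concrete sketch of supervised learning, here is a tiny k-Nearest Neighbors classifier in plain Python. The data and labels are made up for illustration; the "correct answers" of supervised learning are the labels attached to each training point:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest labeled points."""
    # train: list of ((x, y), label) pairs -- the labeled dataset
    by_dist = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2
                                          + (p[0][1] - query[1]) ** 2)
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Two labeled clusters of points (illustrative data).
train = [((1, 1), "cat"), ((1, 2), "cat"), ((2, 1), "cat"),
         ((8, 8), "dog"), ((8, 9), "dog"), ((9, 8), "dog")]
print(knn_predict(train, (2, 2)))  # -> cat
print(knn_predict(train, (7, 8)))  # -> dog
```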

On the other hand, unsupervised learning involves the algorithm learning from an unlabeled dataset without predefined output labels. The algorithm must identify patterns and relationships independently, which can be helpful for tasks like clustering and anomaly detection.
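
For contrast, here is a minimal unsupervised sketch, again with made-up data: k-means clustering groups unlabeled points without ever seeing a label:

```python
def kmeans(points, k=2, steps=10):
    """Group unlabeled 1-D points into k clusters by iteratively moving centroids."""
    centroids = points[:k]  # naive initialization: the first k points
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # move each centroid to the mean of its cluster (keep it if the cluster is empty)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# No labels anywhere: the algorithm discovers the two groups on its own.
data = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
print(kmeans(data))  # -> [1.0, 9.0]
```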

Reinforcement learning is a type of ML that involves training an algorithm to make decisions in an environment where it receives feedback as rewards or penalties for its actions. The algorithm learns to maximize its reward over time through trial and error.
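
A toy reinforcement-learning sketch, assuming a simplified multi-armed bandit rather than a full environment: an epsilon-greedy agent pulls slot-machine arms with hidden payout rates and learns which arm pays best purely from reward feedback:

```python
import random

def bandit_learn(arm_rewards, pulls=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent: pull arms, observe rewards, learn each arm's value."""
    rng = random.Random(seed)
    estimates = [0.0] * len(arm_rewards)  # learned value estimate per arm
    counts = [0] * len(arm_rewards)
    for _ in range(pulls):
        if rng.random() < epsilon:        # explore: try a random arm
            arm = rng.randrange(len(arm_rewards))
        else:                             # exploit: pull the best arm so far
            arm = max(range(len(arm_rewards)), key=lambda i: estimates[i])
        # environment feedback: reward drawn from the arm's hidden payout rate
        reward = 1.0 if rng.random() < arm_rewards[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average
    return estimates

# Three slot machines with hidden payout rates 0.2, 0.5, and 0.8.
estimates = bandit_learn([0.2, 0.5, 0.8])
print(estimates)  # the estimate for arm 2 should be the highest, near 0.8
```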

As an AI aficionado, you do not need an in-depth understanding of each algorithm's inner workings. That is the job of AI scientists. However, armed with your newfound knowledge, you can confidently and articulately discuss AI concepts, leaving AI scientists in awe of your expertise.

For example, your boss might say: "We need to increase sales by gathering real data for our marketing team. I want to use AI (meaning GenAI, like GPT-4) in the shopping mall."

You would reply: "That's a great idea, boss. Using generative AI with LLM, such as GPT-4 or Gemini, might be too costly. However, since we have labeled product photos, we can utilize supervised learning with ML to identify who is wearing our product in the mall. I believe CNN from ANN, K-Nearest Neighbors, or Naive Bayes Classifier are suitable options for this task. ML will be more efficient, cost less, and we will own our AI's intellectual property (IP), unlike if we choose GPT-4."

Your AI scientist will roll their eyes, thinking it would take them months to investigate CNN, RNN, K-Nearest Neighbors, or Naive Bayes Classifier to see which are best suited for this project.

You have not yet graduated as an AI aficionado until you puncture three more inner AI bubbles. Let's move into ANN (the orange bubble).

Artificial Neural Networks (ANN)


Artificial Neural Networks (ANN), the orange bubble in Figure 1.1 and commonly called Deep Learning, are a subset of Machine Learning designed to emulate the structure and functioning of the human brain. ANN is one of the ten ML algorithms listed above. Neural networks consist of interconnected nodes, or neurons. They are proficient at identifying complex patterns and relationships in large datasets, making them particularly useful in applications such as image and speech recognition, natural language processing, and predictive analytics.
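
To show the idea at its smallest scale, here is a single artificial neuron, a classic perceptron, trained in plain Python to learn the logical AND function. This is a deliberately tiny, illustrative example; real ANNs stack thousands of such neurons in layers:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """A single artificial neuron: weighted inputs, a threshold, and weight updates."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0  # the neuron fires or not
            err = target - out
            w[0] += lr * err * x1  # nudge weights toward the correct answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function from its truth table.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # -> [0, 0, 0, 1]
```

The weight-nudging loop is the same principle, scaled down a billionfold, that trains the deep networks described next.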

Figure 1.2 represents ANN mimicking the human brain structure from an AI aficionado's point of view. She is a master of pattern recognition and prediction.

Figure 1.2, DALL-E3, represents ANN mimicking the human brain structure (Duc Haba)

The popular ANN algorithms and models are as follows:

  • Feedforward Neural Network (FFNN): The elemental type of neural network. Information flows in one direction, from input to output.
  • Multilayer Perceptron (MLP): A classic type of FFNN with multiple hidden layers. The deeper layers allow MLPs to tackle more complex problems.
  • Convolutional Neural Network (CNN): CNNs specialize in image and video processing. The key is preserving spatial relationships.
  • Recurrent Neural Network (RNN): RNNs are designed for sequential or time series data, such as stock values or the weather. They have a memory that helps them process information in context.
  • Long Short-Term Memory Network (LSTM): A more advanced type of RNN that can handle longer-term dependencies in time series data. Advances in CNNs with a shifting "window" of data segments can be more accurate in prediction than LSTMs.
  • Generative Adversarial Network (GAN): A pair of adversarial networks playing a game in which one generates fake data while the other tries to differentiate real from fake. These networks are responsible for the photorealistic images generated by AI, including deepfake photos and movies.
  • Natural Language Processing (NLP): A fascinating field within ANN that focuses on how computers understand, process, and generate human language, both written and spoken.
  • Large Language Model (LLM): Also called Generative AI (GenAI). We could refer to LLMs as Uber Large NLP, or ULNLP, but that would be too logical; in the world of AI, we love to create new terms and names, not to mention our affinity for the Greek alphabet. GPT-4, Gemini, Meta Llama, and Copilot are a few of the giants in LLM.

As an almost-graduated AI aficionado, what is your reaction to the headline: "AI took my job, and it ruined me"?

I would rewrite it to something like: "I should have used an LLM to upscale my job skills, spent more time with CNNs or LSTMs to enhance my stock portfolio, and, for fun, used an online GAN to merge my dog with an octopus and a shark to create a doarktopus."

Fish-face or not, let's move to LLM :-)

Large Language Model (LLM)


A Large Language Model (LLM), also known as Generative AI (GenAI), the red circle in Figure 1.1, or in today's common lingo, "the AI," is a subset of ANN based on the Transformer algorithm.
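
The Transformer's core operation, scaled dot-product attention, can be sketched in a few lines of plain Python. The vectors below are toy values for illustration, with no training involved; a real LLM applies this same operation over thousands of learned dimensions:

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: each query mixes the value rows, weighted by
    how well it matches each key. This is the heart of the Transformer."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        exps = [math.exp(s - max(scores)) for s in scores]  # numerically stable softmax
        weights = [e / sum(exps) for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query that matches the first key far more strongly than the second:
q = [[1.0, 0.0]]
k = [[10.0, 0.0], [0.0, 10.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
print(attention(q, k, v))  # lands very close to the first value row, [1.0, 2.0]
```

When a GPT-style model "pays attention" to earlier words in your prompt, it is running this weighted mixing, repeated across many layers and heads.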

It's essential to remember that GPT-4 is not the only LLM available to the public. Other LLMs include Google Gemini, Microsoft CoPilot, Meta Llama-2, and many others. However, it's unlikely that hundreds of such models will be available since developing an LLM from scratch costs more than $100 million.

Figure 1.3 beautifully encapsulates GenAI's essence. He lives on an iPhone today, but in the near future, he will have an android body. I would enjoy having a fun and thought-provoking conversation with him during lunch in the park, and he wouldn't steal my lunch, unlike my so-called friends and colleagues. :-)

Figure 1.3, DALL-E3, conversation with GenAI, LLM (Duc Haba)

What can I say that you have yet to hear about GenAI?

GenAI can present an ultra-realistic real-time newscast, win prestigious photography and art competitions, help finish a new Beatles song, pass the legal Bar exam, and solve protein-folding problems that scientists had tried and failed to crack for decades.

There is one last adventure into the world of AGI before you ascend to be an AI aficionado.

Artificial General Intelligence (AGI)


Artificial General Intelligence (AGI), the innermost purple bubble in Figure 1.1, refers to the hypothetical ability of an AI system to understand, learn, and apply its intelligence to solve any problem with the same competence as a human. In essence, it is the pursuit of machine consciousness. AGI remains a theoretical goal rather than a practical reality.

Will AGI become Skynet's Terminator, invade peaceful countries, and enslave humanity? Or will it be like Commander Data in Star Trek, who seeks knowledge, expands the human experience, and is known on occasion to cook a delicious Ukrainian red borscht soup?

Figure 1.4, DALL-E3, AGI and AI aficionado: (Duc Haba)

Figure 1.4 illustrates the ideal concept of AGI and an AI aficionado. DALL-E3 draws the image and surprisingly writes an elegant description for each picture. Here are a few examples of DALL-E3's prose.

"This artwork vividly captures a moment of harmony and relaxation, celebrating the seamless blend of technology and the human world." - by DALL-E3.

"This artwork skillfully blends the lines between technology and human experience, highlighted by the watercolor's soft environment and the black ink outlines that define the figures and scenery." - by DALL-E3.

Congratulations on your graduation! You are now an AI aficionado.

What is left are a few words in the conclusion.

Conclusion


I'm glad you read the article and elevated yourself to the rank of AI aficionado. Wear this badge with pride. 

In the age of AI, writers, speakers, presenters, reporters, teachers, mentors, and listeners must arm themselves with knowledge of AI, ML, ANN, CNN, LSTM, RNN, GAN, NLP, LLM, and AGI, to name a few. Figure 1.1, the Artificial Intelligence Classification diagram, can serve as a quick reminder.

We cannot sustain ourselves on a steady diet of oversimplified news, blog posts, opinion pieces, and marketing slogans that tout "AI for AI by AI with AI." 

Being a coffee aficionado comes with a host of benefits. You can order the perfect cup of coffee, relish its aroma, and join the elite club of coffee drinkers worldwide. Similarly, being an AI aficionado can be even more advantageous, as it will help you excel in your career and possibly save millions of dollars by ensuring that your company chooses the most suitable ML solutions.

Becoming an AI aficionado is just one section in the third lesson of the "AI Solution Architect" course from ELVTR starting in April. The course is perfect for your project managers, solution architects, and executives who want to learn about managing a Deep Learning and GenAI project. Let me explain.

First and foremost, we know ML and LLM are more than just a new technology—they're a business revolution. However, GenAI projects are expensive, and there are no guarantees of success. That's why this course is so valuable—it will help you understand how to plan, gather data, reduce biases, and execute successful ML projects.

For example, last week, Forbes published an article titled "Google's Gemini Headaches Spur $90 Billion Selloff," which reported that Google lost roughly $90 billion in market value after its Gemini LLM produced racially inaccurate depictions. This incident raises concerns about GenAI bias and its impact on your business, so what hope is there for us?

There is hope for addressing these ML and GenAI issues. The "AI Solution Architect" course meets twice a week for eight weeks, with 75-minute live online lessons, and provides a holistic approach that includes data acquisition, quality assurance, bias detection, and an easy-to-follow checklist.

The course will be live on Zoom, but I hope the classroom will look like Figure 1.5 once in-person attendance is available. (Team ELVTR: No pressure to set this up, yet. :-)

Figure 1.5, DALL-E3, future in-person AISA classroom: (Duc Haba)

If you are not yet convinced to sign up for the course, here is the kicker.

Jonmar is the course's co-professor. He is a GenAI and Augmented Reality (AR) thought leader with experience as a solution architect, director, manager, and start-up founder. In addition, Jonmar is a TEDx speaker, a writer for Popular Science and Rolling Stone, and a filmmaker, to name a few. In this course, Jonmar will bring clarity and maturity to the content. He will lead several workshop sessions and co-present on occasion.

Furthermore, ELVTR professionals, such as producers, instructional designers, graphic designers, videographers, editors, sales directors, and associates, lend their expertise to develop and evangelize this course. When you take the course, you benefit from a community of experts.

I hope to see you in class.

Lastly, I am looking forward to reading your feedback. As always, I apologize for any unintentional errors. The intentional errors are mine and mine alone. :-)

Have a wonderful day, and I hope you enjoy reading this article as much as I enjoyed writing it.

Please give it a “thumbs up, like, or heart.” #AI, #Machinelearning, #ML, #ELVTR, #DucHaba


<end of article>


The First Law of AIA


I, Duc Haba, a human, take 100% responsibility for this work. In other words, I lay claim to:

  • Every Word,
  • Fact,
  • Mistake,
  • Image,
  • Audio,
  • Movie,
  • Thought,
  • Invention,
  • Nuance,
  • And Innuendo.

I have written about how I use the AI Assistant (AIA) in the Demystify AI series article "Generative AI is a Collaborator, Not a Replacement" (Feb 2023). It is the basis of the First Law of AIA.


Book Announcement

Before letting you go: I authored a book in May 2023 titled Data Augmentation with Python, published by Packt Publishing. If interested, you can purchase it on Amazon and share your thoughts in an Amazon book review. It will make me happy as a clam. :-)


Figure 1.X: Data Augmentation with Python book cover

On GitHub, you can find the entire collection of Jupyter Notebooks for all nine chapters [for free]. You can customize the Notebooks to fit your specific project requirements. Additionally, you can run Python code without installing Python.


Duc Haba Demystify AI Series


<end of doc>


Maryam Maleki

Enterprise Architect at Microsoft & Author & CIO

4mo

Loved your article Duc, you're creative by nature and one of the best . With the best regards

Like
Reply
Nimesha Shingote

Delivering Next-Gen IT & Quality Engineering Solutions| Leading with Data-Driven Insights| Technical Pre Sales & Solution Architect | Avid Learner

8mo

This is a fantastic article Duc. I know its a late post but i must say its definitely contextual :). Would love to see many more similar articles. Last year i was closely working on a Q&A solution with RAG technique using Vector Database for storage but by end of 2023 Vector Database seems to be obsolete. Can you please throw some light on the same. Thank you once again for sharing the link.

Leslie Deamer

Strategic Execution|AI Practitioner|Connector. I help companies create value by executing business strategies, optimizing operations and resources, improving customer experiences, and developing and marketing solutions.

9mo

With so many AI Chatbotts, middle algorithms and LLMS, the variety of answers one can with the same prompt is astounding!

Stacey Coombs, Small Business Consultant

Real-time “Ask the Expert” consulting to small businesses, empowering them to make informed decisions and to thrive.

9mo

Fabulous article Duc! It was informative, humorous and persuasive, the best way to communicate anything. I will definitely be sharing your AI Classification diagram - simple yet powerful.

Tim Hillison

I scale startups & transform scaleups with connected growth plays across processes, systems, analytics & teams. 3x Global CMO/VP, Ex-Visa, Ex-Microsoft, Ex-Paypal. OG Marketer. Sensemaker. #gotimmarket

9mo

This is fantastic, Duc! Teaching others about AI and its many flavors is so needed today!

To view or add a comment, sign in

More articles by Duc Haba

Insights from the community

Others also viewed

Explore topics