AI: Understanding Its Boundaries and Possibilities with Real-World Examples

Artificial Intelligence (AI) has been at the forefront of our minds for the last couple of years, redefining how many of us interact with technology. However, as AI becomes increasingly woven into the fabric of daily life, misunderstandings and myths proliferate.

While undoubtedly a game-changing technology, AI's potential won't be properly realised without a concrete understanding of what it can and cannot do. This piece aims to demystify AI by clarifying what it is decidedly not, providing a clearer picture of its capabilities and limitations with real-world examples:

AI ≠ Infallibility

Contrary to the notion that AI is flawless, it is prone to errors and biases. AI vision systems, for example, occasionally misidentify images. A notable case occurred in 2020 in Detroit, Michigan, where Robert Williams was wrongfully arrested for a shoplifting incident after facial recognition technology mistakenly matched him to the suspect in surveillance footage. The incident illustrates the limitations and potential biases inherent in AI systems, and the need for humans to verify AI-generated outcomes, especially in high-stakes settings such as law enforcement.

Moreover, AI systems can be deceived through deliberate data manipulation. Adversarial techniques that subtly alter input data can mislead AI algorithms, highlighting the importance of robust security measures and ongoing research to mitigate such threats, and further challenging the notion of AI as infallible.
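
To make this concrete, here is a minimal sketch of one well-known attack of this kind, the Fast Gradient Sign Method (FGSM). The `model`, `image`, and `label` names are placeholders for any PyTorch image classifier and a correctly classified input, not code from any specific system:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Compute the loss gradient with respect to the input pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss:
    # the change is imperceptible to a human but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```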

AI ≠ Human Intelligence

Despite its advanced capabilities, AI lacks the essence of human intelligence: consciousness, emotions, and self-awareness. For instance, while AI models such as GPT-3 can generate human-like text, they operate without genuine understanding or subjective experience, relying solely on data patterns and predefined rules. Though Large Language Models (LLMs) have produced some brilliant results, human intelligence is still essential for ensuring their outputs are edited, correct, and on point. AI is an ideal assistant or co-pilot, but it must be supplemented with human expertise.
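
As a small illustration of how such models operate, the sketch below generates text with the openly available GPT-2 model via the Hugging Face transformers library, standing in for larger, API-only models like GPT-3; the prompt is arbitrary:

```python
from transformers import pipeline

# Load a small open text-generation model as a stand-in for GPT-3.
generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])
# The model simply predicts the statistically likely next tokens from
# patterns in its training data; there is no understanding behind them.
```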

AI ≠ Autonomy

AI systems require human intervention for training, monitoring, and refinement. Even seemingly autonomous technologies, like self-driving cars, necessitate human involvement to navigate complex and unpredictable scenarios on the road.

Tesla's Autopilot system is a prime example of advanced driver-assistance technology that showcases the limits of AI autonomy. While it offers features like auto-steering, traffic-aware cruise control, and automatic lane changes, Tesla consistently states that its Autopilot and Full Self-Driving (FSD) capabilities require active driver supervision and are not fully autonomous. Despite this, crash data reported to the National Highway Traffic Safety Administration (NHTSA) suggests some drivers treat the system as fully autonomous, sometimes with tragic results: NHTSA has linked Autopilot to 736 crashes and 17 fatalities since 2019.

The dream of fully autonomous vehicles is still a work in progress and will continue to require human involvement in both development and real-time supervision for the foreseeable future. It’s a fascinating field that continues to evolve, but meanwhile, drivers must understand the capabilities and limitations of these systems to ensure safe operation.

AI ≠ Bias-Free

AI can inherit biases from its training data, as seen in facial recognition technologies displaying racial or gender biases. One of the most cited examples comes from the Gender Shades project led by Joy Buolamwini at the MIT Media Lab, which evaluated commercial facial recognition systems from companies including IBM, Microsoft, and Face++. The study found significant disparities in the accuracy of gender classification across skin tones and genders: the systems performed best on lighter-skinned male faces and worst on darker-skinned female faces. This research underscored the biases that arise when AI systems are trained on imbalanced and non-representative data.
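
The kind of subgroup audit Gender Shades performed can be sketched in a few lines. The records below are hypothetical, purely to show the mechanics of measuring accuracy per demographic group:

```python
from collections import defaultdict

# Hypothetical (group, predicted_label, true_label) records from a
# gender-classification system; real audits use thousands of faces.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += predicted == actual

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy")
# Large gaps between groups signal that the training data, the model,
# or both treat some faces far less reliably than others.
```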

Another example involves Amazon's AI recruitment tool, which the company scrapped after discovering it favoured male candidates over female candidates for technical roles. The AI system was trained on resumes submitted to the company over 10 years, most of which came from men, reflecting the male dominance in the tech industry. This led to the AI developing a bias against women, automatically downgrading resumes that included words like "women's," as in "women's chess club captain," or graduates of certain all-female colleges.

These examples demonstrate the tangible impact of biased AI, affecting everything from job opportunities to the fairness and accuracy of surveillance technologies. In response, the AI research and development community is prioritising diverse, inclusive, and representative datasets, alongside robust bias-mitigation strategies, to combat these issues.

AI ≠ Creative Genius

AI augments human capabilities; it doesn't replace human creativity. While AI can generate art or music, it lacks the intuition and emotional depth that fuel genuine creativity.

One prominent example is the portrait titled "Edmond de Belamy," created by an algorithm developed by Obvious, a Paris-based art collective. The artwork, generated with a Generative Adversarial Network (GAN), sold at Christie's auction house for $432,500 in 2018. While the portrait resembles traditional classical art, critics argue that it lacks the emotional depth and intentionality that human artists imbue in their work. The algorithm can replicate styles and aesthetics present in its training data, but it does not bring the intent or emotional engagement that comes with human artistry.
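
For the technically curious, the GAN setup behind such works can be sketched as two competing networks. The dimensions below are arbitrary and this is an illustrative toy, not the collective's actual code:

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28

# The generator maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# The discriminator scores how "real" an image looks.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

noise = torch.randn(1, latent_dim)
fake_image = generator(noise)
realism_score = discriminator(fake_image)
# Training pits the two against each other until fakes fool the critic;
# the generator only ever learns the statistics of its training images,
# not artistic intent.
```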

AI ≠ Clairvoyant

AI's predictive capabilities are limited by real-world complexities and unpredictability, as evidenced by the challenges in accurate stock market forecasting.

One example involves FinBrain, a company that specialises in AI-driven stock market predictions. FinBrain's algorithms showed remarkable accuracy in predicting the stock price movements of Bank of America (BAC) over a specific period in 2023, closely mirroring actual price movements. Even with that accuracy, the reliance on historical data exposes a fundamental limit: models trained on the past struggle to anticipate unforeseen market shifts or 'black swan' events.
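
The underlying limitation is easy to see in even the simplest forecasting setup. The sketch below fits a linear model to a synthetic price series; it is purely illustrative and not FinBrain's method:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# A synthetic random-walk price series stands in for real market data.
prices = np.cumsum(np.random.randn(200)) + 100
window = 5

# Lagged features: predict tomorrow's price from the previous five days.
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

model = LinearRegression().fit(X[:-1], y[:-1])
print("next-day forecast:", model.predict(X[-1:])[0])
# Any model fitted this way extrapolates patterns in its history; a shock
# with no precedent (a 'black swan') is invisible to it by construction.
```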

Conclusion

Real-world examples demonstrate AI's susceptibility to errors, biases, and dependency on human oversight, underscoring that it is neither infallible nor autonomous. Despite these challenges, AI excels as a tool augmenting human capabilities, enhancing creativity, and offering predictive insights, albeit within the constraints of data and ethical considerations. The future of AI, brimming with both promise and areas for growth, depends on our continued commitment to understanding, refining, and ethically integrating this dynamic technology into society.

From an individual or business perspective, the effective use of AI is about acknowledging its limitations as much as its capabilities, with an open but balanced mind. As AI evolves, so too must our understanding, ensuring discussions remain rooted in reality and informed by concrete examples. This approach enables us to harness AI's potential responsibly, navigating its challenges with informed caution and paving the way for a future where technology amplifies human potential without overreaching.

At EdgeMethods, our team of experts is ready to help you unlock AI's full potential responsibly and innovatively. Contact EdgeMethods today at info@edgemethods.com for tailored solutions that bridge technology and strategy.
