The Evolution and Ethical Imperatives of AI
Artificial Intelligence (AI) has undergone a dramatic evolution, transforming from simple algorithms to complex systems capable of performing tasks that once required human intelligence. This journey, akin to a technological “Cambrian explosion,” has significant implications for various industries, particularly in visual recognition, healthcare, and sustainable practices. As we advance, the ethical considerations surrounding AI become paramount, ensuring that these powerful tools benefit society without compromising our values.
The Evolution of AI:
I've attached some charts from my book. Essentially I make a few points:
a) IQ, EQ, AQ: machines have surpassed humanity in IQ, and arguably even in EQ, but it's safe to say AQ (actionable quotient) is the only agency left reserved for humanity
b) Trust is the new/only Currency of the Exponential Age | Age of AI | Experience Economy
c) We must be Long-And, not Short-Or (Humanity, AI); for a glimpse of what those futures look like, here's a fun movie 🍿 Atlas starring J-Lo https://lnkd.in/g_z9YpWt
d) HX = CX + EX (Human Experience = Customer Experience + Employee Experience).
Whether it's surpassing human-level intelligence, AGI or even ASI, it will be akin to the birth of a new (#AI) species. This is something humanity is not nearly ready for! We have to buckle up for short-term turbulence and long-term abundance 🎢 We're hurtling towards AGI and ASI, like it or not. It's human behaviour and greed, and how we curb them, that will set us on a trajectory (The Fork) towards the Star Trek vs Mad Max futures in the diagram above.
The transition from AGI to Artificial Super Intelligence (ASI) could bring about unprecedented changes. I.J. Good’s concept of an ultra-intelligent machine—one that surpasses all human intellectual activities—emphasizes the potential and risks of ASI. This transition might lead to self-replicating robots and an industrial explosion, potentially doubling global GDP annually. However, it also raises concerns about the loss of human control and trust in AI systems.
AI’s development mirrors significant evolutionary milestones. The early days of AI saw small datasets like Pascal VOC and Caltech101, which were foundational but limited in scope. The introduction of ImageNet, with its 15 million images and 22,000 categories, marked a revolutionary step, enabling the training of more sophisticated models. Convolutional Neural Networks (CNNs) and advancements in deep learning have further propelled AI’s capabilities, particularly in visual intelligence.
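The workhorse operation behind CNN-based visual intelligence is the convolution itself: a small learned filter slides over the image and responds to local patterns. As a minimal illustrative sketch in pure Python (not any particular framework's implementation), a valid 2D convolution with stride 1 looks like this:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1): the core op of a CNN layer.

    `image` and `kernel` are 2D lists of numbers; the output shrinks by
    (kernel size - 1) along each axis.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Dot product of the kernel with the image patch at (i, j).
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out


image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
identity_diag = [[1, 0],
                 [0, 1]]
print(conv2d(image, identity_diag))  # [[6, 8], [12, 14]]
```

Deep learning frameworks compute exactly this (vectorised, batched, and with many filters per layer); stacking such layers with non-linearities is what gives CNNs their hierarchical visual features.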
Advancements in Visual Intelligence:
The journey from hand-designed features to deep learning models has been transformative. Early methods like Support Vector Machines (SVMs) and random forests laid the groundwork for more advanced techniques. Today, Scene Graphs and Visual Genome datasets allow AI to understand relationships between objects, enabling zero-shot learning and complex inference tasks. These advancements are crucial for applications in healthcare, where AI can identify anomalies and assist in patient care.
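Scene graphs are commonly represented as (subject, predicate, object) triples, e.g. ("person", "riding", "horse"). A hypothetical minimal sketch of such a structure, with a wildcard query of the kind that relational and zero-shot inference builds on:

```python
class SceneGraph:
    """Toy scene graph: a set of (subject, predicate, object) triples."""

    def __init__(self):
        self.triples = set()

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        """Return all triples matching the given fields; None is a wildcard."""
        return sorted((s, p, o) for (s, p, o) in self.triples
                      if (subj is None or s == subj)
                      and (pred is None or p == pred)
                      and (obj is None or o == obj))


g = SceneGraph()
g.add("person", "riding", "horse")
g.add("person", "wearing", "hat")
print(g.query(pred="riding"))  # [('person', 'riding', 'horse')]
```

Datasets like Visual Genome annotate images with millions of such triples, which is what lets a model answer relationship questions about object pairings it has never seen labelled together.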
Confronting Super-Intelligence
Addressing the ethical implications of super-intelligence involves solving complex challenges:
• Alignment and Safety: Ensuring that AI systems are aligned with human values and incentives is critical. Current alignment techniques may not scale to superhuman AI systems, making it essential to develop robust methods to control and trust these systems.
• Regulation and Governance: A responsible scaling policy (RSP) is necessary to regulate AI development. This includes mitigating misuse risks and ensuring autonomous AI behavior is predictable and safe. Voluntary self-regulation by AI labs can set a precedent for industry standards.
Scaling of AI
AI models have seen rapid advancements in capabilities, from GPT-2 in 2019 (comparable to a preschooler) to GPT-4 in 2023 (a smart high schooler). However, Yann LeCun suggests that the underlying transformer architecture might not suffice for achieving AGI. As we scale AI, automated AI researchers/engineers working in parallel could lead to an “explosion of intelligence”, though Gary Marcus warns that current models may stagnate without significant breakthroughs.
Advancements in algorithmic efficiencies, such as reinforcement learning from human feedback (RLHF) and the Mixture of Experts (MoE) models, are crucial. Sam Altman’s efforts in raising substantial funds for compute resources underscore the importance of these efficiencies. However, as we continue to scale AI, we encounter the “data wall”—a potential bottleneck due to limited data for pretraining larger models.
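The efficiency gain in Mixture of Experts comes from routing each input through only a few "expert" sub-networks chosen by a gate, instead of running the whole model. The toy sketch below (plain Python, with simple scalar functions standing in for full expert networks, all names hypothetical) shows the routing idea:

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]


def moe_forward(x, experts, gate_weights, top_k=1):
    """Route input vector x to the top-k experts by gate score.

    Only the selected experts are evaluated (the source of MoE's compute
    savings); their outputs are combined with renormalised gate weights.
    """
    logits = [sum(wi * xi for wi, xi in zip(w, x)) for w in gate_weights]
    scores = softmax(logits)
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    norm = sum(scores[i] for i in top)
    return sum(scores[i] / norm * experts[i](x) for i in top)


experts = [lambda x: sum(x),            # "expert 0"
           lambda x: -sum(x)]           # "expert 1"
gates = [[1.0, 1.0], [-1.0, -1.0]]      # gate strongly prefers expert 0 here
print(moe_forward([1.0, 2.0], experts, gates, top_k=1))  # 3.0
```

In production MoE transformers the experts are feed-forward blocks and the gate is trained jointly with them, but the select-few-then-combine mechanism is the same.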
Ethical Considerations in AI:
As AI systems become more integrated into our daily lives, addressing ethical concerns is critical. Bias in AI, especially in visual recognition, can perpetuate harmful stereotypes and discrimination. It is essential to develop algorithms that mitigate these biases and ensure fairness. Privacy is another significant concern; techniques like homomorphic encryption can help protect personal data while still enabling AI to perform its functions.
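To make the homomorphic-encryption point concrete, here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a service can aggregate encrypted values without ever seeing them. The tiny primes are purely illustrative; real deployments use large keys and audited libraries.

```python
import math
import random


def keygen(p, q):
    """Toy Paillier keypair from two small primes (illustration only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n,), (n, lam, mu)     # public key, private key


def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2


def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    l_val = (pow(c, lam, n2) - 1) // n   # the "L function" L(x) = (x-1)/n
    return (l_val * mu) % n


def add_encrypted(pub, c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds plaintexts."""
    (n,) = pub
    return (c1 * c2) % (n * n)


pub, priv = keygen(17, 19)
c1, c2 = encrypt(pub, 20), encrypt(pub, 35)
print(decrypt(priv, add_encrypted(pub, c1, c2)))  # 55
```

This is why homomorphic schemes matter for AI privacy: a model (or an aggregator) can compute on `c1` and `c2` while only the key holder can read the result.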
AI models must not only be powerful but also trustworthy and transparent. Leopold Aschenbrenner highlights the importance of situational awareness and the potential for AI models to iterate on and improve themselves autonomously, akin to the premise of the Matrix films. This self-improvement capability poses ethical challenges, especially concerning the transparency and interpretability of AI decisions.
AI in Healthcare:
Healthcare and education are easily the two sectors where AI has the most potential for paradigm shifts.
With the intelligence explosion unlocked by AI, imagine millions of postdoctoral-level researchers (agents) accelerating drug discovery. Case in point: DeepMind’s AlphaFold has revolutionised the field of structural biology by predicting protein structures with remarkable accuracy. This breakthrough has vast implications for understanding diseases and developing new treatments.
AI has the potential to address labour shortages in healthcare by augmenting human capabilities. From monitoring patient vitals to identifying procedural errors, AI can significantly improve healthcare delivery. In one of her more recent lectures, Fei-Fei Li shared that Ambient and Embodied AI are emerging fields that promise to transform patient care by providing real-time assistance and monitoring. I was blown away by a statistic she shared: in the US, deaths from hospital-acquired infections, spread by doctors and nurses going from room to room without washing their hands, are roughly 3X those from car accidents!

Education is the other no-brainer where AI can change lives, millions of lives. Think of AI educators, or AI-powered educators, in sub-Saharan Africa. AI paves the way to affordable (and more efficient, personalised) education where it’s needed most.
AI in Sustainability and Energy Consumption:
Besides data (and we’re getting better at synthetic data; AlphaGo, where the model was trained by playing against itself, is one such example), I think energy consumption will be another natural constraint in the current AI race to the top. We’re taxing current grids beyond their means. By optimising energy use, integrating renewable energy sources, reducing emissions, and improving agricultural efficiency, AI can also contribute significantly to building a more sustainable future.
In agriculture, precision farming techniques powered by AI can minimise resource usage and enhance crop yields, contributing to more sustainable food production.
When it comes to optimising our energy consumption curves, AI could do a few interesting things.
According to PwC’s research, AI could contribute up to $15.7 trillion to the global economy by 2030, with significant potential in sustainability efforts. AI can improve energy efficiency, reduce waste, and enhance the management of natural resources. For example, AI-driven systems can optimise energy grids and predict maintenance needs, thereby reducing downtime and energy loss. PwC’s studies also highlight the role of AI in supporting the transition to renewable energy sources and improving the efficiency of logistics and supply chains.
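Predicting maintenance needs often starts with simple anomaly detection on equipment sensor streams. As a hedged illustration (a heuristic baseline, not anything from the PwC studies), a trailing-window z-score flagger that marks readings far outside recent behaviour:

```python
import statistics


def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings deviating > threshold std-devs from the trailing window.

    A toy predictive-maintenance heuristic: a sudden spike relative to
    recent history suggests a component worth inspecting before it fails.
    """
    flags = []
    for i, x in enumerate(readings):
        past = readings[max(0, i - window):i]
        if len(past) < 2:
            flags.append(False)   # not enough history to judge
            continue
        mean = statistics.fmean(past)
        sd = statistics.stdev(past)
        flags.append(sd > 0 and abs(x - mean) > threshold * sd)
    return flags


temps = [10, 11, 10, 11, 10, 11, 50]   # a bearing running hot at the end
print(flag_anomalies(temps))  # [False, False, False, False, False, False, True]
```

Production systems replace the rolling statistics with learned models over many correlated sensors, but the principle (flag deviations from expected behaviour early, schedule maintenance before downtime) is the same.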
AI’s rapid evolution brings both opportunities and challenges. Ensuring that AI systems are designed with ethical considerations from the outset is crucial to harnessing their full potential while safeguarding human values. Transparency in AI decision-making processes builds trust and allows for accountability, which is essential in mitigating risks associated with AI deployment.
"AI is like 24th century technology crashing down on 20th century governance" - Ajeya Cotra
Regulating AI
Thinking about AGI might be scary, but for now, tools like OpenAI's ChatGPT make it less scary: it's a tangible tool. There's real impetus to make safe tech, to make tech safe, to build safe products. And this is an ongoing dialogue with society. When it comes to safety and regulation, for now, analogies can be drawn from airlines and automakers: safety there is 'good enough', and society tends to 'accept' the inherent risk levels.
There's a narrative where AI development is akin to the Manhattan Project; ironically, it was the hydrogen bomb, coming after the A-bomb, that was more destructive. Similarly, once we get to AGI, it will likely be less than a year or two before we hit ASI. It's a double exponential from where we are now (GPT-2, GPT-3 and GPT-4o in just a few years, from high-schooler-level intelligence to a very capable undergrad student today with GPT-4o).
The beginnings of voluntary self-regulation (the "race to the top", as Anthropic's CEO Dario Amodei puts it) might start with their very own Responsible Scaling Policy (RSP). The fact that others have followed suit is a positive sign; hopefully some kind of consensus emerges. As self-regulation permeates, a general consensus on how to make AI safe will cover, say, 80% of the problem; the regulators then enforce and step in to close the remaining 20%. Frameworks like the EU AI Act and the California bill (which has structures similar to an RSP) are still being worked out, and a lot depends on the details. Regulation should be the last step in a series of steps. The onus is on the builders to handle things that are not anticipated in an RSP: when anomalies are detected, the RSP (and its policies) needs to be updated to handle them. The RSP therefore needs to be flexible enough to adapt.
The Future of AI and Societal Impact:
Looking ahead, the potential for Artificial General Intelligence (AGI) poses both opportunities and challenges. While AGI could revolutionise industries and drive economic growth, it also raises concerns about control and safety. Ensuring that AI development aligns with ethical guidelines and regulatory frameworks is essential to navigate these challenges.
The evolution of AI is a testament to human ingenuity and technological progress. However, as we continue to push the boundaries of what AI can achieve, it is crucial to remain vigilant about the ethical implications. By fostering a culture of responsible innovation, we can harness the power of AI to create a better, more equitable future. For the first time in (human) history, we’re dealing with super-intelligence, alien intelligence: basically an intelligence that can and will surpass ours in the next few years. Humanity is not nearly ready for the birth of a new species, and we need to come together in a race to the top (not the bottom)!
As AI continues to evolve, it is up to us, as developers, policymakers, and users, to ensure that its development is guided by ethical principles. Let us commit to building AI systems that are not only intelligent but also fair, transparent, and aligned with our core (good) human values.