The exponential growth of data: AI and ethics, legal aspects, and social implications—part 4/5
This article is part of the “The Road to Sustainability™” weekly review on LinkedIn. You can receive the newsletter each Monday in your inbox, with full access to analyses, by subscribing here.
If you appreciate this weekly roundup, you can tweet it out • share it.
⚠️ You’re on the free list. For the full experience, please consider becoming a paying subscriber (enjoy the spring sale ending soon: 35% off!).
As always, thank you for allocating some of your time to read this review.
🌱 Yael
—
“The exponential growth of data” is a five-article series: our roadmap for your strategic postures and moves:
🧵 1. The exponential growth of data: an overview—part 1/5 (please read it here);
🧵 2. Build strengthened collaborations—part 2/5 (please read it here);
🧵 3. Bolster multidisciplinary research—part 3/5 (please read it here);
🧵 4. AI and ethics, legal aspects, and social implications—part 4/5 (current);
🧵 5. Develop large-scale, secure, privacy-preserving, shared infrastructures—part 5/5.
The “The Road to Sustainability™” weekly review on LinkedIn is now followed by 12,500+ subscribers and counting, including Fortune Global 500 companies from all industries and sectors, governmental and non-governmental agencies, VCs, fast-growing startups, and entrepreneurs from around the globe.
—
Table of contents
- Introduction
- The technological revolution is unstoppable
- Artificial intelligence ethics and society in a nutshell
- Visual of the week
- Endnote: does AI make strong tech companies stronger?
—
Article summary
The ethical implications and moral questions raised by the development and deployment of artificial intelligence technologies are accelerating the shift of power from labor to capital. If policies do not adjust to reflect this, a large part of our society will be lost in transition. Every technology revolution in history is a revolution precisely because of how much value it creates. We are no closer to creating a general AI, yet tremendous advances are being made in "specific" AI fields. And the road to systems designed around human needs is still long.
—
Introduction
Both my work at Nevelab Technologies and my mission at the IEEE Global Artificial Intelligence Systems (AIS) Well-being Initiative remind me every day that the socio-economic challenges now unfolding are of a magnitude most people cannot imagine. The pandemic holds critical ethical, legal, and political lessons for the climate crisis and overall human evolution, but those lessons must be taught.
In a previous post, we claimed that "The post-pandemic environment will be characterized by an increasingly complex set of pressures and new demands from various perspectives and radical uncertainty about the future." More tech giants are committing to sustainability at scale, announcing the elimination of their carbon footprints and unifying their practices, partnerships, and products around actionable missions.
Demand for constant development keeps growing in prominent industries such as finance, healthcare, real estate, energy, and climate, where artificial intelligence and machine learning have become fundamental to discerning patterns in data; the technology's impact on humans plays a vital role in free societies¹. And given artificial intelligence's rapid adoption in high-stakes domains, with vast implications for ethics, governance, the economy, and politics, we need both qualitative indicators and quantitative life-cycle data to measure its impacts.
For more insights and practical use cases, I invite you to learn more here.
—
The technological revolution is unstoppable
As I said in a previous review, “Change of magnitude is never easy, and human rights, immigration, philanthropy, and environments will be deeply challenged. We will need to implement precise standards and define new regulations backed up by new governance models, training, monitoring, engineering requirements, and compliance. We will also need to demonstrate integrity and show competence by continuously learning and growing."
A growing number of organizations are considering artificial intelligence to create scalable solutions—and leveraging data exponential growth—but they’re also scaling their reputational, regulatory, and legal risks.
If entrepreneurs, investors, and policymakers work together to address today's toughest challenges, the future looks promising. But one thing is certain: the developments we desperately need will lead us in a very different direction than the consumer internet and social media boom, which is coming to an end. Let's consider the pandemic as a pivotal point: the explosion of biomedical inventions it accelerated may well shape the future. Moreover, the open-source community's involvement during the pandemic's initial phase clearly demonstrated the benefits of developing open-source software to address global challenges, support rapid adoption, and extend the reach of network effects.
"Huge models, large companies, and massive training costs dominate the hottest area of AI today, NLP."²
—
Artificial intelligence ethics and society in a nutshell
The field of AI ethics arose primarily in response to the range of individual and societal harms resulting from the misuse, abuse, poor design, or unintended negative effects of AI systems. As researchers have consistently demonstrated, most AI ethics frameworks cannot be concretely implemented. A complex set of issues sits at the intersection of AI development and its applications. The main areas include health and well-being, education, and humanitarian crisis mitigation, along with cross-cutting themes such as data and infrastructure, law and governance, algorithms, and design.
Here are the headlines of the table provided by "The ethics of artificial intelligence: Issues and initiatives," a study led by the European Parliamentary Research Service³ that represents some of the most consequential forms that these potential harms may take:
- Bias and Discrimination
- Denial of Individual Autonomy, Recourse, and Rights
- Non-transparent, Unexplainable, or Unjustifiable Outcomes
- Invasions of Privacy
- Isolation and Disintegration of Social Connection
- Unreliable, Unsafe, or Poor-Quality Outcomes
Without a dramatic increase in developing more insightful solutions, adopting specific knowledge, and deploying corresponding AI frameworks, the world will face ever more challenges. Moreover, without consistent high-level guidance, technical teams cannot be expected to uphold ethical principles through deployment on their own.
As elaborated in the previous review, "Bolster multidisciplinary research—part 3/5" (please read it here), the crucial point is that enabling ethical design in AI-based systems requires managing them differently:
1. Develop enforceable guidelines in collaboration with institutions, cross-market industries, and universities around ethics, transparency, autonomous systems, and applicability.
2. Encourage significant multidisciplinary relationships between researchers and data scientists to co-deploy secure centers, supported by administered collaborative infrastructures, that make it possible to monitor outputs and enforce privacy and ethics principles.
"The best way to increase societal wealth is to decrease the cost of goods, from food to video games. Technology will rapidly drive that decline in many categories. Consider the example of semiconductors and Moore’s Law: for decades, chips became twice as powerful for the same price about every two years."—Moore's Law for Everything by Sam Altman⁴.
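The arithmetic behind the Moore's Law figure in the quote above can be sketched with a quick back-of-the-envelope calculation. The function and time spans below are illustrative assumptions, not figures from the article; the only input taken from the quote is the doubling of price-performance roughly every two years:

```python
# Back-of-the-envelope Moore's Law arithmetic: if compute per dollar
# doubles every `doubling_period` years, how much cheaper does a fixed
# amount of compute become over a given span?

def price_performance_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """Factor by which compute-per-dollar grows over `years`."""
    return 2.0 ** (years / doubling_period)

if __name__ == "__main__":
    for span in (2, 10, 20):
        m = price_performance_multiplier(span)
        print(f"After {span:2d} years: ~{m:,.0f}x compute per dollar")
```

Over two decades the same exponential yields roughly a thousandfold improvement, which is the kind of cost decline the quote argues technology will drive across many categories of goods.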
Gig economy companies have initiated some efforts toward flexible benefits. Still, these remain trivial compared with what employees, consumers, and regulators could achieve by collaborating to create structures that handle benefits as dynamically as jobs.
Political and legal jurisdictions will increasingly be required to determine how these principles might inform, or be supported by, legislation and regulation. Here are just a few of the many questions that can be raised:
- How can we efficiently measure the impact of machine learning and autonomous systems on society?
- What ethical and moral principles would the general public carry to their encounters with autonomous systems? How might those principles be better integrated on a technical level into these systems?
- What do effective governance and collaborative growth mechanisms look like between networks and the public?
- What type of ethical principles should be deployed to help determine right from wrong?
- How do we know that users agree to the terms on which they’re using products? How do we know they’re adequately informed? Is it easy for them to opt out or deactivate the technology if they change their minds?
We explore case studies and provide a clear framework of values, principles, purpose, and ethical conduct on Nevelab's Vault here. Please consider becoming a paying subscriber (enjoy the spring sale ending soon: 35% off!).
"The art of knowing is knowing what to ignore."—Rumi
—
Visual of the week
—
Endnote: Does AI make strong tech companies stronger?
Machine learning is probably the most important revolution of our era. Since machine learning is built on data, massive amounts of data, it is common to hear growing concern from companies, institutions, and regulators about the exponential growth of data use. This revolution will generate enormous wealth: when sufficiently strong AI “joins the workforce,” the price of many kinds of labor (which drives the prices of goods and services) will fall toward zero.
In the 1970s and early 1980s, the tech industry created a revolutionary new technology, the relational database, that gave governments and companies an unparalleled ability to monitor, interpret, and understand all of us. For the first time, deployments that had only been theoretically feasible on a small scale became technically possible on a large scale. Today, the world is changing so quickly and dramatically that an equally dramatic shift in policy will be required sooner or later to redistribute this wealth and enable more people to live the lives they want. But we need to address these shifts ethically. Then we will be able to raise people's living standards higher than ever before, which means the implementation of machine learning will become very widely distributed.
That being said, it is a generalizable technology. It is not yet generalized, not even close. We have seen tremendous advances in very specific AI applications such as face recognition and fraud detection, but the applications built on them do not generalize⁵. Both the applications currently running and the data sets behind existing architectures are highly specific to the tasks they address.
We will explore this next week in the last review of this series, "The exponential growth of data: Develop large-scale, secure, privacy-preserving, shared infrastructures."
Please give me feedback • Subscribe • Ask me a question.
—
Resources
- The End of Silicon Valley as We Know It?—by Tim O’Reilly
- The ethics of artificial intelligence: Issues and initiatives—European Parliamentary Research Service, March 2020
- State of AI Report 2020—by AI investors Nathan Benaich and Ian Hogarth.
- Moore's Law for Everything—by Sam Altman
- Face recognition and AI ethics—by Benedict Evans
—
Disclaimer
The Road to Sustainability™ is an initiative by Nevelab Technologies and is circulated for informational and educational purposes only.
Nevelab Technologies Research utilizes data and information from public, private, and internal sources, including data from Nevelab's open data access. While we consider information from external sources reliable, we do not assume responsibility for its accuracy.
The views expressed herein are solely those of Nevelab Technologies as of this report's date and are subject to change without notice. Nevelab Technologies may have a significant financial interest in one or more of the positions and securities or derivatives discussed. Those responsible for preparing this report receive compensation based upon various factors, including, among other things, the quality of their work and firm revenues.