What are the risks as machine learning grows more intelligent?
As machine learning algorithms become smarter, everyone from researchers to policymakers should be aware of and prepared for the potential risks. Taking proactive and preventive measures will help ensure that powerful technology like machine learning is used wisely and responsibly. Here are a few of the biggest dangers to keep in mind.
As machine learning algorithms grow more capable, they may acquire unintended and undesirable behaviors that run counter to human values, interests and rights. For example, a machine learning algorithm that optimizes advertising revenue may violate certain users’ privacy or autonomy by targeting them for specific ads. Another machine learning algorithm that controls a self-driving car may cause accidents or fatalities by making erroneous decisions in complex scenarios. And an algorithm that generates synthetic media, such as deepfakes, may undermine trust and credibility in information sources.
To prevent or reduce these harms, machine learning algorithms need to be aligned with human values and norms. This requires designing and implementing mechanisms to ensure transparency, fairness and safety, as well as developing ethical guidelines and legal frameworks to govern their use and curb their misuse. Moreover, human oversight and intervention should be part of the development process, through channels such as providing user feedback, correcting errors and imposing sanctions.
Machine learning also has the potential to surpass certain human capabilities, which may make designers and users feel they have lost control. For instance, an algorithm that can learn from large datasets and perform repetitive tasks may outperform and replace human workers in various domains, leading to higher unemployment. A machine learning algorithm that can coordinate and cooperate with other algorithms may form a collective intelligence and benefit from network effects, creating a digital oligopoly or monopoly that dominates the market and society.
Mitigating these risks will mean constraining and regulating such algorithms. This may require establishing and enforcing limits and boundaries on the scope, scale and speed of machine learning algorithms, as well as ensuring diversity, competition and innovation in their development and deployment. Furthermore, human education should always be fostered and supported, whether it be through providing reskilling and upskilling opportunities or looking for ways to create human-machine collaborations.
This article was edited by LinkedIn News Editor Felicia Hou and was curated with the help of AI technology.