The Tyranny of Algorithms

We live today increasingly under the tyranny of algorithms. They rule over us. They shape what we say, how we interact with each other, and how we behave. They affect whether people get jobs, credit, and other essentials of life. And algorithms kill people.

Algorithms work behind the curtain, cloaked in secrecy, often unaccountable.

Algorithmic Predictions about Human Behavior

Yuki Matsumi and I wrote about the dangers of algorithmic predictions in The Prediction Society: AI and the Problems of Forecasting the Future, 2025 U. Ill. L. Rev. (2025). We argue that algorithms that attempt to forecast the future impede human autonomy in the process.

Increasingly, algorithmic predictions are used to make decisions about credit, insurance, sentencing, education, and employment. We contend that algorithmic predictions are being used “with too much confidence, and not enough accountability. Ironically, future forecasting is occurring with far too little foresight.”

We contend that algorithmic predictions “shift control over people’s future, taking it away from individuals and giving the power to entities to dictate what people’s future will be.” Algorithmic predictions do not work like a crystal ball, looking to the future. Instead, they look to the past. They analyze patterns in past data and assume that these patterns will persist into the future. Instead of predicting the future, algorithmic predictions fossilize the past. We argue: “Algorithmic predictions not only forecast the future; they also create it.”
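
To make the point about fossilizing the past concrete, here is a toy sketch in Python. The data and the "model" are entirely hypothetical, invented for illustration; no real system works exactly this way. But it shows the basic mechanic: a predictive model can only replay the patterns in its training data, projecting yesterday's outcomes onto tomorrow's people.

```python
from collections import Counter

# Hypothetical historical records: (zip_code, was_approved_for_credit).
# Invented data for illustration only.
history = [
    ("20001", False), ("20001", False), ("20001", True),
    ("20008", True), ("20008", True), ("20008", False),
]

def train(records):
    """'Learn' the past approval rate for each group."""
    totals, approvals = Counter(), Counter()
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved  # True counts as 1
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group):
    """'Forecast' the future by assuming past rates persist."""
    return model[group] >= 0.5

model = train(history)
print(predict(model, "20001"))  # False: yesterday's denials become tomorrow's
print(predict(model, "20008"))  # True: past advantage is projected forward
```

The model never looks forward; it assumes the past will persist, and when its output determines who gets credit, it helps create the very future it claims to forecast.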

Additionally, we contend: “Algorithms are not adept at handling unexpected human swerves. For an algorithm, such swerves are noise to be minimized. But swerves are what make humanity different from machines.”

Renowned computer scientist Arvind Narayanan savagely critiques algorithmic predictions in his book, AI Snake Oil. Along with Sayash Kapoor, Narayanan argues that while other forms of AI have great promise, AI predictions about human behavior are not only inaccurate but fundamentally flawed. He contends: “Accurately predicting people’s social behavior is not a solvable technology problem, and determining people’s life chances on the basis of inherently faulty predictions will always be morally problematic.”

In another thoughtful work, The Death of the Legal Subject, Katrina Geddes argues that predictive algorithms are immoral because they violate human autonomy.

Health Insurance Algorithms

The health insurance industry uses algorithms to deny people’s healthcare claims – often with devastating effects, sometimes leading to death. The industry has been accused of using denials and delays to discourage patients from pursuing claims, as only a very small percentage of people appeal denials.

A lawsuit claims that UnitedHealthcare used an algorithm with a 90% error rate to deny coverage to elderly patients for care after an acute illness or injury. The algorithm appears to systematically underestimate (by a lot) the amount of time needed for such care:

According to the Stat investigation and the lawsuit, the estimates are often draconian. For instance, on a Medicare Advantage Plan, patients who stay in a hospital for three days are typically entitled to up to 100 days of covered care in a nursing home. But with nH Predict, patients rarely stay in nursing homes for more than 14 days before receiving payment denials from UnitedHealth.

The algorithm “doesn’t account for many relevant factors in a patient’s health and recovery time, including comorbidities and things that occur during stays, like if they develop pneumonia while in the hospital or catch COVID-19 in a nursing home.”
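
To illustrate the structural problem, consider the following toy example. It is not nH Predict's actual logic, which is proprietary and secret; the diagnosis codes and day counts are hypothetical. The point is that when a model's features omit the factors that matter, its estimate cannot respond to them:

```python
# Hypothetical per-diagnosis stay estimates, fit on past claims data.
PREDICTED_DAYS = {"hip_fracture": 14, "stroke": 17}

def covered_days(diagnosis, comorbidities):
    # The comorbidities are never consulted: they are not among the
    # model's features, so the coverage estimate is identical whether or
    # not the patient also develops pneumonia or catches COVID-19.
    return PREDICTED_DAYS[diagnosis]

print(covered_days("hip_fracture", []))                         # 14
print(covered_days("hip_fracture", ["pneumonia", "COVID-19"]))  # still 14
```

A sicker patient and a healthier one hit the same payment cutoff, even though the plan would cover up to 100 days. That is what a systematic underestimate looks like in practice.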

Patients and doctors who ask to learn how the algorithm works are denied access because the algorithm is “proprietary.”

This denial of access to the algorithm is typical when it comes to algorithms. In Life, Liberty and Trade Secrets: Intellectual Property in the Criminal Justice System, and other works, Rebecca Wexler discusses how companies use trade secrets to shield their algorithms from scrutiny and accountability.

Social Media Algorithms

"No longer are people like songbirds in cages, unable to link to their external content – being forced to stay within the restrictive confines of the platform."

In social media, algorithms shape discourse. They dictate how people post. They disfavor external links, hurting journalists and undermining the media industry. And they push people to post about certain topics so their posts don’t get downgraded and buried.

Algorithms encourage “engagement,” which often means saying extreme things, leading to polarization, anger, and meanness. They shape discourse in negative ways.

Platforms throttle posts with external links because they want to imprison people in the platform – God forbid someone actually clicks to something outside the platform that it cannot monetize!
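
To see how link throttling and engagement-chasing can fall out of a single ranking function, here is a simplified sketch. The real platform algorithms are proprietary, so these features and weights are invented for illustration; the mechanic, not the numbers, is the point.

```python
# A hypothetical feed-ranking function: engagement signals boost a post,
# and an external link triggers a demotion multiplier.
def rank_score(post):
    score = (
        2.0 * post["predicted_replies"]     # "engagement" dominates
        + 1.5 * post["predicted_reshares"]
        + 1.0 * post["predicted_likes"]
    )
    if post["has_external_link"]:
        score *= 0.3  # links take users off-platform, so demote them
    return score

reporting = {"predicted_replies": 4, "predicted_reshares": 3,
             "predicted_likes": 20, "has_external_link": True}
rage_bait = {"predicted_replies": 9, "predicted_reshares": 5,
             "predicted_likes": 15, "has_external_link": False}

print(rank_score(reporting))  # 9.75: journalism with a link is buried
print(rank_score(rage_bait))  # 40.5: the inflammatory native post wins
```

Under a scoring rule like this, no one has to decree that journalism loses; the outcome is baked into the weights.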

Recently, researchers found that Elon Musk uses algorithms at X to promote his own posts and interests, as well as right-wing content.

The refreshing thing about Bluesky is that it presents posts without oppressive algorithms trying to manipulate things. People are commenting that they are starting to enjoy social media again. No longer are people like songbirds in cages, unable to link to external content, forced to stay within the restrictive confines of the platform.

An Algorithmic Accountability Act

It is time to recognize that we are oppressed by algorithms. Algorithms act as an oligarchy of tyrants, operating in the shadows with hardly any accountability. We will likely never be totally free from algorithms, but hopefully the law can impose greater accountability and limitations on their most pernicious uses. We need an Algorithmic Accountability Act that brings the use of algorithms under greater control. Algorithmic predictions should be significantly curtailed. In many cases, they don’t work well, cause significant harms, and violate human agency. They are fundamentally flawed.

There should be much greater transparency for algorithms. But accountability means far more than transparency alone. There must be significantly greater controls on algorithms used by health insurers, credit reporting agencies, employers, and other entities that use them to make decisions with material effects on people’s lives, health, and opportunities. The law should empower regulators to prevent algorithmic harms before they occur, such as those caused by health insurers’ claim-denial algorithms. And there should be liability when algorithms wrongfully harm people.

It is more challenging to regulate social media algorithms, but at the very least, there should be transparency and accountability for harm. The case of Anderson v. TikTok (3d Cir. 2024) is a great example of how accountability can be imposed. In that case, an algorithm recommended a video about self-strangulation to a 10-year-old girl, who followed the video’s instructions and accidentally killed herself. The court (correctly, in my view) rejected TikTok’s claim that it was immune under CDA Section 230 and allowed the girl’s family to sue. I examined the Anderson case in depth in an earlier post.

The key to bringing algorithms under control is to impose accountability, something that the law has mostly failed to do. Courts have often misinterpreted CDA Section 230, turning it into an unaccountability statute – a law that even encourages companies to be irresponsible. I hope more cases look to Anderson and interpret Section 230 in a more sensible way, so that the law doesn’t undermine accountability.

Another key step is for people to vote with their feet and migrate to social media sites that are less manipulative. The mass flight from X to Bluesky is a wonderful development. As I wrote on Bluesky: “It is very important that Bluesky succeed. It will be a shining example that the Internet can work with a different model — civil discourse, free from manipulative algorithms, free from exploitative and privacy-invasive practices, free from link throttling, free from caging its users in a jail.”

The time is long overdue for a reckoning with algorithms.


Professor Daniel J. Solove is a law professor at George Washington University Law School. Through his company, TeachPrivacy, he has created the largest library of computer-based privacy and data security training, with more than 150 courses. He is also the co-organizer of the Privacy + Security Forum events for privacy professionals.

Professor Solove’s Newsletter (free)

Sign up for Professor Solove’s Newsletter about his writings, whiteboards, cartoons, trainings, events, and more.

The Prediction Society: AI and the Problems of Forecasting the Future

Click here to download the paper (free)

Pre-Order Prof. Solove’s forthcoming book, ON PRIVACY AND TECHNOLOGY

Click here to pre-order the book

