Social media algorithms. Is it your own opinion, or have we been brainwashed?

In today's age of social media, almost everyone in the world has access to some sort of social data feed. Some abstain by choice, but the vast majority of us have access to, and regularly consume, customised social media feeds. These feeds help shape our opinions. We treat them like a news feed from a regulated source when, in fact, they are largely unregulated.

How are these feeds generated?

The feed is generated by the platform's recommendation algorithm. It is personalised to keep you reading, keep you hooked on the content being served, and keep you coming back, time and time again.

The platform monitors your clicks, how long you pause on an item while scrolling, and what you tap on when something interests you, and all of this information is fed into the algorithm to be stored and processed.
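To make that concrete, here is a minimal sketch of how those interaction signals might be collapsed into a single engagement score. The signal names, weights, and caps are my own illustrative assumptions; real platforms use far more signals and learned models rather than hand-set weights.

```python
# A minimal sketch, with hypothetical signal names and hand-set weights.
from dataclasses import dataclass

@dataclass
class EngagementEvent:
    post_id: str
    clicked: bool         # did the user tap through?
    dwell_seconds: float  # how long they paused while scrolling
    shared: bool          # did they reshare it?

def engagement_score(event: EngagementEvent) -> float:
    """Collapse raw interaction signals into one number a ranker can use."""
    score = 0.0
    score += 1.0 if event.clicked else 0.0
    score += min(event.dwell_seconds, 30.0) / 30.0  # cap dwell so one long pause can't dominate
    score += 2.0 if event.shared else 0.0           # shares treated as the strongest signal here
    return score

# Each scroll session emits events like these into the ranking pipeline.
events = [
    EngagementEvent("post_a", clicked=True, dwell_seconds=12.0, shared=False),
    EngagementEvent("post_b", clicked=False, dwell_seconds=2.0, shared=False),
]
for e in events:
    print(e.post_id, engagement_score(e))
```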

Data is also shared between platforms: a Google search or joining a Facebook group can affect your Instagram or TikTok experience.

The algorithm then generates a feed for the user. This may contain adverts tailored to products the user is likely to purchase, political content designed to elicit an emotional response (anger being the most powerful), or light-hearted, happy content designed to give the user a hit of endorphins.

The Filter Bubble

To retain the user's attention and interaction, the algorithm delivers content that confirms their political and social beliefs.

The author Eli Pariser wrote a book on this very subject, "The Filter Bubble: What the Internet Is Hiding from You".

The algorithm effectively shields the user from any data that might contradict their current beliefs. For example, if the user is anti-immigration, the algorithm will deliver content designed to elicit anger about current immigration policy in their region.
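As a rough illustration of why this happens, here is a toy ranking sketch. The stance scores and the engagement model are invented for illustration (no platform publishes its real model); the point is simply that if predicted engagement rewards agreement, challenging content never reaches the top of the feed.

```python
# A minimal sketch of why engagement-maximising ranking produces a filter bubble.
# Stance values and the engagement model are illustrative assumptions only.

user_stance = {"immigration": -0.8}  # inferred from history: strongly anti

candidate_posts = [
    {"id": "p1", "topic": "immigration", "stance": -0.9},  # agrees with the user
    {"id": "p2", "topic": "immigration", "stance": +0.7},  # challenges the user
    {"id": "p3", "topic": "immigration", "stance": -0.2},  # mildly agrees
]

def predicted_engagement(post: dict) -> float:
    # Users engage more with content that matches their existing view,
    # so alignment (same sign, similar magnitude) predicts engagement.
    return user_stance.get(post["topic"], 0.0) * post["stance"]

feed = sorted(candidate_posts, key=predicted_engagement, reverse=True)
print([p["id"] for p in feed])  # ['p1', 'p3', 'p2']: the challenging post sinks to the bottom
```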

Sometimes the content is strong enough to create a 'call to arms', where the user is incensed enough to make a placard and go protest in person.

Anger is the emotion most often elicited because it is the strongest, and it ensures the user will keep clicking and delving into the content provided instead of ignoring it and moving on.

So is it even my opinion?

The only true way to form an opinion is to take equal data from both sides of an argument and weigh them against each other. This is how most judiciaries work: a courtroom jury is presented with both sides of an argument and forms an opinion or judgment based solely on the evidence provided. That is why it is so important that no outside bias is introduced into these court cases.

With social media algorithms, there is no countering data: everything delivered to the user is designed to reinforce their current beliefs, not challenge them.

This, in turn, is creating societies of black-and-white, red-and-blue opinions, where people support Team 1 and are unwilling to listen to the arguments of Team 2, and vice versa.

More and more, it is becoming clear that users' opinions are not their own, but rather the product of the content pushed into their 'filter bubble'.

War.

Recent conflicts have become focal points for the social media algorithm: the Russia-Ukraine war and the Israel/Hamas conflict.

Globally, individuals are consuming content on these conflicts and forming one-side-or-the-other allegiances, with no middle ground or even a willingness to understand the plight of the other side. The Israel/Hamas conflict is a prime example of this, with pro-Palestine protests being organised and attended by hundreds of thousands of people globally. Such protests can pressure governments and, in turn, influence policy and potentially the outcome of the conflict.

Perhaps the algorithm should be written to avoid shaping opinion, or to avoid delivering algorithm-generated content at all, when it comes to conflicts.

When travelling recently, I noticed a different sort of media being delivered to all of my social media feeds. Usually it is content for the United Kingdom (where I reside), but during my travels through the Middle East my feeds took a definite turn, guiding my attention towards current affairs for that region in a direction polar opposite to what I see at home. This is partly my reason for writing this article, as the complete change in direction took me aback.

Sentiment Analysis

I perform a lot of social media analysis, and these polarising opinions actually help when it comes to investigations.

When users are engaged by the algorithm and incensed enough to retweet, forward, and interact with an opinion, analysts can surface that data at massive scale, because it is so prolific and the sentiment is so strong.

A typical query could look like this:

"Show me all of the users who strongly agree with the messages from the Pope"

"Show me all of the users who strongly disagree with the pope"

"Show me all of the users who strongly disagree with the pope, and who would be likely to create or attend an anti-pope demonstration"

"Show me all of the users in the above group who have violent tendencies, lone wolf tendencies, or who have been posting or interacting with images of guns/knives in the last 5 years"

Utilising data in this way would enable law enforcement to weed out lone-wolf attackers, identify protest organisers and attendees, and even gain valuable data about the size and location of organisations that are planning attacks.
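For a flavour of how queries like the ones above could be approximated, here is a minimal sketch using NLTK's off-the-shelf VADER sentiment model over a handful of invented posts. Note that this only scores the tone of each post; real stance detection (does the user actually disagree with the Pope, rather than merely sound negative?) is considerably harder.

```python
# A minimal sketch of a "strongly agree / strongly disagree" query.
# The posts and the threshold are invented for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    {"user": "alice", "text": "The Pope's message today was wise and compassionate."},
    {"user": "bob",   "text": "I completely disagree with the Pope. Terrible, dangerous nonsense."},
    {"user": "carol", "text": "The Pope gave a speech in Rome this morning."},
]

THRESHOLD = 0.5  # 'strong' sentiment cutoff; tuned per investigation

for post in posts:
    score = sia.polarity_scores(post["text"])["compound"]  # -1 (negative) .. +1 (positive)
    if score >= THRESHOLD:
        print(post["user"], "strongly agrees", round(score, 2))
    elif score <= -THRESHOLD:
        print(post["user"], "strongly disagrees", round(score, 2))
```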

Encouraging fair and even content.

I think the world was not prepared for the impact that the social media algorithm would have on political landscapes. In the past, if a group of people had an opinion about a specific event, that opinion would have been limited to their small social or local groups, shaped by which newspaper they read or which friend group they frequented.

Now, people thousands of miles away from a conflict can form opinions, effect change, and influence others.

The onus is on the social media companies to rewrite their algorithms to take in information from both sides of a conflict and deliver both sides to users, enabling them to form their own balanced opinions. The problem is that the algorithm's existence is hardly known to the public, let alone any movement to pressure the social media giants into changing it. For now, it works in their favour, teasing users back to the content and delivering targeted adverts that can be charged at a premium.
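As a toy sketch of what such a rewrite could look like: bucket candidate posts by stance and interleave them, rather than ranking purely on predicted engagement. The stance labels are assumed to exist already; producing them reliably is the hard part, and this is only one possible design.

```python
# A minimal sketch of a "both sides" feed: alternate posts from each side
# instead of ranking purely on predicted engagement. Stance labels are assumed.
from itertools import chain, zip_longest

candidates = [
    {"id": "a1", "stance": "side_a"},
    {"id": "a2", "stance": "side_a"},
    {"id": "b1", "stance": "side_b"},
    {"id": "a3", "stance": "side_a"},
    {"id": "b2", "stance": "side_b"},
]

side_a = [p for p in candidates if p["stance"] == "side_a"]
side_b = [p for p in candidates if p["stance"] == "side_b"]

# Alternate one post from each side; zip_longest keeps going when one side runs out.
balanced = [p for p in chain.from_iterable(zip_longest(side_a, side_b)) if p]
print([p["id"] for p in balanced])  # ['a1', 'b1', 'a2', 'b2', 'a3']
```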

My final point is this:

People need to be made aware of the potential biases embedded in algorithms so that they can choose to ACTIVELY seek out diverse sources of information, ensuring a more well-rounded perspective.

What do you think? Do you have your own opinion?

If you have managed to get this far into my article, then I thank you very much. I hope I have engaged you, and I hope this article has had an impact on you.

I try to remain impartial in all of my content, so please let me know if you think this was biased in any direction and I will happily edit it.

So what do you think of this article? Do you think you have your own opinion? Or do you think your opinion has been formed by the content delivered to you via your various social streams?

If you enjoyed this article, please feel free to share it to your social streams (try not to influence opinion, though!)


