Can the fake news battle ever be won?
Who even determines what is truth and lies?
Joe Biden was inaugurated as the 46th President of the United States on January 20th. In his address he said:
"Recent weeks and months have taught us a painful lesson. There is truth and there are lies. Lies told for power and for profit. And each of us has a duty and responsibility, as citizens, as Americans, and especially as leaders -- leaders who have pledged to honor our Constitution and protect our nation — to defend the truth and to defeat the lies."
Of course he's right, not just for the citizens of America but for all of us, no matter where we live in the world. But how do we fulfil our duty to defend the truth and defeat lies? Who even determines what is truth and lies?
In my previous post I covered the timeline of how fake news stories came to spread so easily and what enabled them. That leads to an obvious question: what can be done about it? In this piece I offer a viewpoint.
The tightrope walk
Let's start by making one thing clear: there is no simple solution and no panacea. The only solutions that will make an impact will be multi-faceted, complex, and, unfortunately, imperfect. That's because society is walking a tightrope between complete free speech and selective censorship. Most solutions on their own will either impede free speech or be too relaxed to have any impact.
There are many issues to consider when crafting solutions, but I'm going to focus on three of them:
- Who decides what is truth?
- Who decides what you see?
- What should actually be done to the content in question and when should it be done?
I'll aim to address each one in turn and then wrap up with a proposed solution overview.
1. Who decides what is truth?
Probably the toughest question in this debate is: who decides what is truth? Perhaps the best starting point is the principle that although news reporting can be truthful, the absolute truth is impossible to tell. There are issues of reliability (can what was reported be verified?) and there are issues of bias: people have biases, words carry bias, and the interpretation of words can be biased.
If reliability and bias can be plotted on a chart, we can get a reasonable understanding of where news sources sit. Ad Fontes Media have done just that in their Media Bias Chart, which shows how original source reporting devolves into exaggeration, distortion, and subversion. They continually rate media articles and sources and update their findings.
Social media companies, as propagators of news, need to be able to tag content for reliability and bias (let's call it an RB score). This score should be used as a marker for deciding what content is allowed, tagged, or removed.
The least onerous approach would be to score reliability and bias by news outlet (e.g. Reuters would have a high RB score and Infowars a very low one). However, this might be too blunt an instrument, and RB scores for some or all individual articles might be required.
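To make the idea concrete, here's a minimal sketch of what an RB tag might look like as a data structure. This is purely illustrative: the field names, the scales (loosely modelled on the Ad Fontes chart axes), and the fallback helper are my own assumptions, not anything a platform or ratings body actually publishes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RBScore:
    """Hypothetical reliability/bias tag for a news outlet or article."""
    reliability: float  # 0 (fabricated) .. 64 (original fact reporting)
    bias: float         # -42 (hyper-partisan left) .. +42 (hyper-partisan right)
    scope: str          # "outlet" for a blanket score, "article" for per-item

def effective_score(outlet: RBScore, article: Optional[RBScore]) -> RBScore:
    """Prefer a per-article score when one exists; fall back to the outlet's."""
    return article if article is not None else outlet
```

The fallback captures the trade-off above: outlet-level scores are cheap but blunt, so a per-article score overrides them whenever one has been assigned.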
2. Who decides what you see?
Firstly, there is the problem of 'the algorithm', that is, the programming that determines what you see in your feed. Facebook and other social media providers follow a similar playbook here: the feed is designed to show you more of what you like, and that goes for news too. The goal is to maximise views and keep you returning to the platform.
This is a major oversight when it comes to news, because news isn't supposed to be what a person likes and agrees with; it's supposed to be truth-telling about what is actually occurring. In short, the algorithm needs changing so that it shows users a diverse mix of viewpoints, some they will agree with and some they won't. This will lead to fewer likes and probably fewer views, but it's far more socially responsible.
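Here's a hedged sketch of what that change could look like in a feed ranker. Everything in it is a stand-in assumption: the candidate tuple, the engagement scores, the bias cut-offs at plus or minus 10, and the round-robin mix; no platform's real ranking code is public.

```python
import random

# Hypothetical feed candidate: (item_id, engagement_score, rb_bias).
# rb_bias is the bias axis of the RB score (-42..+42, negative = left).
Candidate = tuple[str, float, float]

def balanced_news_feed(candidates: list[Candidate], k: int = 10) -> list[str]:
    """Pick k items with a rough left/centre/right balance, instead of
    ranking on engagement alone. Illustrative only."""
    by_engagement = lambda c: -c[1]
    left   = sorted([c for c in candidates if c[2] < -10], key=by_engagement)
    centre = sorted([c for c in candidates if -10 <= c[2] <= 10], key=by_engagement)
    right  = sorted([c for c in candidates if c[2] > 10], key=by_engagement)
    feed: list[str] = []
    while len(feed) < k and (left or centre or right):
        for bucket in (left, centre, right):  # round-robin across bias buckets
            if bucket and len(feed) < k:
                feed.append(bucket.pop(0)[0])
    random.shuffle(feed)  # avoid a predictable left/centre/right rhythm
    return feed
```

Note what's deliberately missing: nothing here looks at what the user previously clicked, which is exactly the point.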
If content is inappropriate, distorted, or fake, should you be allowed to see it at all? Facebook has been battling this challenge for years. In the last few months they have implemented a permanent, independent oversight board of leading experts to determine what content should be allowed and what should not. This is well chronicled by the New Yorker in a great piece of journalism by Kate Klonick called The Supreme Court of Facebook.
News isn't supposed to be what a person likes and agrees with; it's supposed to be truth-telling about what is actually occurring
Such a system helps in several ways: in particular, it distances Facebook itself from being the arbiter of content decisions, it adds consistency to the company's approach, and it curbs ad-hoc 'executive oversight' of individual rulings.
Unfortunately, there are many more ways in which this system is imperfect. It only covers Facebook, not other social media platforms; differences in culture and geo-political circumstances make some content appropriate in one region but completely inappropriate in another (Kate Klonick's piece covers some great examples of this); the board could still be subject to bias; and it's unlikely to react fast enough to rapidly changing news stories.
So while the 'supreme court' initiative has merit, it's not a complete solution in itself. Another potential solution is to separate news from other content within social media platforms. Facebook ran an experiment along these lines back in 2018, and it ultimately failed: fake news proliferated and was more difficult to debunk. It has left me wondering, however, whether separation combined with better news tagging and bias oversight would fare better.
3. What should actually be done to the content in question and when should it be done?
What should the rules be for handling inappropriate content? What do we do with content that is 'possibly inappropriate' but not yet verified? Should repeat offenders be banned?
These tough questions are being asked constantly in public discourse at the moment. I would suggest that if items 1 and 2 above are properly addressed, the answer to this one becomes far simpler. Metadata plays an important part here. For example, if a news article is tagged as 'news' and has an assigned Reliability/Bias (RB) score, then a determination can be made. Some of the rules might look like this (a code sketch follows the list):
- If the item does not have an RB score, tag the post as 'unverified news' and show that in the feed. If the item is not verified within 12 hours, suspend it until it is verified.
- If the RB score falls below a minimum acceptable threshold, remove the item completely and advise the author (i.e. the information is verified as false).
- If the RB score is low, mark the item as 'unreliable or heavily biased' in the feed and give the article an expiry time limit. Do not allow the article to be re-shared.
- If the RB score is moderate, mark the item as 'potentially unreliable or biased', but allow it to remain.
- If the RB score is high, mark the item as 'verified'.
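As promised, here is that rule set sketched as code. For simplicity it treats the RB score as a single number on the 0-64 reliability axis from the earlier sketch, leaving bias aside; the cut-offs of 16/32/48 are pure assumptions, since the rules above only name the bands (below minimum, low, moderate, high), with the 12-hour window being the one concrete figure given.

```python
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

class Action(Enum):
    TAG_UNVERIFIED = "tag as 'unverified news'"
    SUSPEND_PENDING = "suspend until verified"
    REMOVE = "remove item and advise the author"
    TAG_UNRELIABLE = "tag 'unreliable or heavily biased', set expiry, block re-shares"
    TAG_POTENTIAL = "tag 'potentially unreliable or biased', allow to remain"
    TAG_VERIFIED = "tag 'verified'"

# Assumed thresholds on the 0-64 reliability axis; the bands are named
# in the rules above, but the cut-off values are illustrative.
REMOVE_BELOW, LOW_BELOW, MODERATE_BELOW = 16, 32, 48

def moderate(rb_score: Optional[float], posted_at: datetime, now: datetime) -> Action:
    """Map one news item to an action under the rules above."""
    if rb_score is None:
        # Unscored items are tagged, then suspended after 12 hours unverified.
        if now - posted_at > timedelta(hours=12):
            return Action.SUSPEND_PENDING
        return Action.TAG_UNVERIFIED
    if rb_score < REMOVE_BELOW:
        return Action.REMOVE
    if rb_score < LOW_BELOW:
        return Action.TAG_UNRELIABLE
    if rb_score < MODERATE_BELOW:
        return Action.TAG_POTENTIAL
    return Action.TAG_VERIFIED
```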
From this, user ratings can follow. For example, a user who constantly posts very low RB-scored items may be suspended or banned, whereas a user who constantly posts very high-scoring items may be rewarded with a status badge or icon.
Finally, should permanent user bans be allowed? I don't believe so. In the world of social media a lengthy ban (such as one year) is almost equivalent to a complete ban. If this were further supplemented by removing all of the person's followers (so that when they regain access they need to build their base up again), it has an even more potent effect.
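In the same hedged spirit, a user-level review might look like the sketch below. The rolling history, the average thresholds, and the follower-reset outcome are all illustrative assumptions layered on the ideas above.

```python
from statistics import mean

SUSPENSION_DAYS = 365        # the harshest penalty proposed above: roughly a year
LOW_AVG, BADGE_AVG = 16, 48  # assumed cut-offs on the 0-64 reliability axis

def review_user(recent_rb_scores: list[float]) -> str:
    """Map a user's recent posting history to an outcome. Illustrative only."""
    if not recent_rb_scores:
        return "no action"
    avg = mean(recent_rb_scores)
    if avg < LOW_AVG:
        # Harshest case: long suspension plus a follower reset, so the user
        # must rebuild their audience on return.
        return f"suspend for {SUSPENSION_DAYS} days and remove all followers"
    if avg > BADGE_AVG:
        return "award 'reliable poster' badge"
    return "no action"
```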
A solution blueprint to combat fake news...
As stated earlier, no single solution will do; many must work together. Here's my 10-point blueprint. It would take time to implement, but this is not an easy problem to fix:
1.) Ensure that an independent, free, centrist press is supported by governments as much as possible. If there is no reliable reporting and too much bias, everything written here is a moot point.
2.) Ensure that all social media companies tag news links as news. This means there must be a reliable way of distinguishing actual news from other content.
3.) Ensure that independent, established centres are available to evaluate news reliability and bias (similar to Ad Fontes, mentioned earlier), and that their methodologies are rigorous and accurate (through either legal or regulatory means).
4.) Mandate that all social media companies use (and possibly pay for) these independent centres to provide RB scores. This would need to be enforced through government legislation.
5.) Each news item/link must have an associated RB score or be marked as 'unverified'. Unverified or low-scoring items must have an expiry time limit, and their ability to propagate should be limited.
6.) Put additional rules in place to discourage users from promoting low RB-scored items and to reward them for promoting high-scoring ones.
7.) Ensure that the news feed algorithm uses the RB score to balance the content seen by users. There must be a healthy balance of left/right bias and some variance in reliability, and this must occur regardless of what the user views or clicks on.
8.) Use an additional independent global authority to decide on the appropriateness of content, with sub-chapters for regions or countries. This is similar to what Facebook is doing at the moment, but it should extend to all social media. It would assist with extremely divisive issues such as abortion, euthanasia, etc.
9.) Suspend users who continually promote low RB-scored items. The harshest suspension should be approximately one year, with all followers removed.
10.) Require social media companies to report to governments on the status and trends of news reliability/bias at least once per year.
A final word
Would implementing this blueprint completely solve the problem? No. I can think of a variety of scenarios where it would run into problems. It would, however, be a good starting point.
There are also many other solution-design questions to consider here, including but not limited to: government and regulation, cultural and geo-political appropriateness, language, definitions of 'media', and the use of technology such as AI/machine learning. But hey, let's start with the basics and work our way forward. As the President said, it's the only way to 'defend the truth and defeat the lies'.