The truth about fake news: how and why did we get here?
What is the connection between human evolution, the technological revolution and fake news: how did we end up morphing the truth into something so relative?
The study of the fake news phenomenon and its impact has been on my mind, especially this past year. Below I’ll line up a few of the conclusions I’ve reached. This is the short – yes, the short – version.
For a start, this is the stage we’ve reached:
Such a brainwashing maneuver has a complex explanation – it has a great deal to do with technological advances and the new digital media business models, but also with more traditional sciences such as sociology, psychology and even anthropology – because fake news was not born in the Facebook age, but 150,000-200,000 years ago, when one of the first Homo sapiens probably came back with some very interesting hunting stories.
But let’s begin with today’s context.
I. The cool factor, which is ordinary business in digital & tech: “disruption”
Drop in at pretty much any gig around the world – and by gig I mean digital & tech conference – and you’ll stumble across the most used and abused word of them all, thrown around everywhere and grating on our nerves: “disruption”.
The “evergreen” deck slide that backs up the statement usually looks like this:
Where is the connection with fake news?
Of course, Google and Facebook stand at the forefront of the democratization of news and content, reshaping the way we consume it online. Just as Airbnb disrupted the hospitality industry without owning a single hotel, and Uber managed to stir up the taxi industry without owning a single car, Google and Facebook altered two fundamental elements:
· The way we find and consume news, without the middlemen – newsrooms, editors and journalists.
· The way advertising finds its ideal target, making direct negotiation between the two parties (brand and media owner) completely irrelevant.
Automation is key to efficiency, which is why Google and Facebook post astounding figures of $1.4-1.6 million in revenue per employee. One of the reasons for this achievement – besides their admirable capacity for innovation – is that their human workforce is smaller than their automated one (be it algorithms or robots).
In other words, as far as content distribution is concerned – including the advertising revenue model (the distribution of advertising content) – the “useless” and expensive professional is either made completely redundant or radically diminished in numbers.
In plain English: access to the audience has been democratized and, to a considerable extent, robotized.
Which is why the average Joe with a bit of wit and charisma and a Facebook or YouTube account can become famous overnight and, if he cares enough, can claim to be a “journalist” – without the hassle of going by the book and obeying the unwritten professional standards of the job. Similarly, anyone can buy advertising on the global platforms, without futile conversations about the quality and truthfulness of the messages, the accuracy of the target group or price negotiations with salespeople.
Anyone can command attention, generate traffic and even make money out of it. The moral filter is vague; all that matters is the amount of traffic and, implicitly, the quantity of data gathered – data that allows companies to find their audiences, again, in an automated/programmatic manner.
(At the end of the day, buying pickles, washing powder or medicine is not rocket science. On the contrary, less responsible companies embrace a less educated consumer, who asks no questions and raises no issues before spending a buck. The same logic applies in politics – wink, wink.)
Is it efficient? Definitely. Is it “healthy” in the long run?
The replacement of the rational human filter with still-immature algorithms generates, on a daily basis, increasingly consequential problems: equivocal “news” reaches the public unrefined, just as advertising reaches micro-audiences that were previously much harder to coagulate.
Both Google and Facebook diminished the credibility of established news brands and enabled dubious sources and perspectives to reach mass audiences.
The algorithms designed to bring together the strawberry-jam-oat-filled-muffin lovers of the world are also gathering, via the same infrastructure and mechanisms, people who share other kinds of passions or interests – leading to online extremism, grooming and other troubling behaviors.
Up until now (off the record), both tech giants have largely turned a blind eye to the issue rather than batting away the criticism with actual action. We’ve been served some ridiculous self-pats on the back and excuses, such as:
Therefore, the first conclusion is purely technical: leaving aside the conspiracy theories claiming it was all intentional, I believe the tech giants were never able to correctly anticipate the negative impact of automation, whose main purpose was, obviously, to be highly efficient.
A relevant example in the « innovation for innovation’s sake » series is VoCo, the Adobe experiment dubbed “Photoshop for audio”: an algorithm which, after ingesting approximately 20 minutes of recorded voice data, can edit speech by storing phonemes and replicating them according to the cadence and tones of that particular voice. Manipulating voice means you can put any word in anybody’s mouth.
Here’s a demo:
The algorithm was built mainly for the gaming industry, as well as for media – voiceovers in documentaries, news programs or commercials – but it goes without saying that in the wrong hands it could incessantly feed the fake news producers and mass-manipulation entities (if they don’t already have it).
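To make the phoneme-stitching idea more concrete, here is a deliberately tiny toy sketch – my own illustration, since Adobe has never published VoCo’s internals. The core trick: cut a speaker’s recordings into phoneme-sized snippets, then concatenate those snippets into words the speaker never actually said.

```python
# Toy concatenative synthesis: stitch stored phoneme snippets into new "speech".
# The inventory below is hypothetical; a real system would extract it from
# ~20 minutes of aligned recordings of the target voice.
inventory = {
    "HH": [0.01, 0.02, 0.01],        # each phoneme -> a short list of audio samples
    "EH": [0.10, 0.12, 0.11, 0.09],
    "L":  [0.05, 0.04],
    "OW": [0.08, 0.09, 0.07],
}

def synthesize(phonemes):
    """Concatenate stored snippets to 'speak' a phoneme sequence."""
    audio = []
    for p in phonemes:
        if p not in inventory:
            raise KeyError(f"phoneme {p!r} not in the speaker's inventory")
        audio.extend(inventory[p])
    return audio

# "hello" as a phoneme sequence the speaker may never have uttered as a word:
fake_utterance = synthesize(["HH", "EH", "L", "OW"])
print(len(fake_utterance))  # -> 12 samples stitched together
```

A real system would, of course, also smooth the joins and model pitch and cadence – which is presumably where those 20 minutes of training data come in.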
The borderline between fiction and reality becomes completely blurred by technology.
For example, you can have Obama say whatever you want:
How is this possible and why is it happening? It’s not just me saying this…
Engineers and programmers focused exclusively on solving a technical challenge – programming/automation enthusiasts who are nonetheless oblivious to the complexity of human nature and the mechanisms governing it – fail to correctly anticipate the many ways their work will be used in bad faith.
Or they simply become enchanted with an idea and cannot resist implementing it. Just to prove it can be done.
This is what led to the obsession with “disruption” outside the hospitality, taxi or retail industries. And it didn’t “disrupt” only “old school” journalism, where an entire newsroom applied a moral compass before the print edition went out. It also triggered a much more alarming phenomenon:
The public’s perception of the actual truth, as a concept, is changing.
News Feed – the name given to a chaotic thread of posts, the vast majority of which have nothing to do with the meaning of the word “news” – changed our entire understanding of what a piece of news is (and, subliminally, of what truth means).
The impact is even more concerning in countries like mine (I’m based in Eastern Europe, in Romania), where civic education and the media itself are frail, and where the tradition of news consumption was still in its shy beginnings when the fake news tsunami hit us.
It makes such a difference to come from, let’s say, the US (still a democracy), where – for instance – The New York Times goes back to 1851 and has been published continuously, with a reasonably balanced editorial stance.
By comparison, the information culture in Romania is very raw: the quality press only timidly managed to consolidate itself in the early 2000s, before being captured by personal and/or political interests when the global financial crisis hit. And then the final blow came from Facebook & Co.
When Trump voters in a country like the US state things such as “If Jesus Christ gets down off the cross and told me Trump is with Russia, I would tell him, ‘Hold on a second. I need to check with the President’”, we could hardly ask for more...
This is where we stand from the social, technical and business points of view. (Yup, fake news is an industry; it has become a complete ecosystem, with multiple players, fake ad systems and so on – I will come back to this topic. Moreover, Facebook and Google don’t seem to be making a concerted effort to crack down on the dissemination of fake news and its side effects, thereby unwillingly contributing to the propagation and prosperity of the industry. Their advertising and automated monetization tools are, much too often, far too lenient with fake news producers.)
Good. If you’re still with me, let’s get to the next level: the anthropological and psychological one.
II. We love the story more than the truth: reality is, quite frequently, boring and completely uninteresting.
Let’s commence the second part of this article with the thoughts of three bright men: Harari, Twain and Asch. A historian, a novelist and a psychologist who probably would never have expected to end up together in the same article (at most, they could make the intro to a really good nerdy joke).
One central idea in Yuval Harari’s Sapiens (a reading recommendation) is that the essential element contributing to the evolution of our species is our fascination with myth and fiction, as well as the ability to create and believe in stories (even when the evidence stands in strong contradiction with the imagined narrative).
“The truly unique trait of Sapiens is our ability to create and believe fiction.
All other animals use their communication system to describe reality. We use our communication system to create new realities.”
“Humans have an amazing capacity to believe in contradictory things. For example, to believe in an omnipotent and benevolent God, but somehow excuse Him from all the suffering in the world.”
Harari considers that precisely this ability allowed Homo sapiens to slowly prevail over the other humanoid species and conquer the planet: believing in myths, fictions, ideals and common interests enables extensive collaboration, moving individuals who don’t know each other to come together for the same purpose.
Meanwhile, Mark Twain is believed to be the author of a rather bittersweet observation: that the truth sometimes gets in the way of a good story. In all honesty, it’s not far from the truth (sic): the truth is sometimes dull, which makes us “improve” it, generating conspiracies or at least dramatizing what actually happened.
It’s been in our human nature since caveman times, when, after the hunt, people would gather around the fire for slightly “enhanced” stories. We’re natural-born storytellers. We are imaginative and love fiction. It’s in our DNA.
Last but not least, years of studies and experiments led Solomon Asch, the renowned psychologist, to formulate the conformity paradigm, which sums up his experiments in the following statement:
“A person’s opinions and actions are influenced by the actions and opinions of the majority of a group”.
(A very picturesque character recently embraced by local advertising – the archetypal mountain shepherd – would have called this, without any scientific backup, the “herd instinct”. But let’s move on.)
Therefore, as humans, we love fiction, we want to believe in stories more than in the truth and we are easily swayed by the opinions of others. This is the perfect context for the new industry to flourish!
As in so many other cases, technology has amplified and coordinated, on a global scale, a human trait that already existed.
However, to what extent is this dissemination of disinformation able to generate strong trends of online public opinion?
Here’s an example:
Social Chain – a social media agency and network run mostly by people in their 30s – presented, at the 2017 edition of iCEE.fest, a case study that really struck a chord. (Watch the full presentation on video or in the app – if you have a ticket for 2018 or attended the festival in 2017.)
Very briefly: to stir up interest around the SoccerEx event, the team invented a soccer player, Rex Secco. When the story broke one Sunday afternoon that Arsenal (of all teams) had signed the unheard-of 16-year-old for £34 million, the hoax generated a craze on social media (eventually reaching the main publishers as well).
Graphics and rankings appeared:
The topic soon trended:
And then the inevitable occurred. The know-it-alls appeared, supposedly well versed in the topic and eager to pass on their learnings to the rest:
What’s really scary? This comment was tweeted 33 minutes after the “breaking news”!
This is how fast perceptions can be triggered online (which is also the reason why most PR crises get hugely amplified before the companies involved – often far too late – take any action).
To generate a perception via social media, there are just a few elements to take into account: an idea, a context, a story and deceitful details that are (or seem) credible. And, with the right strategy (and some budget), there are certainly enough people who can be motivated to generate « the truth ».
To quote Ivanka Trump:
“Perception is more important than reality. If someone perceives something to be true, it is more important than if it is in fact true.”
Very much her father’s daughter, she made this statement not somewhere it might be challenged, but in a book she wrote herself. Can you really hold it against her?
III. Humans or robots: who’s amplifying the fake news effect?
I think it’s immensely helpful that we are all clear on one thing so far – neither robots/algorithms nor humans are perfect yet. For this final part, it’s important to settle one last dilemma: what should concern us more – the human, or the machine the human created?
In the run-up to election day in the US, the total reach of blunt fake news – such as “the Pope is supporting Trump for President” – completely outran that of the legitimate news sources.
The graphic below sums it up and is more than self-explanatory (for more, check here).
Our instant reflex would be to blame the “flawed” social media algorithms. However, the reality doesn’t make us humans the knights in shining armour either: initially, the robots make relatively correct decisions, but humans amplify the nonsense so much that, sometimes, the robots become stupid as well.
In this respect, an extensive study by MIT (the Massachusetts Institute of Technology) was made public a few days ago.
Researchers Soroush Vosoughi, Deb Roy and Sinan Aral studied a data set of 126,000 rumored stories – fake, as well as partially fake and partially true – shared more than 4.5 million times by approximately 3 million people on Twitter between 2006 and 2017.
Here are a few conclusions on the devastating effect of the parallel reality, created and animated mainly by humans, via social media/ technology:
· Fake political news is the most widespread, circulates fastest and has a “contagion index” above that of any other popular category (urban legends, business, terrorism and war, science and technology, entertainment and natural disasters).
· Fake news reaches more people than real facts: cascades of false tweets and retweets reach 100,000 people, whereas the truth barely maxes out at 1,000; “manufactured” sensationalism and novelty are infinitely more interesting than what’s happening in our real lives.
· The algorithms diffuse information in a balanced way – obviously without separating the real from the fake – but it’s humans’ rapid and enthusiastic reaction to fiction that generates the viral effect, which eventually gets amplified by the robots (correctly interpreting the growing interest in a topic).
· Cascades of tweets and retweets diffusing falsehoods reach a depth of 19 levels (“unbroken retweet chains”) from the initial poster, whereas the truth barely gets to 6-7.
· False political stories travel faster and farther, reaching the critical mass of 20,000 users (a sort of “tipping point” that can unleash the viral effect) three times more quickly than real news.
· Fake news is 70% more likely to be shared than the truth.
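For clarity on what “depth” means in the bullets above: it is the longest unbroken retweet chain leading back to the original post. A minimal sketch – my own toy illustration, not the researchers’ code – using a hypothetical five-tweet cascade:

```python
# Hypothetical cascade: tweet id -> id of the tweet it retweeted
# (None marks the original post).
retweet_of = {
    "t0": None,   # the original (false) story
    "t1": "t0",
    "t2": "t0",
    "t3": "t1",
    "t4": "t3",   # chain t0 -> t1 -> t3 -> t4: three retweet hops
}

def depth(tweet_id):
    """Number of retweet hops from this tweet back to the original."""
    d = 0
    while retweet_of[tweet_id] is not None:
        tweet_id = retweet_of[tweet_id]
        d += 1
    return d

def cascade_depth():
    """Depth of the whole cascade = the longest chain among all tweets."""
    return max(depth(t) for t in retweet_of)

print(cascade_depth())  # -> 3 for this toy cascade
```

In the MIT data, cascades of true news typically stopped around depth 6-7, while falsehoods reached unbroken chains of 19 hops.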
The study – which is much more extensive and can be downloaded here – offers further statistics and explains the subtle mechanisms that make people share this kind of information: it all comes down to the desire to be « cool », to feign knowledge of something others don’t know or don’t understand (and evidently, it’s more likely you’ll “discover” something that doesn’t exist, right?), to show off and pose as superior. Other valid explanations, here.
Ironically, artificial intelligence only amplifies and exposes, brilliantly and to the fullest, natural human stupidity.
Then the “smart guys of the internet” make the most of it – whether the relatively harmless ones, who just turn an honest penny (so be it) by exploiting the global platforms’ tolerance on content quality through their automated advertising tools (like the Google Display Network or Facebook Audience Network), or the more dangerous ones, involved in larger, more organized operations aimed at manipulating public opinion, with a major social and political agenda.
In other words, in an unprecedented global context (over 4 billion of the world’s 7.5 billion people now have internet access), there are tools that exploit a weakness which is, paradoxically, a unique trait of our species: the passion for myth, fiction and storytelling. For both financial and propaganda purposes.
For all the reasons mentioned so far – and other factors I will cover separately – it’s clear that the confrontation between the truth (as insipid and dreary as it may be) and technology will not end soon.
The stakes are high and the winner is uncertain. Especially since the global digital giants, who bred and fed the monster suffocating our reason, don’t seem in a hurry (or are not yet able) to kill their own offspring.
#ToBeContinued #Maybe ;)
[Author: Dragos Stanca]