Fake News, False News & Steve Bannon’s Mindfuck Machine: What are the Social Giants Doing to Protect the Truth?

Recently, Twitter has taken a big stride towards cracking down on the spread of fake news and propaganda. But are big brothers Facebook and Google going to follow suit?

In this age of post-truth politics, sensationalist journalism and clickbait headlines, an MIT study has found that, online, fake news spreads much faster than real news – especially on quick and dirty sharing platforms like Twitter.

According to study co-author Professor Sinan Aral: “We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude.”

Fake news has become an epidemic over the last decade or so, with social media playing an enormous role in the spread and amplification of misinformation. The 2016 American presidential election in particular has become a focal point for the serious real-world effect of fake news, with allegations of widespread manipulation of Facebook users by the Russians – not to mention the Cambridge Analytica scandal.

To set the seriousness of these concerns in context: according to the Pew Research Center, 44% of Americans get their news from Facebook, while an analysis by BuzzFeed found that 38% of posts shared from three large right-wing politics pages on Facebook included ‘false or misleading information’.

From #WhiteMonopolyCapital to the spread of misinformation about shootings in America, fake news has spread like a late-summer Table Mountain wildfire on Twitter, largely thanks to a loophole that allowed a single user to control a cluster of sock puppet accounts and release the same tweet en masse, flooding the Twittersphere and giving the false impression of a widely supported idea.

Especially in the case of Bell Pottinger’s #WhiteMonopolyCapital campaign, this loophole allowed ideas and misinformation to spread much more quickly than they would normally, giving the appearance that a multitude of users were touting the same views, when in actuality, it was only a handful.

Before we look at whether the social giants are seriously failing in their duty to put a stop to online misinformation on their platforms, an important distinction must be made between ‘fake news’ and ‘false news’.

Fake News

Fake news is defined by Wikipedia as ‘deliberate misinformation or hoaxes… written and published with the intent to mislead in order to damage an agency, entity, or person, and/or gain financially or politically, often using sensationalist, dishonest, or outright fabricated headlines to increase readership, online sharing, and Internet click revenue.’ More colourfully, PolitiFact defines it as ‘Made-up stuff, masterfully manipulated to look like credible journalistic reports that are easily spread online to large audiences willing to believe the fictions and spread the word.’

Some examples of fake news include headlines such as ‘Muslim Migrants Attack a Catholic Church During Mass in France’ (accompanied by a video) and ‘British Police Find Putin’s Passport at Scene of Salisbury Poison Attack’.

False News

False news, on the other hand, is much the same but without the malicious intent. It usually originates in misinformed social media posts by ordinary people that are then seized on and spread.

One example is a tweet by Eric Tucker, who, despite having only 40 Twitter followers, was responsible for one particularly viral piece of false news: a post that was shared at least 16,000 times on Twitter and more than 350,000 times on Facebook.

According to the New York Times, ‘Mr. Tucker had taken photos of a large group of buses he saw near downtown Austin earlier in the day because he thought it was unusual, saw reports of protests against Mr. Trump in the city and decided the two were connected. He posted three of the images with the declaration: Anti-Trump protestors in Austin today are not as organic as they seem. Here are the busses they came in. #fakeprotests #trump2016 #austin’

What the Platforms are Doing About It

Up until now, social media’s big players have taken very little action to stop the spread of fake news. Facebook CEO Mark Zuckerberg went on record as saying, ‘Personally, I think the idea that fake news – of which it’s a small amount of content – influenced the election is a pretty crazy idea… Of all the content on Facebook, more than 99 percent of what people see is authentic.’ Twitter’s head of public policy for the United Kingdom Nick Pickles said that they ‘are not the arbiters of truth.’

This seems to be changing as public anger and mistrust grows in light of Twitter scandals such as the Bell Pottinger #WhiteMonopolyCapital phenomenon and Facebook’s Cambridge Analytica fall-out.

In October 2017, Twitter, Facebook, and Google executives appeared on Capitol Hill to acknowledge their role in enabling the spread of misinformation, while in April 2018 Mark Zuckerberg gave evidence about the use of Facebook user data by the election consultancy Cambridge Analytica.

Since then, Twitter has tightened its policies on automation and has suspended thousands of suspected bot accounts. Facebook has also ended an experiment with its News Feed that ran in six countries and was shown to be magnifying fake news.

All three platforms, in partnership with the Trust Project, have also announced that they will display ‘trust indicators’ on their platforms that will provide extra context about a news site for readers interested in knowing more about a publication before reading a story. These indicators, which are already being used by publications like The Washington Post, The Economist and The Globe and Mail, will also provide information about whether a piece of content is news or advertising.

As of March 23rd 2018, Twitter no longer allows users to post to multiple accounts simultaneously. According to the Twitter development blog, this update was all about ‘keeping Twitter safe and free from spam…one of the most common spam violations we see is the use of multiple accounts and the Twitter developer platform to attempt to artificially amplify or inflate the prominence of certain Tweets.’

While this is a big step forward in the fight against the type of fake news that Bell Pottinger perpetrated in South Africa, there is no doubt that fake news, and false news, will be a problem for a long time to come.

Google has pledged to invest $300 million over the next three years in the Google News Initiative, which will highlight accurate journalism and fight misinformation, particularly during breaking news events, help news sites continue to grow as businesses, and create new tools to help journalists do their jobs.

YouTube has also updated its harassment policy to include hoax videos, while Facebook has launched the Facebook Journalism Project, hiring third-party fact checkers to verify news, and has once again changed its newsfeed algorithm, focussing more on shared stories and less on messages from brands, in an effort to create “meaningful interactions” and move away from clickbait.

This change has had a direct effect on company pages and advertisers. Because organic reach for company pages has once again been decreased, advertisers are having to spend more on promoted posts to reach their carefully cultivated follower base. This is without doubt a significant contributing factor to Facebook’s record $40 billion in ad revenue in 2017 alone, as it grew its share of the online advertising pie to just under 20% – see graph below.

Though these are major steps forward, the fight to end fake news is far from over and it remains to be seen how effective these measures will be in counteracting the spread of fake and false news.

With mounting evidence of social media’s power to sway enough people to change the course of American – and therefore world – politics, the way individuals and governments see these immensely powerful media giants has changed forever.


Steve Bannon’s ‘psychological warfare mindfuck tool’

The age of assumed innocence is long over. Facebook is no longer the friendly place where families share photographs and old acquaintances reconnect around the world.

We can expect to see regulatory measures – either self-imposed or set in place by public watchdogs – to restrain the abuse of what Cambridge Analytica wunderkind and whistleblower, Canadian Christopher Wylie, called ‘Steve Bannon’s psychological warfare mindfuck tool … a full-service propaganda machine.’

Quite another question, however, is whether advertisers – understandably and increasingly willing to pay Facebook’s fees, and so fuel its enormous profitability, in exchange for reaching minutely targeted markets on an unprecedented scale – will at some point withdraw their support for the medium, whether on ethical grounds or under pressure from supporters of their brands.

Or is that a bit like expecting a heroin addict to boycott her dealer because she doesn’t like his politics or the ethical status of his supply chain?

Infographic courtesy of Statista.com

Ke Poyurs