The US 2020 Election: Social Media and Misinformation
Since the controversial 2016 US election, social media platforms have come under scrutiny for the far-reaching, highly optimised systems they offer users for publishing posts and advertisements.
The Trump administration has employed misinformation tactics through factually inaccurate political advertisements, which are not subject to fact-checking, unlike commercial advertisements, which are screened by third-party authenticators. Evidence of this has been seen in Donald Trump's current 2020 election advertisements on Facebook, which include doctored footage of his political opponent Joe Biden to make him appear older.
In addition, political parties and organisations internationally have utilised Facebook to manipulate the opinions of voters through inauthentic accounts and activity.
Facebook whistle-blower Sophie Zhang released a 6,600-word memo detailing evidence of politicians from the US, Brazil, Italy, Honduras and other countries creating millions of fake accounts to falsely support parties or criticise political opponents. The data scientist claims that management within the company was aware of these blatant attempts to abuse the platform but did not prioritise them; with no dedicated resources assigned, millions of these bots and fake accounts remained active.
Facebook earns billions of dollars from political advertisements and, despite widespread criticism, has failed to implement political fact-checking or a prohibition on political advertisements. Twitter, however, has taken a tougher stance: in October 2019, CEO Jack Dorsey announced a total ban on political advertisements on the platform.
The social media giant has also undertaken to monitor election-related tweets closely, aiming to remove or label those containing misinformation, with stricter enforcement against posts that falsely claim electoral success for a party or candidate.
While Facebook is yet to remove political advertisements containing false information, it has opted to attach warning labels to inaccurate posts relating to the election, such as a recent Donald Trump post criticising the security of the voting-by-mail process.
The warning label reads:
“Voting by mail has a long history of trustworthiness in the U.S. and the same is predicted this year.”
Twitter mirrored this approach by attaching a warning label, to the same effect, on a Trump tweet.
Further, the social media giants have removed posts and tweets that discourage voting, provide false or inaccurate information about the voting process, or claim that people will contract COVID-19 if they vote.
Facebook has also committed to preventing politicians from uploading new political advertisements in the week leading up to Election Day. However, existing ads will continue to run during that week, and new political advertisements can resume after Election Day. Critics have questioned the effectiveness of this strategy, arguing that misinformation from existing advertisements and unrestricted personal posts can still influence voters.
In addition to these strategies, in September 2020, Facebook and Twitter introduced informative election platforms which aim to provide centralised resources for social media users.
Facebook’s ‘Voting Information Center’ and Twitter’s ‘Election Hub’ contain non-partisan, independently verified information on how to register to vote, real-time political updates, and how to stay safe while voting during the COVID-19 pandemic. The initiative responds to statistics revealing that almost 40% of eligible voters in the US are unregistered, and to a Twitter poll indicating that over half of its users were interested in voting but needed more information on how to do so.
While it is too early to assess the effectiveness of these initiatives, it is apparent that social media companies are beginning to understand the significance of their influence in the political sphere.
By Christopher Diab