COVID-19 has taken the world by storm. As the first pandemic our generation has lived through, it has brought much confusion and misinformation.
Since talk of a COVID-19 vaccine began, numerous myths and false claims have surrounded it. With these claims circulating through real people’s accounts on social media, it can be difficult to tell fact from fiction.
Here’s what social media platforms can do to tackle the issue.
Social media is an extension of our lives, so information shared online can spread like wildfire. When COVID-related misinformation is circulated, it creates uncertainty, skepticism and distrust, which can lead to people rejecting proven public health measures.
This becomes easier when the public is not well informed about COVID-19 and vaccines, and lacks trust in authoritative sources such as the government, scientists, and healthcare professionals.
In June 2020, BBC News reported that social media companies were not doing enough to curb the spread of false news relating to COVID-19 and the COVID-19 vaccine.
Facebook has since released a statement claiming to take action against this by sharing and promoting authoritative information about COVID-19 vaccines, as well as combating misinformation by removing false claims on the topic.
In a report released in March 2021, the Center for Countering Digital Hate found that 65 percent of the anti-vaccine content circulated on Facebook and Twitter more than 812,000 times between February 1 and March 16, 2021, came from just 12 accounts.
These include prominent public figures such as Children’s Health Defense Chairman Robert F. Kennedy Jr. and the entrepreneur Joseph Mercola.
Social media companies have employed various initiatives to combat the spread of COVID-19 misinformation. That’s why you’ll see a lot of pandemic-related information as you open your social media apps or scroll through your feed.
The home section of the YouTube app, for instance, features a post on COVID-19 from the World Health Organization. When you click on Learn More, it takes you to a resource center which includes myths, advice and Q&As.
YouTube also removes videos that violate its COVID-19 policy.
When you search the COVID-19 hashtag or topic on Twitter, at the top you’ll find a resource portal and options to find out more about COVID-19 and vaccine-related information in your country.
You might spot a warning under posts with misleading COVID-19 information, too. Any tweet from a user promoting 5G conspiracy theories carries a blue exclamation mark with a message from Twitter about getting the facts on COVID-19 and a link to a story debunking the claim.
Facebook has banned more than a billion fake accounts, taken down 12 million posts that spread misinformation about COVID-19 vaccines, and says it has hired fact-checkers around the world.
In addition to removing repeat offenders, social media platforms can take the following steps toward decreasing the spread of misinformation online.
1. Tap Into Celebrities and Religious and Community Leaders
Many people distrust figures of authority, such as the government. Social media companies can partner with celebrities and religious and community leaders to post videos presenting factual and myth-busting information online.
They can also share their personal experiences around getting vaccinated.
2. Establish Shared Online Standards Across Social Media Platforms
Social media platforms have algorithms to detect fake news. However, they can take it a step further by establishing a shared online standard of conduct regarding the treatment of fake news.
3. Establish a Clear Threshold for Enforcement Action
A low threshold, such as two strikes, would allow for moderate enforcement prior to removal, such as restricting a page’s ability to go live or post video content without moderated review.
This approach preserves the user’s right to free speech while subjecting the information they upload to moderation.
4. Display Corrective Posts to Users Exposed to Disinformation
Social media platforms can show users who have been exposed to content in violation of the COVID-19 policy corrective posts from experts and trusted sources.
5. Add Warning Screens When Users Click Links to Misinformation Sites
Companies should consider adding a warning screen in front of third-party websites or untrustworthy sources housing COVID-19-related misinformation.
6. Ban Private and Secret Anti-Vaccine Facebook Groups
Ban private groups that traffic primarily in vaccine disinformation, and prevent groups that require a Facebook disclaimer from operating as private or secret groups. Anti-vaxxers rely on the privacy of these groups to spread dangerous anti-vaccine misinformation.
As the COVID-19 pandemic has highlighted, the role played by social media in the spread of misinformation can have serious social and public health consequences. Mass vaccination remains the most likely successful strategy to achieve long-term control of the pandemic.
To improve COVID-19 awareness and vaccine uptake, social media companies have a key role to play in ensuring that accurate information is presented to users on their platforms, while tackling misinformation.