By Tracey Dowdy
It’s nothing new for Facebook to be under scrutiny for fake news and hate speech. It’s been an issue for years and was never more evident than in the wake of the 2016 presidential election. The company has made concerted efforts to rein in misinformation, but it’s an ongoing battle.
Facebook has been open about the challenges both human reviewers and AI face in identifying and removing offensive content. While things have improved, the sheer volume of user posts makes it difficult to moderate content accurately.
One area where its efforts are glaringly deficient is the amount of COVID-19-related misinformation in languages other than English. Avaaz, a crowd-funded research group, analyzed more than 100 pieces of Facebook coronavirus misinformation on the website’s English, Spanish, Portuguese, Arabic, Italian, and French versions.
They found that:
- It can take Facebook up to 22 days to issue warning labels for coronavirus misinformation, with delays even when Facebook partners have flagged the harmful content for the platform.
- 29% of misleading content in the sample was not labeled at all on the English-language version of the website.
- The rates are worse in some other languages, with 68% of Italian-language content, 70% of Spanish-language content, and 50% of Portuguese-language content not labeled as false.
- Facebook’s Arabic-language efforts are more successful, with only 22% of the sample of misleading posts remaining unlabeled.
- Over 40% of the coronavirus-related misinformation in the sample remained on the platform even after the fact-checking organizations working alongside Facebook had debunked it and notified the platform that the content was false.
Avaaz’s research led Facebook to begin alerting users if they’d been exposed to false information. Now, according to a Facebook blog post and a report from BuzzFeed News, both Facebook and YouTube are cracking down yet again, using AI to weed out the flood of misleading content.
Facebook has been forced to rely more heavily on AI because the COVID-19 pandemic has reduced the availability of its human reviewers. It still relies on contractors, many of whom, like the rest of us, are working from home. The content review team prioritizes posts with the greatest potential for harm, including coronavirus misinformation, child safety, suicide, and anything related to self-harm.
CEO Mark Zuckerberg said, “Our effectiveness has certainly been impacted by having less human review during COVID-19. We do unfortunately expect to make more mistakes until we’re able to ramp everything back up.”
Currently, if a fact-checker flags a post as false, Facebook will drop it lower in a user’s News Feed and add a warning notice about the veracity of the content. The challenge in removing misinformation is that it’s much like dandelions on your lawn – you can pull them from one spot, but countless more are already popping up somewhere else.
Facebook uses a tool called SimSearchNet to identify reposts and copies by matching images against its database of known misinformation. The problem is compounded by users who hit the “Share” button before checking whether the source is a reputable organization.
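Facebook hasn’t published SimSearchNet’s internals, but the general fingerprint-and-compare idea behind image matching can be sketched with a much simpler stand-in technique. The example below uses a basic “average hash”: each image is reduced to a bit pattern, and near-identical reposts produce near-identical patterns. All names here (`average_hash`, `is_known_misinfo`, the toy images) are illustrative assumptions, not anything from Facebook’s system.

```python
# Illustrative sketch only -- NOT SimSearchNet, whose design is not public.
# A simple "average hash": fingerprint an image, then compare fingerprints
# by Hamming distance so minor re-encoding noise still matches.

def average_hash(pixels):
    """Fingerprint a grayscale image (2D list of 0-255 values):
    each bit records whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Count differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

def is_known_misinfo(image, database, threshold=2):
    """Match an image against hashes of known misinformation images;
    a small Hamming distance suggests a repost of a flagged image."""
    h = average_hash(image)
    return any(hamming(h, known) <= threshold for known in database)

# Toy 4x4 "images": a flagged fake, a noisy repost, and an unrelated image
fake      = [[10, 200, 10, 200]] * 4
repost    = [[12, 198, 11, 201]] * 4   # slight pixel noise from re-sharing
unrelated = [[200, 10, 200, 10]] * 4

db = [average_hash(fake)]
print(is_known_misinfo(repost, db))     # True: matches the flagged image
print(is_known_misinfo(unrelated, db))  # False: no close fingerprint
```

Production systems use learned embeddings and far more robust hashing, but the workflow is the same: flag one image once, then automatically catch the copies as they spread.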
Facebook Chief Technology Officer Mike Schroepfer admits AI will never be able to replace human curators. “These problems are fundamentally human problems about life and communication. So we want humans in control and making the final decisions, especially when the problems are nuanced.”
As Abraham Lincoln warned Americans during the Civil War, “You can’t believe everything you read on the internet.”
Tracey Dowdy is a freelance writer based just outside Washington DC. After years working for non-profits and charities, she now freelances, edits, and researches on subjects ranging from family and education to history and trends in technology. Follow Tracey on Twitter.