Facebook has long touted the efficacy of its AI-powered tools for weeding out prohibited content, but the company today conceded that its technology is simply not up to the task of complying with a recent European Court of Justice (ECJ) ruling that effectively requires it to use automated filters to detect defamatory content.
Earlier this month, the ECJ arrived at a controversial ruling in a case brought by Austrian Green politician Eva Glawischnig-Piesczek against Facebook Ireland, home to the social media giant’s EU headquarters. The original complaint dates back to 2016, when an anonymous Facebook user posted an article from an Austrian newspaper discussing Glawischnig-Piesczek and her party’s views on immigration, alongside comments involving words such as “traitor,” “corrupt,” and “fascist.”
After Facebook failed to remove the post or pass on the user’s real identity, the Austrian Green Party pursued the social media giant through local Austrian courts, which ruled that Facebook should delete the original post and every verbatim re-posting, and that the ruling should apply globally. Facebook challenged the decision at Europe’s highest court, which subsequently agreed that the company must remove such postings from its platform worldwide. The decision drew criticism over whether a single European country should be able to dictate what content people in other countries can view. But the details of the case also revealed that Facebook would likely have to use automated tools and filters to identify social media posts regarded as “identical” or “equivalent” in content, tools that may completely ignore the context in which a post is shared.
The upshot is that perfectly legal posts on Facebook’s platform could end up caught in the crossfire, simply because the company’s AI-powered algorithms aren’t smart enough to identify the context of a post or any nuances in the sentiment behind it.
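To see the gap concretely, consider a minimal sketch of the two standards. Everything below is invented for illustration, including the “banned” sentence and the similarity measure; it is in no way Facebook’s actual system. A verbatim re-post is trivial to catch, but “equivalent” content forces a filter into fuzzy matching, where a post rejecting a defamatory claim can score almost as high as a reworded repeat of it:

```python
# Purely illustrative sketch: the banned sentence, posts, and similarity
# measure are invented here and are not Facebook's actual technology.
import difflib

# Hypothetical sentence a court has ordered taken down.
BANNED = "politician x is a corrupt traitor"

def similarity(post: str) -> float:
    """Character-level similarity, standing in for an 'equivalence' filter."""
    return difflib.SequenceMatcher(None, post.lower(), BANNED).ratio()

posts = [
    "Politician X is a corrupt traitor",                   # verbatim re-post
    "Politician X is a corrupt fascist",                   # reworded attack
    "Saying politician X is a corrupt traitor is wrong!",  # rejects the claim
]

for post in posts:
    if post.lower() == BANNED:
        print(f"identical      : {post}")  # the easy, court-mandated case
    else:
        print(f"similarity {similarity(post):.2f}: {post}")
```

In this toy run, the rebuttal scores nearly as high as the reworded attack, so any similarity threshold loose enough to catch rewordings will also tend to sweep up legitimate rebuttals.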
“This judgment has major implications for online freedom of expression around the world,” noted Thomas Hughes, executive director of free expression campaign group Article 19, at the time. “Compelling social media platforms like Facebook to automatically remove posts regardless of their context will infringe our right to free speech and restrict the information we see online. The judgment does not take into account the limitations of technology when it comes to automated filters.”
Scale
Facebook has more than 2 billion users, and that sheer scale makes it hugely difficult for humans alone to monitor abusive content. This is why the company has in recent years turned to automated tools to tackle major problems such as the sharing of child abuse images and content from organized hate groups. Earlier this year, Facebook announced that its AI smarts can now identify 96.8% of prohibited content, but faced with the prospect of having to understand “identical” or “equivalent” posts, the company said the accuracy of its technology just isn’t where it needs to be.
“While our automated tools have come a long way, they are still a blunt instrument and unable to interpret the context and intent associated with a particular piece of content,” admitted Facebook VP of global policy management Monika Bickert in a blog post. “Determining a post’s message is often complicated, requiring complex assessments around intent and an understanding of how certain words are being used.”
In truth, trying to implement such a broad takedown policy is perhaps too great a challenge even for the deep-pocketed Facebook. To take a hypothetical example, two people could share a link to a news article that’s critical of a politician, with one person condemning the report and the other supporting it. One of the users might even post a comment that appears to support a defamatory claim but that most humans would easily identify as sarcasm.
“Context is critical, and automated tools wouldn’t know the difference, which is why relying on automated tools to identify identical or ‘equivalent’ content may well result in the removal of perfectly legitimate and legal speech,” Bickert added.
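Bickert’s point is easy to demonstrate. The sketch below uses a toy keyword filter, invented purely for illustration and unrelated to anything Facebook has described: a defamatory attack, a post defending the politician, and a sarcastic jab all contain the same trigger words, so a token-level check flags all three identically:

```python
# Toy keyword filter, invented for illustration; not Facebook's system.
# It demonstrates that the same words can carry opposite intent, which a
# token-level check cannot detect.

TRIGGER_WORDS = {"traitor", "corrupt", "fascist"}  # illustrative list only

def flags(post: str) -> bool:
    """Flag a post if any of its tokens matches a trigger word."""
    tokens = {token.strip(".,!?'\"").lower() for token in post.split()}
    return bool(tokens & TRIGGER_WORDS)

attack  = "Politician X is a corrupt traitor."
defence = "Calling politician X a corrupt traitor is baseless and defamatory."
sarcasm = "Oh sure, what a 'traitor', imagine discussing immigration policy."

for post in (attack, defence, sarcasm):
    print(flags(post), "->", post)  # prints True for all three posts
```

What separates legitimate speech from the material the court ordered removed is context: who is being quoted, and whether the claim is endorsed or rejected. That is precisely what such filters do not model.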
This isn’t the first time Facebook has conceded the real limitations of automation in monitoring content. In the wake of the Christchurch terrorist attack earlier this year, when the perpetrator livestreamed the murders of dozens of people, Facebook said AI alone would probably never be able to prevent people from livestreaming violent actions. The company has hired more than 15,000 human content moderators around the world to augment its automated tools and intervene in edge cases. So if Facebook is going to comply with the ECJ ruling, it may just have to hire more human moderators to help the machines figure out what’s what.
In a publicly broadcast Q&A with employees in early October, Facebook CEO and cofounder Mark Zuckerberg indicated that the company, and other organizations, would be going back to European courts to seek clarity on how to effectively vet such content.
“This is one where a lot of the details of exactly how this gets implemented are going to depend on national courts across Europe, and what they define as the same content versus roughly equivalent content,” Zuckerberg said. “This is something we and other services will be litigating and getting clarity on what this means. I know we talk about free expression as a value, and I thought this was a fairly troubling development.”