The oversight board took on its first five cases in December 2020. Only one of the posts under review came from the US; altogether, the posts came from four different continents, each of which could view the statements made in completely different ways. Because of this, the policies Facebook puts in place need to be concise, and they need to work for whichever community its moderation tools are focused on. “Facebook’s ‘independent review’ activity must be consistent across international borders,” Jim Isaak, a former chair of the Institute of Electrical and Electronics Engineers and a 30-year veteran of the technology industry, wrote to us via email. “But what is ‘hate speech’ in the US might be defined as patriotic in other autocratic societies—increasing the complexity of what is done.”
Lines in the Sand
This need for consistency and more concise rules is already coming into play. Of the five cases Facebook’s oversight board took on in December, the group decided to overturn four, with two of them clearly showing the need for better moderation.

In one of the overturned cases, the board ruled in favor of a woman whose Instagram post about breast cancer was removed automatically for breaking the site’s adult nudity and sexual activity policy. While Facebook had already restored the photograph, the board objected to it being removed in the first place. The board even recommended that Facebook put an appeal system in place, allowing users to see when and why a post has been taken down, and even suggesting ways to speak with a human being to seek a resolution.

The board found that, while the woman had shared a post featuring uncovered and visible female nipples, the photograph had not broken Instagram’s Community Guidelines. The Adult Nudity and Sexual Activity standard Facebook upholds in its community standards allows nudity when the user is seeking to raise awareness for a medical reason or another cause.

Another post, shared from Myanmar, included language about Muslims that the board said might be considered offensive, but that did not rise to the level of hate speech that would justify removing it or treating it as against the rules. This is where things start to get especially tricky.
Which Way is Up?
“Facebook operates internationally,” Isaak told Lifewire via email. “Each jurisdiction has its own rules, and Facebook may be held accountable under those in other countries.”

Facebook has to keep in mind the rules of every territory it operates in when setting up new policies. By leaving its policies unclear, Facebook leaves room for errors that could force the oversight board to overturn more cases in the future. With hate speech and misinformation spreading so widely, especially on social media like Facebook and Twitter, it’s important for these companies to offer clear guidelines that can then be used to moderate the community.

Of course, there are always other options to help mitigate these kinds of problems. In fact, one of the cases the board was originally slated to oversee in December was removed from the docket after the user deleted the post. User-generated moderation is something we’ve already seen succeed on websites like Wikipedia, and Twitter itself recently stepped up by releasing Birdwatch, a community-powered moderation system meant to help stop the spread of misinformation. These methods have their own issues, though, which is why establishing a standard baseline for community expectations will be key to Facebook offering better moderation of its apps and websites in the future.