In this case, the Oversight Board reviews Meta’s decision to remove a post by the Tigray People’s Liberation Front, combatants in the ongoing Tigray War in Ethiopia. The post warns Federal National Defense Forces soldiers to surrender or face death, and encourages them to turn their guns on Ethiopia’s president. The board asks whether “Meta should allow content that violates its Violence and Incitement Community Standard if the actions threatened, incited, or instigated are permitted under … the law of armed conflict.” The board also asks “whether [Meta’s] actions are consistent across different conflicts.”

Since February, Meta has made several exceptions to its Community Standards in response to Russia’s invasion of Ukraine, in some cases allowing content that would otherwise violate its Violence and Incitement policy. Meta recently requested a review of its special policies for the Russo-Ukrainian War, before withdrawing the request. Thus, while this case is not about Ukraine, Meta’s Ukraine policies should matter to this case. Meta’s approach to Ukrainian conflict-content, seemingly inspired by aspects of the laws of war, is a better fit for the realities of armed conflict than its Community Standards. However, Meta’s justification for this approach – national self-defense – makes it difficult to apply to other conflicts. This is perhaps illustrated by Meta’s failure to implement similar policies in Ethiopia. Relying on such a narrow justification will lead to inconsistent treatment of different armed conflicts, particularly those between state and nonstate actors. Instead of backing just causes by granting them exceptions, Meta should strive to apply standards inspired by the laws of armed conflict to the content of all combatants.

Toward Platform Laws of War

At the State of the Net conference days after Russia began its invasion of Ukraine, I discussed the need for platforms to establish their own “laws of war”, or alternative community standards tailored to the unique demands of wartime communication. Community standards designed for peacetime often produce unwanted outcomes when applied to conflict-content. In the face of this reality, Meta was quick to make exceptions to its existing policies for Ukraine. Meta’s decision to support Ukraine outright is laudable, but ultimately unsustainable. It will be more difficult to identify the “right side” in future conflicts. If Meta makes a policy of supporting just causes, it may end up turning a blind eye to ambiguous conflicts in which no just cause can be found. Instead of supporting one side over another, Meta should strive to enforce standards of just conduct in war. Social media platforms cannot hope to prevent war, or stop harm during wartime, but they may be able to curb unique wartime abuses.

Preventing the Greater Harm

Platform community standards are traditionally intended to prevent physical and emotional harm. During wartime, however, this goal becomes nonsensical. In war, combatants on both sides aspire to do harm to their enemies. Violence is justified as a way of preventing further or more lasting harms, such as subjugation by a foreign power. In light of this wartime prerogative, platform prohibitions on calling for harm or organizing harmful activities are an ill-fitting default that benefits less media-reliant combatants.

Crowdfunding platform Patreon’s removal of a fundraiser for the Ukrainian military illustrates how content moderation’s focus on harm can lead to perverse outcomes during wartime. Ukrainian NGO “Come Back Alive” uses donations to purchase protective equipment and ammunition for Ukrainian soldiers. In the early days of the war, Patreon removed Come Back Alive’s page because the platform prohibits fundraising “for anything that facilitates harmful or illegal activities.” While the extent to which the categories “harmful” and “illegal” overlap will always be debated, they move apart during wartime, when law provides for, if not encourages or mandates, doing harm. Indeed, to the extent that Patreon avoided facilitating whatever harm Ukrainians might have done with Patreon-funded ammunition, Russians may have been able to do more harm to Ukrainian combatants, civilians, and infrastructure. Although this was not the intent of Patreon’s policy, it was its most likely effect. The Patreon example shows that both platform rules and the conceptions of harm that justify them must be reconsidered in wartime.

A First Step

On March 10th, Meta took a first step toward wartime community standards. Reuters reported that Meta had relaxed its prohibitions on calls for violence and incitement when such speech was directed at the Russian military or leaders. Although an initial headline implied that the policy changes covered incitement directed at Russians in general, a quoted Meta email made clear that it only applied in the context of Russia’s invasion.

“We are issuing a spirit-of-the-policy allowance to allow T1 violent speech that would otherwise be removed under the Hate Speech policy when: (a) targeting Russian soldiers, EXCEPT prisoners of war, or (b) targeting Russians where it’s clear that the context is the Russian invasion of Ukraine (e.g., content mentions the invasion, self-defense, etc.)”1

This is a reasonable change that accommodates predictable nuances in the use of language during wartime. However, it is one-sided. It applies only to speech by Ukrainians about Russians, and not vice-versa. While the Russian government has since restricted access to Instagram in Russia, limiting the extent to which such an allowance would actually be used, this restriction was ostensibly a response to Meta’s relaxed incitement policies.

This is not to say that such a one-sided policy cannot be justified; indeed, Meta President of Global Affairs Nick Clegg grounds the policy in self-defense and, to an extent, Western public sentiment.

“… our policies are focused on protecting people’s rights to speech as an expression of self-defense in reaction to a military invasion of their country. The fact is, if we applied our standard content policies without any adjustments we would now be removing content from ordinary Ukrainians expressing their resistance and fury at the invading military forces, which would rightly be viewed as unacceptable.”2

According to Clegg, because Ukraine is on the receiving end of Russia’s invasion, Ukrainians’ violent speech should be viewed as a form of self-defense. This is a compelling justification, but it will create difficulties for Meta in the future. In this relatively clear-cut conflict, Russia’s aggression legitimizes violent Ukrainian resistance. However, in other conflicts, aggressor and victim will not be so easy to identify.

Part of the novelty of this conflict for social media platforms is that it is a war between countries rather than between state and nonstate actors. Apart from the short and less publicized Second Nagorno-Karabakh War between Azerbaijan and Armenia in 2020, social media has seen revolutions, coups, and insurgencies, but not traditional wars between states. It simply hasn’t been around long enough. (One shudders to imagine what sort of policies American social media platforms might have adopted had they existed during America’s 2003 invasion of Iraq.) It is therefore understandable that in this first clear-cut clash of states, Meta has chosen to back what it sees as the just cause.

Nevertheless, this is not a path Meta should follow far. It is not always easy to pick the right side, and in many conflicts, neither side’s cause is clearly just. Whichever side a platform chooses, taking a side will likely lead to its exclusion from the designated aggressor state’s market. Supporting one side and suppressing the other will both limit access to counter-speech within the aggressor state and limit any positive effect Meta’s community standards might have on its soldiers’ conduct. Given the differences between conflicts and the grievances that spur them, a policy of backing just causes will be all but impossible to apply consistently. Thus, in order to best govern speech in future conflicts, platforms should attempt to enforce rules of just conduct in war on both sides, regardless of the justice of their causes.

Just Conduct, Not Causes

Meta’s protection of Russian POWs, even under its relaxed Violence and Incitement policy, has drawn little attention, but it is both laudable and notable because it applies longstanding laws of war to the wartime social media use of all combatants. It makes use of widely held norms, and represents a shift toward achievable goals for wartime moderation. Platforms cannot expect their moderation to prevent harm during wartime, but they can aim to curb certain brutalities.

The approach might be expanded to prohibit speech that calls for other violations of the laws of war. The laws of war prohibit conduct long considered immoral even in wartime. Some prohibitions are applicable to wartime social media, while others are not, so platforms will have to think carefully about how to apply them. In some cases, the laws of armed conflict pertain directly to speech and can be easily applied. One such law is the prohibition on orders or threats that no quarter will be given.3 Meta’s existing protection of POWs may amount to a prohibition on “no quarter” threats. In other cases, platforms could prohibit the advocacy or celebration of conduct that violates the laws of war, such as posts that call for violence against civilians, the taking of hostages, or the mutilation of the dead.

These laws could serve as a north star for moderation during wartime or in conflict zones: Meta could, to the best of its ability, ensure that combatants’ use of its platforms does not violate or promote violations of the laws of war.

The POW policy may have already had some effect on combatants’ behavior, or at least their speech. On March 2nd, a Ukrainian Special Forces Facebook page posted a “no quarter” threat, which read, “From now on, there will be no more captive Russian artillerymen. No mercy, no ‘please don’t kill me, I surrender’ will fly,” before quickly editing the post to remove the threat.4 The next day, the page posted footage of a captured Russian rocket artilleryman with a caption saying the soldier “made a wise decision and surrendered, coming under the Geneva Convention.”5 To the extent that social media access is important to combatants, the incentives offered by platform rules matter.

Of course, some combatants might still adopt a “no quarter” policy without publicizing it on social media. However, much of the value of this tactic stems from its publication. The same goes for other violations of the laws of war intended to scare or demoralize enemy combatants. At the very least, Meta can avoid granting reach to such announcements.

Digital Age Updates

Still, not all traditional laws of war make sense for social media. The Geneva Conventions and other laws of war prohibit the mistreatment of prisoners, requiring that they be protected “against acts of violence or intimidation and against insults and public curiosity.” Public curiosity has long been taken to mean the practice of parading prisoners for public entertainment. In such situations, prisoners of war are forced to march under guard while enduring verbal and sometimes physical abuse from onlookers. However, this provision has been used to criticize platforms for allowing pictures and videos of POWs to proliferate on their services.

Twitter has taken steps to incorporate this prohibition into its community standards. In early April, the platform announced that it would ask government- or state-affiliated media accounts to remove media featuring prisoners of war.

To that end, we will now ask government or state affiliated media accounts to remove any media published that features prisoners of war (PoW) under our private information and media policy.

We will also add a warning interstitial to media published by government or state affiliated media accounts featuring PoWs, that has a compelling public interest.6

While this may seem like a simple application of longstanding laws of war to government accounts, parading prisoners is very different from posting pictures of them on social media. Crucially, physical parades may expose prisoners to violence, and they can only be viewed locally. On social media, however, images of prisoners are available everywhere, and can be used to verify that they are alive or in good health after being captured. This can bring comfort to their loved ones and act as an insurance policy against later abuse or mistreatment. Thus, it is far from clear that extending the prohibition on exposing prisoners to public curiosity from parading to digital media makes sense. There may be a “compelling public interest” in almost all media depicting PoWs. While many laws of war may make sense for social media, they must be applied thoughtfully, with careful consideration of what makes the digital world different.

Conclusion

In Ukraine, Meta and other platforms have pioneered the use of the laws of war to guide wartime content moderation. The rules they have derived from the laws of war have proven a better fit for the realities of armed conflict than peacetime community standards intended to prevent harm, full stop. However, consistency demands that these emerging rules be both codified and applied to content from both or all sides in other conflicts, including the conflict behind the Tigray People’s Liberation Front post at issue in this case.