Manifest Observable Behavior: How to "draw the line" in content moderation

In July 2017, Patreon CEO Jack Conte made an unusual move: he published a YouTube video explaining why he had banned a specific user from the company’s crowdfunding membership platform. The video was downvoted 10,000 times on YouTube and received almost no mainstream press.

The user in question was a young woman named Lauren Southern, a Canadian YouTube personality, self-described journalist and active member of the white nationalist group Defend Europe. Lauren had just returned from a trip to Italy, where she had blocked rescue ships from saving refugees adrift in the Mediterranean.

Lauren Southern, white nationalist.

Yeah, you read that right. More on that in a second.

Being banned was a massive financial blow to Lauren, who took to YouTube to complain that Patreon had taken away her livelihood for no good reason and with zero evidence. She and her followers began pelting Patreon and its CEO with outrage on social media. HOW. DARE. HE.

This is the stuff of nightmares for tech CEOs, who would rather evaporate on the spot than step on the landmine that is internet politics today.

That is what makes Jack’s video so compelling. A tech CEO talking through his decision-making process, thoughtfully and in plain-spoken English? It isn’t something you see every day.

In his video explainer, Jack describes a concept called Manifest Observable Behavior, a clear-eyed, rational method for evaluating user violations. I had never heard of it before, and I haven’t seen it mentioned anywhere since. Did it just slip through the cracks? It is one of the most brilliant approaches I’ve come across for handling thorny content moderation issues and user violations while remaining as neutral as possible.

What is Manifest Observable Behavior?

Manifest Observable Behavior is a review method that’s entirely based on observable facts:

  • What has a camera seen?

  • What has audio recorded?

  • What credible statements has the user made?

“It doesn’t matter what your intentions, motivations, ideology, identity or beliefs are,” says Jack. “The purpose is to remove personal values and beliefs from the teams reviewing content.”

Manifest Observable Behavior doesn’t twist itself into a pretzel to appear neutral or amoral. Rather, it’s a framework that empowers Trust & Safety teams to evenly enforce the rules and values you already have on the books. In this sense, it’s better than neutral. It enables you to keep bad actors and a range of bad behavior off your platform without waiting for them to break the law.

(If you’re waiting for your users to break the law before giving them the boot, just ask Cloudflare CEO Matthew Prince how that’s going for him.)
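To make the framework concrete, here’s a minimal sketch of what a Manifest Observable Behavior review could look like in code. Every name here (`Evidence`, `review`, the clause predicate) is my own hypothetical stand-in; Patreon hasn’t published its tooling. The structural point is the one Jack makes: the review function only ever sees observable evidence, so ideology, identity and beliefs can’t affect the verdict.

```python
# Hypothetical sketch of a Manifest Observable Behavior review.
# The review consumes only observable evidence (footage, audio,
# on-record statements) -- never the user's ideology or identity.
from dataclasses import dataclass
from typing import Callable, List


@dataclass(frozen=True)
class Evidence:
    kind: str          # "video", "audio", or "statement"
    description: str   # what the camera saw / the audio recorded / was said
    credible: bool     # recorded actions outrank after-the-fact claims


def review(evidence: List[Evidence],
           violates_clause: Callable[[Evidence], bool]) -> List[Evidence]:
    """Flag an account only on credible, observable evidence.

    `violates_clause` is a predicate over a single piece of evidence.
    Nothing about the user's beliefs or presumed intent is ever passed
    in, so it cannot influence the outcome.
    """
    return [e for e in evidence if e.credible and violates_clause(e)]


# Usage, loosely encoding the kinds of clips Jack cites:
footage = [
    Evidence("video", "instructs the operator to cut off the NGO vessel", True),
    Evidence("statement", "claims she was only there to record", False),
]
harm_clause = lambda e: "cut off the ngo vessel" in e.description.lower()
print(review(footage, harm_clause))  # only the credible, recorded action matches
```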

Case Study: Lauren Southern & The Refugee Ship Stunt

"In his video, Jack breaks down how he applied Manifest Observable Behavior to evaluate Lauren’s account. Two things feature heavily: Patreon’s Acceptable Use Policy and concrete evidence. In this case, it’s all footage that Lauren had uploaded to YouTube herself.

Here’s what Jack notes:

1. Lauren directly obstructed a search & rescue ship in the Mediterranean. 

Lauren claims she was on the boat just to record the experience. “We don’t believe that,” Jack says. He cites several video and audio clips that pretty much tell the real story, including footage of Lauren:

  • Being on the boat

  • Telling the boat operator where to go to cut off the NGO vessel

  • Detailing her intention to block a rescue ship (in selfie mode)

  • Telling the boat operator that the NGO vessel will have to stop, by law, if they’re in front of it

  • Instructing the operator to get in front of the ship (“Get in front, Thomas!”)

  • Confirming that the boat has stopped

This is all objectively horrifying on a human level. But Jack doesn’t fall back on morals or ethics or Philosophy 101 talk here. Instead, he points to a section of Patreon’s Acceptable Use Policy that prohibits creators from threatening to take or actually taking action that could lead to harm or loss of life.

2. Lauren made credible plans to obstruct future rescue ships.

But that’s not all: Lauren also had plans to do it again. Jack treats this as another nail in the coffin for her Patreon account, and again he points to concrete evidence:

  • Pledged, in an interview with an online publication, to continue stopping rescue ships (“If the politicians won’t stop the boats, we’ll stop the boats…and we’ll be back with more boats.”)

  • Successfully raised funds to buy another boat

  • Made videos showing off the group’s new boat, Sea Star

With this mountain of evidence against Lauren, Jack had what he needed to move forward with a ban.

But what if she was just kidding?

A favorite tactic used by the white nationalist & neo-Nazi community is to throw up their hands and say, “But we were just kidding!”

It’s a wrench thrown into the works, designed to stall content moderators and paralyze the decision-making process. If they’re just kidding, or employing satire, are we infringing on their right to free speech? We can’t stop people from making jokes, can we?

For what it’s worth, weaponizing irony is one of the oldest tricks in the book. Jack has a plan for that too: Patreon’s policy differentiates between credible and non-credible behavior. In other words, it accounts for users being full of shit by prioritizing what the user does over what the user says.

For example, Lauren may claim that she was on the boat just to record everything, but her actions and statements to the press show that she was credibly planning to participate. Lauren may claim she was not trying to obstruct the ship, but she is on record doing exactly that.
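In code terms, you could model that precedence with a rule as simple as the hypothetical one below: a claim only survives if no credible, recorded action contradicts it. (This is my sketch, not Patreon’s policy language.)

```python
# Hypothetical precedence rule: what the user does outranks what the
# user says. A claim is discounted the moment credible, recorded
# actions contradict it.
def claim_survives(claim: str, contradicting_actions: list) -> bool:
    """A claim stays credible only if nothing on record contradicts it."""
    return len(contradicting_actions) == 0


claim = "I was on the boat just to record everything"
on_record = [
    "tells the operator where to go to cut off the NGO vessel",
    "instructs the operator to get in front of the ship",
]
print(claim_survives(claim, on_record))  # False: actions outrank words
```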

What if the boat had stayed alongside the ship instead of obstructing it? That would be a different story, says Jack. He would have interpreted that as an exercise of free speech and wouldn’t have dinged her account.

What we need is transparency

Jack’s video explainer, posted in July 2017, was both gutsy and ahead of its time. “A ton of people are going to be angry and try to find holes in our logic,” he says to the camera. “We’re going to start a conversation that most companies just want to completely avoid.” He was right.

The tech industry is terrified to talk about common sense value judgments, even if it would help us respond quickly to bad behavior on our platforms.

That's why I believe Jack's experiment with transparency contains a blueprint for the future of content moderation. While other tech CEOs pledged to remain neutral, Jack drew a line in the sand and explained exactly what crossing it looks like.

Whether it covers racism, harassment or a nothing-that-will-kill-anybody clause, an unambiguous acceptable use policy and a transparent review method allow you to be honest about what is and isn’t permitted on your platform.

And with bad actors setting off social media crises for tech companies on a regular basis, you’re going to need a better way to respond.

Patreon’s already on board. Who’s next?
