Moderation

Here's how we promote safety.

Our moderation flow

  1. Define our moderation principles

    Together with our users, we set moderation principles and ground rules. Subscribers should be able to understand why we would warn about or hide any given piece of content.

  2. Receive reports

    Users who subscribe to our service can report content to us for moderation review (see the sketch after this list).

  3. Deliberate and moderate

    After receiving reports, our team of moderators reviews the content and decides whether to apply moderation labels to accounts or content.

  4. Receive appeals

    Users whose accounts or content receive moderation labels from our service can appeal, and we'll handle appeals transparently and accountably.
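For readers curious about the mechanics: reports reach a moderation service through the AT Protocol's com.atproto.moderation.createReport endpoint. Below is a minimal TypeScript sketch of what filing a report can look like, assuming the official @atproto/api client; the handle, password, post URI, and CID are placeholders, and routing a report to a specific labeler may additionally require proxy headers.

  import { AtpAgent } from '@atproto/api'

  async function reportSpamPost() {
    // Placeholder service and credentials; substitute your own PDS and app password.
    const agent = new AtpAgent({ service: 'https://bsky.social' })
    await agent.login({ identifier: 'alice.example.com', password: 'app-password' })

    // Report a specific post as spam, identified by a strong reference (URI + CID).
    // The URI and CID values here are placeholders, not real records.
    await agent.com.atproto.moderation.createReport({
      reasonType: 'com.atproto.moderation.defs#reasonSpam',
      reason: 'Repeated unsolicited commercial replies',
      subject: {
        $type: 'com.atproto.repo.strongRef',
        uri: 'at://did:plc:example/app.bsky.feed.post/exampleRkey',
        cid: 'bafyreiexamplecid',
      },
    })
  }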

Moderation Principles

Our list of moderation principles is ever-evolving. Engage with us on Bluesky to offer your thoughts. Subscribe and be part of building trust.

Here is our current list of moderation labels and principles.

We moderate content that is:

  • Abusive, such as content that engages in bullying or harassment of other users.
  • Spam, such as repeated, unsolicited commercial messaging that is disruptive, as distinct from more traditional advertising.
  • Misleading, such as content that is verifiably false and intended to generate outrage or manipulate community discussions.
  • Botted, such as meaningless, contextless, misleading, or abusive content that is generated algorithmically and intended to manipulate discussions.
  • We also offer an optional NSFW tag for content that isn't safe for public or workplace viewing. Users who subscribe to our service must opt in to filtering on this label.

In general, we moderate content with a warning that can be clicked through or ignored. We reserve the right to hide content that is extremely offensive or dangerous.

Accountability

Accountability is key to trust in communities.

We'll publish statistics on our moderation decisions here.