Here's how we promote safety.
We set our moderation principles and ground rules collaboratively with our users. It should be clear to anyone who subscribes to our service why we warn about or hide a given piece of content.
Users who subscribe to our service can report content to us for moderation review.
After receiving reports, our team of moderators reviews the reported content and determines whether to apply moderation labels to the content or the account that posted it.
Users who receive moderation labels from our service can appeal, and we'll be transparent and accountable in how we handle those appeals.
Our list of moderation principles is ever-evolving. Engage with us on Bluesky to offer your thoughts. Subscribe and be part of building trust.
Here is our current list of moderation labels and principles.
We moderate content that is:
In general, we moderate content with a warning that can be clicked through or dismissed. We reserve the right to hide content that is extremely offensive or dangerous.
Accountability is key to trust in communities.
We'll publish statistics on our moderation decisions here.