This is an example from another startup, a team of 16 that currently takes moderation very seriously. I've taken the liberty of copying it to my blog because I think it clearly shows Spoutible's intentions regarding racism.

Spoutible's stance on moderation and racism

As a responsible platform, we are committed to providing a safe and inclusive environment for all our users. Unfortunately, yesterday we witnessed a disturbing wave of racist behavior on our platform: a group of Nazis created accounts with offensive handles and spewed racist and antisemitic rhetoric.

We take such behavior very seriously and have taken swift action to address it. We promptly suspended all the accounts that we discovered and those that were reported by our community. In addition, we have activated safety tools to prevent the creation of similar accounts in the future. Our platform has a zero-tolerance policy towards any form of racism, and we will continue to take measures to protect our users from such content.

However, we cannot do this alone. We urge our users to be vigilant and report any accounts that violate our policies immediately. Our team is available around the clock to review and take action on any reports received. Our goal is to create a platform where everyone feels safe and respected.

We would like to reiterate our stance against racism and hate speech. Such behavior goes against the fundamental values of our platform and has no place on Spoutible. We are committed to fostering a community that is inclusive, diverse, and respectful of all individuals. We thank our users for their continued support and cooperation in making our platform a welcoming space for all.

How to bake healthy moderation principles into the code? (this part is not from Spoutible)

Baking moderation into the code refers to incorporating moderation mechanisms directly into the software code of a startup’s platform to combat racism and other forms of abuse. Here are five steps that any startup can take to implement a good moderation strategy:

  • Establish clear community guidelines: Define a set of guidelines that explicitly state what constitutes racism and other forms of abuse on the platform. Make these guidelines easily accessible to all users. By setting clear expectations from the start, you create a foundation for effective moderation.

  • Implement automated content filtering: Use automated content filtering algorithms that scan user-generated content in real time. These algorithms should flag and remove content that violates the community guidelines, and machine learning techniques can be employed to continuously improve their accuracy. A minimal sketch of such a filter follows this list.

  • User reporting system: Develop a user reporting system that lets users easily report any instance of racism or abuse they encounter, and encourage them to do so promptly. The system should also include mechanisms to handle false reports and protect against abuse of the reporting feature itself. A sketch of such a reporting pipeline appears after this list.

  • Human moderation team: Employ a dedicated team of moderators who review reported content, make judgments, and take appropriate action. Moderators should be trained to understand the community guidelines and apply them consistently, and they should have the tools and resources to carry out their tasks efficiently. A sketch of a simple decision audit trail appears after this list.

  • Iterative improvement: Regularly evaluate the effectiveness of the moderation strategy and iterate on it based on user feedback, emerging trends, and changes in the platform's user base. Continuously update the content filtering algorithms to adapt to evolving patterns of racism and abuse, and actively engage with users to address concerns and maintain transparency. A sketch of one such feedback metric appears below.
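
To make the filtering step concrete, here is a minimal Python sketch of an automated content filter. Everything in it is an illustrative assumption: the placeholder patterns stand in for a maintained blocklist, the normalization table only undoes the simplest character substitutions, and classifier_score is a stub where a trained toxicity model would plug in.

```python
import re

# Placeholder patterns; a real deployment would load a maintained,
# regularly updated blocklist rather than hard-coding terms.
BLOCKED_PATTERNS = [
    re.compile(r"\bslur_one\b", re.IGNORECASE),
    re.compile(r"\bslur_two\b", re.IGNORECASE),
]

# Undo common character substitutions so "s1ur" still matches "slur".
LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e",
                          "4": "a", "5": "s", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lower-case the text and undo simple obfuscation before matching."""
    return text.lower().translate(LEET_MAP)

def classifier_score(text: str) -> float:
    """Stub for a trained toxicity model; returns a violation probability."""
    return 0.0  # assumption: replace with a real model's prediction

def check_content(text: str) -> str:
    """Return a moderation verdict: 'allow', 'flag', or 'block'."""
    normalized = normalize(text)
    if any(p.search(normalized) for p in BLOCKED_PATTERNS):
        return "block"   # hard violation: remove immediately
    if classifier_score(normalized) > 0.8:
        return "flag"    # uncertain: route to a human moderator
    return "allow"
```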
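
For the reporting step, the sketch below assumes two illustrative policies: a post is escalated to human review once three independent users report it, and a single account may file at most 20 reports per hour. enqueue_for_review is a hypothetical hand-off to the moderation queue. Deduplication (a set of reporter ids per post) and throttling are the two guards against false reports mentioned above.

```python
import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ReportStore:
    """Collects user reports, deduplicates them, and rate-limits reporters."""
    max_reports_per_hour: int = 20  # assumed limit, tune per platform
    reports: dict = field(default_factory=lambda: defaultdict(set))        # post_id -> reporter ids
    reporter_log: dict = field(default_factory=lambda: defaultdict(list))  # reporter -> timestamps

    def submit(self, reporter_id: str, post_id: str, reason: str) -> bool:
        now = time.time()
        recent = [t for t in self.reporter_log[reporter_id] if now - t < 3600]
        if len(recent) >= self.max_reports_per_hour:
            return False  # throttle accounts that spam the report feature
        self.reporter_log[reporter_id] = recent + [now]
        self.reports[post_id].add(reporter_id)  # a set ignores duplicate reports
        if len(self.reports[post_id]) >= 3:     # assumed escalation threshold
            enqueue_for_review(post_id, reason)
        return True

def enqueue_for_review(post_id: str, reason: str) -> None:
    """Hypothetical hand-off to the human moderation queue."""
    print(f"queued {post_id} for human review ({reason})")
```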
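
For the human moderation step, one way to keep decisions consistent is to constrain moderators to a fixed set of actions and record every ruling in an audit trail. The action names and the in-memory log are assumptions for illustration; a real system would persist decisions to a database.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Action(Enum):
    DISMISS = "dismiss"                  # report was unfounded
    REMOVE_POST = "remove_post"          # content violates the guidelines
    SUSPEND_ACCOUNT = "suspend_account"  # repeat or severe offender

@dataclass
class Decision:
    post_id: str
    moderator_id: str
    action: Action
    note: str
    decided_at: datetime

AUDIT_LOG: list[Decision] = []  # in-memory stand-in for a database table

def record_decision(post_id: str, moderator_id: str,
                    action: Action, note: str = "") -> Decision:
    """Record a moderator ruling so actions stay consistent and reviewable."""
    decision = Decision(post_id, moderator_id, action, note,
                        datetime.now(timezone.utc))
    AUDIT_LOG.append(decision)
    return decision
```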
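
Finally, iterative improvement needs a feedback loop: compare what the filter blocked with what human moderators actually confirmed. This sketch computes the filter's precision from two hypothetical mappings; recall, per-category breakdowns, and trend tracking would follow the same pattern.

```python
def filter_precision(filter_verdicts: dict[str, str],
                     human_verdicts: dict[str, bool]) -> float:
    """Precision of the automated filter, measured against human review.

    filter_verdicts: post_id -> 'allow' | 'flag' | 'block' (automated verdicts)
    human_verdicts:  post_id -> True if a moderator confirmed a violation
    """
    blocked = [p for p, v in filter_verdicts.items() if v == "block"]
    if not blocked:
        return 1.0  # nothing was blocked, so nothing was blocked wrongly
    agreed = sum(1 for p in blocked if human_verdicts.get(p, False))
    return agreed / len(blocked)

# Example: two automated blocks, one confirmed by a moderator -> precision 0.5.
verdicts = {"p1": "block", "p2": "block", "p3": "allow"}
humans = {"p1": True, "p2": False}
print(filter_precision(verdicts, humans))  # 0.5
```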

By implementing these steps, a startup can proactively address racism and other forms of abuse on their platform, creating a safer and more inclusive environment for their users. Baking moderation into the code ensures that moderation mechanisms are an integral part of the platform’s infrastructure, leading to more effective and efficient moderation practices.