Platform Name: Meta
Reporting Period: January 1, 2025 – December 31, 2025
Published Date: February 11, 2026
## Overview
Meta (https://meta.discourse.org) is an online platform for users of our software to communicate with CDCK and with each other. Users are based around the world, including the European Union (EU). This transparency report is published in compliance with the Digital Services Act (DSA) to provide insights into content moderation practices, user reports, and enforcement actions taken on our platform during the reporting period.
## Statistics
| Category | Number of Cases |
|---|---|
| Content flagged by users | 2216 |
| Content flagged by automation | 1648 |
| Posts deleted | 1507 |
| Posts hidden | 1289 |
| Warnings issued | 35 |
| Accounts deleted | 1526 |
| Accounts suspended | 19 |
| Accounts silenced | 1637 |
| Government takedown orders and information requests | 0 |
| Illegal content reports by public | 0 |
| Appeals submitted | 2 |
| Successful appeals | 0 |
## How our flagging system works

### Flagging system
Users can report content using the built-in Discourse flagging system, and we also flag content automatically. Flags are reviewed manually by moderators, typically within one hour and often faster. Moderators delete content that violates our Terms of Service.
### Automated content moderation
We use the following built-in tools to automate content moderation on Meta. These tools are highly accurate, but moderators still verify every flag manually to catch false positives.
Spammers are flagged automatically as they sign up and attempt to post or add content to their profiles. Content posted by existing members is also flagged automatically based on the following criteria:
- **Watched words**: posts containing words deemed unsafe, illegal, or spam
- **Title formatting**: titles that are too short, too long, or poorly formatted are not allowed
- **Domain control**: certain domains or URLs are restricted
- **Trust levels**: new users are restricted from certain actions, such as creating new topics
- **Rate limits**: how often the same user can post within a specified time window
- **Post editing restrictions**: a time window limits how long after creation a post can still be edited
- **AI spam detection**: artificial intelligence auto-detects potential spam and alerts human moderators
## Training of Moderators
We maintain documented procedures for handling flags, and our staff are trained on them.
## Appeals of Content Moderation Decisions
- **Notification**: users receive a notification when their content is removed or restricted.
- **Right to appeal**: users can appeal moderation decisions via moderators@discourse.org. Appeals are reviewed within 7 days.
- **Outcome of appeals**: if an appeal is successful, the content is reinstated and any penalties are removed.
## Enforcement of Terms & Community Guidelines
We enforce our Terms of Service through:
- Automated spam detection mechanisms (including AI).
- Moderator review of flagged posts.
- Sanctions including content removal, warnings, temporary suspensions, or permanent bans.
For more detail, see our content moderation guidelines at https://meta.discourse.org/t/content-moderation/296692.
## Future Improvements
We are continuously improving our moderation and transparency measures to ensure compliance with the DSA and maintain a safe online community.
For questions or concerns regarding this report, please contact dsa@discourse.org.