Content moderation

Table of contents

  • Content moderation engine
  • How content moderation works
  • Enable content moderation
  • Content moderation queue
  • Analytics and history
  • Report inappropriate content

Content Moderation is an AI-powered tool that helps companies keep their intranet sites free of offensive feed posts and comments.

Content moderation engine

Simpplr's content moderation engine is an AI-powered algorithm that runs on every feed post, comment, and reply to check for objectionable material. It's built to detect the following types of objectionable content:

Content type    Flagged in Simpplr as:
hate            hateful content
harassment      harassment
threat          a threat
self-harm       self-harm related content
sexual          sexually explicit content
violence        violent content



Content moderation does not apply to content items (pages, events, albums, and blog posts), editable profile fields, or the Q&A feature; it applies only to feed posts, comments, and replies. Currently, content moderation scans only approximately the first 500 words (about one page of text) of any given post.

How content moderation works

When a feed post (home or site), comment, or reply is submitted, it passes through the content moderation engine. If the engine doesn't flag anything, the content is posted as normal. If the content is flagged, the poster is notified, shown the reason for the flag, and given the option to edit the post or post it anyway. If posted unedited, the content is sent to the moderation queue, where a content moderator decides whether to keep or hide it. Users can also report feed posts and comments as offensive and give a reason for the report; these reports are likewise sent to the content moderator, who decides whether to keep or hide the posts.
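The submission flow above can be sketched in Python. This is an illustrative sketch only, not Simpplr's implementation; the function names, category labels, and callbacks (`classify`, `confirm_post`) are all hypothetical.

```python
# Hypothetical sketch of the moderation flow described above;
# not Simpplr's actual implementation.

FLAGGED_CATEGORIES = {
    "hate", "harassment", "threat", "self-harm", "sexual", "violence"
}

def moderate(text, classify):
    """Return the set of flagged categories for roughly the first 500 words."""
    truncated = " ".join(text.split()[:500])  # engine scans ~first 500 words
    return {c for c in classify(truncated) if c in FLAGGED_CATEGORIES}

def submit_post(text, classify, confirm_post):
    """Simulate submitting a feed post, comment, or reply."""
    flags = moderate(text, classify)
    if not flags:
        return ("published", flags)  # nothing flagged: posted as normal
    # Poster is shown the reason and can edit or post anyway.
    if confirm_post(flags):
        return ("queued_for_moderation", flags)  # moderator keeps or hides it
    return ("returned_for_edit", flags)

# Example with a toy classifier that flags nothing:
state, flags = submit_post("Hello team!", lambda t: set(), lambda f: True)
```

The same `submit_post` path covers user reports as well: a reported item simply enters the `queued_for_moderation` state directly.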



Currently, content moderation does not support the Simpplr Q&A feature. However, support for Q&A has been added to the product roadmap.

Enable content moderation

By default, the engine is turned off. To enable content moderation, go to Manage > Application > Setup > Privileges > Content moderation and click Use content moderation. You can also add content moderators here.


Content moderation queue

Once content moderation is enabled for your organization, content moderators can view their queues by going to the User menu > Content moderation and opening the Queue tab. App managers go to Manage > Content moderation.


Click Remove comment to remove a comment. Removed comments remain visible to moderators in the Analytics section of Content moderation.



Analytics and history

The Analytics tab includes a full analytics page that breaks down the number of incidents by people, sites, feeds, and more, allowing the company to find trouble areas, reach out to repeat offenders, and understand persistent issues.

Analytics can be filtered by site, person, or content type. 


The History tab lists a history of moderated content, the decision made on each item, and a link to the content. The history can be filtered and downloaded as a CSV.


Report inappropriate content

Users can also report inappropriate content. When they do, a modal opens prompting them for a reason for the report, and the content is added to the moderators' queue.




Currently, content moderation supports content written in the following languages:

  • English
  • Spanish
  • Danish
  • German
  • French
  • Portuguese
  • Italian


  • Do app managers/content moderators receive an actionable notification in-app at the time the content is flagged/reported or do they need to go to the Content moderation queue to see new items in the queue?

  • Hi Betsy. Great question. Currently there are no notifications sent to App or Content managers regarding reported posts. They'll need to go to the content moderation queue in order to see any flagged content.

  • Hi, Matthew,

    Just to clarify your answer to Betsy's question. App managers/content managers don't receive ANY notifications of new items in the content queue? Whether it's due to a person posting flagged content or due to a person reporting someone else's content? The only way we'd know about it is to set ourselves reminders to check the queue?


  • Michelle, yes, that's correct. As of now there are no notifications sent to the management team regarding new content flagged for moderation. There is an enhancement request submitted with our Product team to get this feature added though.

  • Great! Glad to hear an enhancement request has been submitted :)

  • For content moderation, in addition to what the system already has as prerequisites for flagging content, can we add terms or modify what is available and considered objectionable content?

  • Hi Rob. There is no way to add custom terms and phrases to be flagged for content moderation. The feature is AI-powered and takes context into consideration.

  • Hello, I am wondering if the latest releases/updates have included the notification for content moderation? Thank you.

  • Hi Alyssa. Content moderation notifications are not currently available, but are expected to be rolled out in the next quarter. 


