Content Moderation is an AI-powered tool that helps keep your organization's intranet sites free of offensive feed posts and comments.
Content moderation engine
Simpplr's content moderation engine is an AI-powered algorithm that runs on every feed post, comment, and reply to screen for objectionable content. It's built to detect six types: obscenities, insults, threats, sexually explicit content, identity attacks, and severe toxicity.
Note: Content moderation applies only to feed posts, comments, and replies. It does not apply to content (pages, events, albums, and blog posts), editable profile fields, or the Q&A feature.
How content moderation works
When a feed post (home or site), comment, or reply is submitted, it passes through the content moderation engine. If the engine doesn't flag anything, the content is posted as normal. If content is flagged, the poster is notified, given the reason for the flag, and offered the option to edit the post or continue posting it. If posted unedited, the content is sent to the moderation queue, where a content moderator decides whether to keep or hide it. Users can also report feed posts and comments as offensive and give a reason for the report; these reports are likewise sent to the content moderator, who decides whether to keep or hide the posts.
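The flow above can be sketched in a few lines of Python. This is purely illustrative: the category names match the six types the engine detects, but every function and field name here is hypothetical, not Simpplr's actual API, and the keyword check stands in for the real AI model.

```python
# Illustrative sketch of the moderation flow; all names are hypothetical.

FLAG_CATEGORIES = [
    "obscenity", "insult", "threat",
    "sexually_explicit", "identity_attack", "severe_toxicity",
]

def detect_flags(text):
    """Stand-in for the AI engine: return the categories flagged in text.

    A real engine scores the text with an ML model; this toy version just
    looks for a placeholder keyword per category.
    """
    return [cat for cat in FLAG_CATEGORIES if cat in text]

def submit_feed_item(text, moderation_queue, post_unedited=True):
    """Walk a feed post, comment, or reply through the moderation flow."""
    flags = detect_flags(text)
    if not flags:
        # Nothing flagged: posted as normal.
        return {"status": "posted", "flags": []}
    # The poster is notified of the reason and may edit or post anyway.
    if post_unedited:
        # Posted unedited: the item goes to the moderation queue,
        # where a moderator decides whether to keep or hide it.
        moderation_queue.append({"text": text, "flags": flags})
        return {"status": "posted_pending_review", "flags": flags}
    return {"status": "returned_for_edit", "flags": flags}

queue = []
clean = submit_feed_item("Great update, team!", queue)
flagged = submit_feed_item("this is an insult", queue)  # placeholder keyword
```

In this sketch, the clean post is published immediately, while the flagged one is published but lands in the moderator's queue for a keep-or-hide decision.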
Note: Content moderation does not currently support the Simpplr Q&A feature; support has been added to the product roadmap.
Enable content moderation
By default, the engine is turned off. To enable content moderation, go to Manage App > Setup > Privileges > Content moderation. Click Use content moderation. Content moderators can be added here.
Content moderation queue
Once content moderation is enabled for your organization, content moderators can view their queues by going to User menu > Content moderation and opening the Queue tab. App managers go to Manage > Content Moderation.
Click Remove comment to remove a comment from the feed. Removed comments remain visible to moderators in the Analytics section of Content Moderation.
Analytics and history
The Analytics tab includes a full analytics page that breaks down the number of incidents by people, sites, feeds, and more, helping your organization find trouble areas, reach out to repeat offenders, and identify persistent issues.
Analytics can be filtered by site, person, or content type.
The History tab lists previously moderated content, the decision made on each item, and a link to the content. The list can be filtered and downloaded as a CSV.
Report inappropriate content
Users can also report inappropriate content directly. When a user reports a post or comment, a modal opens prompting them for a reason, and the content is added to the moderators' queue.