Overview
Content Moderation is an AI-powered tool that helps companies keep their intranet sites free of offensive feed posts and comments.
Content moderation engine
Simpplr's content moderation engine is an AI-powered algorithm that runs on every feed post, comment, and reply to screen for objectionable material. It's built to detect six types of objectionable content: obscenities, insults, threats, sexually explicit content, identity attacks, and severe toxicity.
Note:
Content moderation does not apply to content (pages, events, albums, and blog posts) or to the Q&A feature; it covers only feed posts, comments, and replies.
How content moderation works
When a feed post (home or site), comment, or reply is submitted, it runs through the content moderation engine. If the engine doesn't flag anything, the content is posted as normal. If something is flagged, the poster is notified, shown the reason it was flagged, and given the option to edit the post or continue posting it. If posted unedited, the content is sent to the moderation queue, where a content moderator decides whether to keep or hide it. Users can also report feed posts and comments as offensive and give a reason for the report; reported content is likewise sent to the content moderator, who decides whether to keep or hide it.
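The flow above can be sketched as a simple decision function. This is only an illustration of the described behavior; the names (`detect`, `notify_poster`, `moderation_queue`) are hypothetical stand-ins, not Simpplr's actual API.

```python
# Illustrative sketch of the moderation flow described above.
# The engine, category names, and queue are hypothetical stand-ins.

FLAGGED_CATEGORIES = {
    "obscenity", "insult", "threat",
    "sexually_explicit", "identity_attack", "severe_toxicity",
}

def submit(text, detect, notify_poster, moderation_queue):
    """Run a new post/comment/reply through the moderation flow."""
    categories = detect(text) & FLAGGED_CATEGORIES
    if not categories:
        return "posted"                       # nothing flagged: publish normally
    # Flagged: tell the poster why, and let them edit or post anyway.
    choice = notify_poster(text, categories)  # returns "edit" or "post"
    if choice == "post":
        moderation_queue.append(text)         # unedited flagged content goes to moderators
        return "posted_pending_review"
    return "editing"
```

A moderator would then work through `moderation_queue`, keeping or hiding each item.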
Note:
Currently, content moderation does not support the Simpplr Q&A feature. However, this has been added to the product roadmap.
Enable content moderation
By default, the engine is turned off. To enable content moderation, go to Manage App > Setup > Privileges > Content moderation and click Use content moderation. Content moderators can also be added here.
Content moderation queue
Once content moderation is enabled for your organization, content moderators can view their queue by going to User menu > Content moderation and opening the Queue tab. App managers go to Manage > Content Moderation.
Click Remove comment to remove a comment. If removed, a comment will remain visible to moderators in the analytics section of Content Moderation.
Analytics and history
The Analytics tab includes a full analytics page that breaks down the number of incidents by people, sites, feeds, and so on, allowing the company to identify trouble areas, reach out to repeat offenders, and understand persistent issues.
Analytics can be filtered by site, person, or content type.
The History tab lists moderated content, the decision made on each item, and a link to the content. The list can be filtered and downloaded as a CSV.
Report inappropriate content
Users can also report inappropriate content. When they do, a modal opens prompting them for a reason, and the content is added to the moderators' queue.
Comments
Do app managers/content moderators receive an actionable notification in-app at the time the content is flagged/reported or do they need to go to the Content moderation queue to see new items in the queue?
Hi Betsy. Great question. Currently there are no notifications sent to App or Content managers regarding reported posts. They'll need to go to the content moderation queue in order to see any flagged content.
Hi, Matthew,
Just to clarify your answer to Betsy's question: app managers/content managers don't receive ANY notifications of new items in the content queue, whether it's due to a person posting flagged content or to a person reporting someone else's content? The only way we'd know about it is to set ourselves reminders to check the queue?
Michelle
Michelle, yes, that's correct. As of now there are no notifications sent to the management team regarding new content flagged for moderation. There is an enhancement request submitted with our Product team to get this feature added though.
Great! Glad to hear an enhancement request has been submitted :)