<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>social media &#8211; Online Moderation</title>
	<atom:link href="https://www.onlinemoderation.com/tag/social-media/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.onlinemoderation.com</link>
	<description>Social Media Management Services &#38; Content Moderation That Flex With Your Needs</description>
	<lastBuildDate>Wed, 19 Jun 2019 18:28:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>Facebook Doubles Down on Content Moderators</title>
		<link>https://www.onlinemoderation.com/facebook-doubles-content-moderators/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=facebook-doubles-content-moderators</link>
		
		<dc:creator><![CDATA[Mike Merriman]]></dc:creator>
		<pubDate>Fri, 05 May 2017 13:37:14 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[online moderator]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">https://www.onlinemoderation.com/?p=1608</guid>

					<description><![CDATA[<p>Unless you’ve been hiding under a rock, you’ve heard the stories of murders and suicides posted on Facebook.  In order to keep that type of content away from the public, Facebook has just announced that they will hire an additional 3000 content moderators.  This is on top of approximately 4500 current moderators.   In a recent post, [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/facebook-doubles-content-moderators/">Facebook Doubles Down on Content Moderators</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img fetchpriority="high" decoding="async" class="alignleft size-medium wp-image-1587" src="https://www.onlinemoderation.com/wp-content/uploads/FB-f-Logo__blue_1024-300x300.png" alt="" width="300" height="300" />Unless you’ve been hiding under a rock, you’ve heard the stories of murders and suicides posted on Facebook.  In order to keep that type of content away from the public, Facebook has just announced that they will <a href="https://arstechnica.com/tech-policy/2017/05/facebook-promises-to-hire-3000-people-to-moderate-content/" target="_blank" rel="noopener noreferrer">hire an additional 3000 content moderators</a>.  This is on top of approximately 4500 current moderators.  In <a href="https://www.facebook.com/zuck/posts/10103695315624661" target="_blank" rel="noopener noreferrer">a recent post</a>, Mark Zuckerberg said the content moderators will &#8220;help us get better at removing things we don&#8217;t allow on Facebook like hate speech and child exploitation.&#8221; Additionally, he said they’ll work with law enforcement to help users &#8220;because they&#8217;re about to harm themselves, or because they&#8217;re in danger from someone else.&#8221;</p>
<p>It’s unclear whether Facebook will hire these moderators directly or outsource the work.  Nor is there any indication where in the world these additional resources will reside.  One thing, however, is certain – with over one billion active Facebook users, and over 5 billion pieces of content shared daily, this is a huge and necessary job.  We at Mzinga strongly believe in protecting the public, and children specifically, and commend Facebook for the attention they are giving this issue.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/facebook-doubles-content-moderators/">Facebook Doubles Down on Content Moderators</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Social Media Crisis?  Take ownership of the issue.</title>
		<link>https://www.onlinemoderation.com/social-media-crisis-take-onwership-issue/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=social-media-crisis-take-onwership-issue</link>
		
		<dc:creator><![CDATA[Mike Merriman]]></dc:creator>
		<pubDate>Mon, 01 May 2017 14:51:02 +0000</pubDate>
				<category><![CDATA[Reputation Management]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[crisis management]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">https://www.onlinemoderation.com/?p=1580</guid>

					<description><![CDATA[<p>Yup &#8211; another airline incident caught on film. This time it was American Airlines flight 591 #AA591. While the incident was deplorable, the good news is  that American Airlines got out in front of the backlash.  They took ownership of the incident and defused any impending actions by the public.  See their almost immediate response. [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/social-media-crisis-take-onwership-issue/">Social Media Crisis?  Take ownership of the issue.</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Yup &#8211; another airline incident caught on film. This time it was American Airlines flight 591 <a href="https://twitter.com/hashtag/AA591" target="_blank" rel="noopener noreferrer">#AA591</a>. While the incident was deplorable, the good news is  that American Airlines got out in front of the backlash.  They took ownership of the incident and defused any impending actions by the public.  See their <a href="http://news.aa.com/press-releases/press-release-details/2017/American-Airlines-comment-on-Flight-591/default.aspx" target="_blank" rel="noopener noreferrer">almost immediate response</a>.  &#8220;We have seen the video and have already started an investigation to obtain the facts. What we see on this video does not reflect our values or how we care for our customers. We are deeply sorry for the pain we have caused this passenger and her family and to any other customers affected by the incident. We are making sure all of her family&#8217;s needs are being met while she is in our care. After electing to take another flight, we are taking special care of her and her family and upgrading them to first class for the remainder of their international trip.</p>
<p>The actions of our team member captured here do not appear to reflect patience or empathy, two values necessary for customer care. In short, we are disappointed by these actions. The American team member has been removed from duty while we immediately investigate this incident.&#8221;</p>
<p>Do you have a Social Media Crisis Plan?  <a href="https://www.onlinemoderation.com/crisis-management/" target="_blank" rel="noopener noreferrer">Let Mzinga help.</a></p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/social-media-crisis-take-onwership-issue/">Social Media Crisis?  Take ownership of the issue.</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The first 24 hours in a social media crisis are critical &#8211; is United Airlines listening?</title>
		<link>https://www.onlinemoderation.com/first-24-hours-social-media-crisis-critical-united-airlines/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=first-24-hours-social-media-crisis-critical-united-airlines</link>
		
		<dc:creator><![CDATA[Mike Merriman]]></dc:creator>
		<pubDate>Mon, 10 Apr 2017 16:35:22 +0000</pubDate>
				<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[Reputation Management]]></category>
		<category><![CDATA[Social Listening]]></category>
		<category><![CDATA[crisis management]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">https://www.onlinemoderation.com/?p=1337</guid>

					<description><![CDATA[<p>The first headline I saw in my newsfeed today was “Bloodied Passenger Dragged From United Flight At O&#8217;Hare Airport.&#8221;  Upon reading the story in numerous international publications, I checked various social media channels to learn more about the incident.  Not surprisingly #BoycottUnited and #Flight3411 are trending hashtags – and the story doesn’t get better for [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/first-24-hours-social-media-crisis-critical-united-airlines/">The first 24 hours in a social media crisis are critical &#8211; is United Airlines listening?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The first headline I saw in my newsfeed today was “Bloodied Passenger Dragged From United Flight At O&#8217;Hare Airport.&#8221;  Upon reading the <a href="https://www.boston.com/news/national-news/2017/04/10/video-shows-guards-dragging-passenger-off-united-flight" target="_blank" rel="noopener noreferrer">story</a> in numerous international publications, I checked various social media channels to learn more about the incident.  Not surprisingly, <a href="https://twitter.com/hashtag/boycottunited" target="_blank" rel="noopener noreferrer">#BoycottUnited</a> and <a href="https://twitter.com/hashtag/flight3411?src=hash" target="_blank" rel="noopener noreferrer">#Flight3411</a> are trending hashtags – and the story doesn’t get better for United the more I read.  After last week’s <a href="https://twitter.com/hashtag/leggingsgate?src=hash" target="_blank" rel="noopener noreferrer">#leggingsgate</a> incident, I fully expected to see a concerted effort by United’s team to deal with the chatter.  Listen to all the experts – the first 24 hours in a social media crisis are critical.  Communications in this phase are the number one priority.  A sound communications strategy starts with acknowledging the incident occurred.  Next step – apologize – without pointing fingers at people or policy.  Finally, develop and commit to a plan of action – which may involve holding employees or partners accountable, and changing policies and procedures.  The goal here is to understand what happened, prevent it from happening again, and most importantly keep your customers loyal.  There’s plenty of noise on social media now calling for United’s head.  United has chosen to defend their position with “We apologize for the overbook situation.”  This position doesn’t acknowledge the results of the incident and, while it uses the word, isn’t an apology.</p>
<p>Don’t be United.  Develop a sound Social Media Crisis plan in the event you ever need it.  <a href="https://www.onlinemoderation.com/get-started/" target="_blank" rel="noopener noreferrer">Mzinga can help</a>.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/first-24-hours-social-media-crisis-critical-united-airlines/">The first 24 hours in a social media crisis are critical &#8211; is United Airlines listening?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>United Airlines in a Social Media Crisis as Twitter blows up over (gasp!!) spandex.</title>
		<link>https://www.onlinemoderation.com/united-airlines-in-a-social-media-crisis-as-twitter-blows-up-over-gasp-spandex/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=united-airlines-in-a-social-media-crisis-as-twitter-blows-up-over-gasp-spandex</link>
		
		<dc:creator><![CDATA[Mike Merriman]]></dc:creator>
		<pubDate>Mon, 27 Mar 2017 16:49:37 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">https://www.onlinemoderation.com/?p=1245</guid>

					<description><![CDATA[<p>United Airlines made headlines over the weekend as a gate agent refused to allow two young girls to board a flight from Denver to Minneapolis because they were “not properly clothed”.  The infraction – spandex leggings. The young girls donned other clothing and boarded the plane.  The incident was witnessed by Shannon Watts (https://twitter.com/shannonrwatts) founder [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/united-airlines-in-a-social-media-crisis-as-twitter-blows-up-over-gasp-spandex/">United Airlines in a Social Media Crisis as Twitter blows up over (gasp!!) spandex.</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>United Airlines made headlines over the weekend as a gate agent refused to allow two young girls to board a flight from Denver to Minneapolis because they were “not properly clothed”.  The infraction – spandex leggings. The young girls donned other clothing and boarded the plane.  The incident was witnessed by Shannon Watts (<a href="https://twitter.com/shannonrwatts">https://twitter.com/shannonrwatts</a>), founder of <a href="https://t.co/gD9sLfuWHn">Moms Demand Action</a>, who pointed out the hypocrisy of allowing men in shorts to board while denying girls in leggings the same right.  <a href="https://twitter.com/united/status/845999380024836097">Twitter</a> has blown up over this event and drawn many people and activists into the conversation as United has attempted to clarify their position on the event.  In short – the girls were not paying customers, but family of United employees flying as “pass fares” and subject to a stricter dress code than the travelling public – which is rather vaguely termed “barefoot or not properly clothed.”  Regardless of the facts of the event, United’s actions have brought numerous comments calling for boycott and alleging sexism.  To United’s credit, they seem to have a well-defined social media crisis plan – but their position of “<a href="https://hub.united.com/our-customers-leggings-are-welcome-2331263786.html">we’ve done nothing wrong</a>” seems just to be making the hole bigger.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/united-airlines-in-a-social-media-crisis-as-twitter-blows-up-over-gasp-spandex/">United Airlines in a Social Media Crisis as Twitter blows up over (gasp!!) spandex.</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Hate Speech, Free Speech, or a business decision?</title>
		<link>https://www.onlinemoderation.com/hate-speech-free-speech-business-decision/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=hate-speech-free-speech-business-decision</link>
		
		<dc:creator><![CDATA[Mike Merriman]]></dc:creator>
		<pubDate>Tue, 21 Mar 2017 18:51:29 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[hate speech]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1175</guid>

					<description><![CDATA[<p>Many large social media channels are coming under fire for not being “tough on hate speech.”  Refer to this recent article in The Guardian – Face-off between MPs and social media giants over online hate speech.  In the US, while hate crimes are illegal, the constitution gives a great deal of latitude to hate speech [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/hate-speech-free-speech-business-decision/">Hate Speech, Free Speech, or a business decision?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Many large social media channels are coming under fire for not being “tough on hate speech.”  Refer to this recent article in The Guardian – <a href="https://www.theguardian.com/media/2017/mar/14/face-off-mps-and-social-media-giants-online-hate-speech-facebook-twitter" target="_blank" rel="noopener noreferrer">Face-off between MPs and social media giants over online hate speech.</a>  In the US, while hate crimes are illegal, the constitution gives a great deal of latitude to hate speech except where it crosses into personal attacks and “fighting words.”  The <a href="https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/05/07/no-theres-no-hate-speech-exception-to-the-first-amendment/?utm_term=.6a333b40b812" target="_blank" rel="noopener noreferrer">Washington Post</a> has a great piece on this outlining the law and relevant statutes.</p>
<p>One embeddable commenting system, Disqus, is <a href="https://news.fastcompany.com/disqus-where-toxic-breitbart-comments-live-could-be-next-on-the-boycott-list-4031222" target="_blank" rel="noopener noreferrer">facing massive boycotts</a> because it powers commenting on Breitbart and many other website forums.  Is there an easy answer to this?  I’m sure there are two opposing easy answers depending on where you stand.  Where there is no law or statute prohibiting hate speech in commenting systems and forums, the forum owners must define their own policies based on business impacts.  Additionally, vendors such as Disqus need to determine the potential impacts of who they do business with.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/hate-speech-free-speech-business-decision/">Hate Speech, Free Speech, or a business decision?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Online Moderators Keep it Civil, But What About Where They Work?</title>
		<link>https://www.onlinemoderation.com/online-moderators-keep-civil-work/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=online-moderators-keep-civil-work</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 13 Mar 2017 14:21:22 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[online moderator]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1172</guid>

					<description><![CDATA[<p>Online Moderators Keep it Civil, But What About Where They Work? Mzinga moderators spend much of their shifts putting an end to flame wars, banning trolls, handling customer complaints, and keeping the peace.  As Mzinga’s Director of Moderation Services, I ensure that the team works in an environment that encourages and practices civil interaction as [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/online-moderators-keep-civil-work/">Online Moderators Keep it Civil, But What About Where They Work?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Online Moderators Keep it Civil, But What About Where They Work?</p>
<p>Mzinga moderators spend much of their shifts putting an end to flame wars, banning trolls, handling customer complaints, and keeping the peace.  As Mzinga’s Director of Moderation Services, I ensure that the team works in an environment that encourages and practices civil interaction as well.</p>
<p>To produce harmonious workplace conditions, the consulting firm <a href="http://civilitypartners.com/" target="_blank" rel="noopener noreferrer">Civility Partners</a> has established the following guidelines for teams, whether they work together in an office or virtually.  Teams should especially avoid:</p>
<p>&#8212; Aggressive Communication (includes insults or offensive remarks, angry outbursts, avoidance, offensive written communications, and blaming someone for issues that are not their fault or that they have no control over)</p>
<p>&#8212; Humiliation (includes ridiculing or teasing, spreading gossip, taunting (in person or in writing), publicly pointing out mistakes, including those that have already been corrected, and snubbing someone for having a different interpretation of a company policy or management style)</p>
<p>&#8212; Manipulation of Work (includes subverting tasks associated with a person&#8217;s job responsibilities, unmanageable workloads and impossible deadlines, making general statements about poor performance without offering assistance to correct it, and leaving a person out of the correspondence and meeting loop)</p>
<p>Behaviors that contribute to workplace civility are respect, support, encouragement, politeness, openness, appreciation, trust, sensitivity, sincerity, having a positive attitude, taking pride in what you do, and being a good example.</p>
<p>Each company should have an established Company and Management Commitment to Civility that ensures their workers are free from negative, aggressive, and inappropriate behaviors and that the workplace will provide an atmosphere of respect, collaboration, openness, safety, and equality, where complaints about negative workplace behaviors are taken seriously and followed through to resolution.  Every employee, from the CEO to the intern, is given a copy, and a signed copy becomes part of their employee file.  Larger companies will have a training module as part of new employee orientation.</p>
<p>Online moderators keep their clients&#8217; sites free from risks that run from bad publicity to legal liability.  As a result, the burnout rate is high (see my blog entry from a couple of weeks ago about two Microsoft moderators who say they are permanently disabled from moderating disturbing images).  At Mzinga, the moderation team is able to promote civil interaction because it is practiced in their workplace.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/online-moderators-keep-civil-work/">Online Moderators Keep it Civil, But What About Where They Work?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Facebook Adds AI to Suicide Prevention Arsenal</title>
		<link>https://www.onlinemoderation.com/facebook-adds-ai-suicide-prevention-arsenal/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=facebook-adds-ai-suicide-prevention-arsenal</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 06 Mar 2017 14:30:31 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Suicide Prevention]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1168</guid>

					<description><![CDATA[<p>Facebook Adds AI to Suicide Prevention Arsenal More than ten years ago, I complimented Facebook for encouraging its members to send in a report if they saw a post by a member or friend saying they were serious about harming themselves.  If a report was received, Facebook contacted the member with a message of concern, [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/facebook-adds-ai-suicide-prevention-arsenal/">Facebook Adds AI to Suicide Prevention Arsenal</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Facebook Adds AI to Suicide Prevention Arsenal</p>
<p>More than ten years ago, I complimented Facebook for encouraging its members to send in a report if they saw a post by a member or friend saying they were serious about harming themselves.  If a report was received, Facebook contacted the member with a message of concern, along with a list of resources for getting support.</p>
<p>Recently, Facebook <a href="http://newsroom.fb.com/news/2017/03/building-a-safer-community-with-new-suicide-prevention-tools/">announced</a> updated resource tools, as well as the use of artificial intelligence (AI), to offer a more rapid assist to those who may be contemplating suicide.  In addition to an improved support system that is now available on Facebook Live, text and videos are analyzed for content indicating that the member may be considering suicide.</p>
<p>If the software reads triggering words and phrases that indicate a member is at risk, the Facebook Community Operations team is notified.  They will send a message of support and suggest ways the member can seek help if they need it.</p>
<p>Have we seen the last suicide on Facebook Live?  Probably not, but compared with using AI to combat online trolling (e.g., Google’s Perspective), saving lives is a far more tangible and effective use of the technology.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/facebook-adds-ai-suicide-prevention-arsenal/">Facebook Adds AI to Suicide Prevention Arsenal</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</title>
		<link>https://www.onlinemoderation.com/googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 27 Feb 2017 14:52:21 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1164</guid>

					<description><![CDATA[<p>Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin Late last year, I commented on Google’s Jigsaw software, created to apply machine learning to detect and remove harassment and abusive content in areas where users interact online.  At the time, I said that no matter how much Jigsaw learned, it would never be [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin/">Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</p>
<p>Late last year, I commented on Google’s Jigsaw software, created to apply machine learning to detect and remove harassment and abusive content in areas where users interact online.  At the time, I said that no matter how much Jigsaw learned, it would never be smart enough to replace human moderators.</p>
<p>Recently, Google’s Counter Abuse Technology Team released Perspective, the newest Jigsaw tool.  It’s an API that allows users to tap Jigsaw’s library of millions of words and phrases and determine a message’s “toxicity.”  Perspective scans each message and produces a toxicity rating as a percent, based on what panels of users have thought of similar comments.  A comment might be rated, for example, as “8 percent similar to phrases people said were ‘toxic.’”</p>
<p>While several sites, such as the New York Times, are giving it a try (<a href="http://www.perspectiveapi.com/" target="_blank" rel="noopener noreferrer">you can try it as well at the Perspective Demo Site</a>), many believe that, while helpful, it will never be more than a first pass to flag content for subsequent human review.  Perspective advances the learning curve of using Artificial Intelligence to combat online trolling, but it also further illustrates the continuing value of human moderators.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin/">Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>25 Percent of Canadians Are Harassed on Social Media</title>
		<link>https://www.onlinemoderation.com/25-percent-canadians-harassed-social-media/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=25-percent-canadians-harassed-social-media</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 24 Oct 2016 13:08:12 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Bullying]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1053</guid>

					<description><![CDATA[<p>25 Percent of Canadians Are Harassed on Social Media A recent survey by the Angus Reid Public Interest Research Institute found that one-fourth of Canadians are subjected to “unwelcome comments, vicious insults, threats of violence, or worse,” with the bulk of abuse coming from Facebook and Twitter.  The number rises among frequent and younger users. [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/25-percent-canadians-harassed-social-media/">25 Percent of Canadians Are Harassed on Social Media</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>25 Percent of Canadians Are Harassed on Social Media</p>
<p>A recent <a href="http://angusreid.org/social-media/">survey</a> by the <a href="http://angusreid.org/">Angus Reid Public Interest Research Institute</a> found that one-fourth of Canadians are subjected to “unwelcome comments, vicious insults, threats of violence, or worse,” with the bulk of abuse coming from Facebook and Twitter.  The number rises among frequent and younger users.</p>
<p>In response, users have become reluctant to engage in debate with those holding contrary opinions.  Instead, they hold back and self-censor when it comes to controversial topics.  The Institute also found that Canadians believe Internet companies aren’t doing enough to curb harassment and personal attacks.</p>
<p>When queried about where the companies are falling short, users said that they weren’t responsive to complaints and requests for assistance, and they weren’t performing sufficient moderation tasks: finding and removing offensive content in a timely manner.  When given removal policy choices, most wanted the companies to “get tough” by regularly monitoring and moderating interactive areas and actively removing content that does not follow the rules of conduct (along with those who post it).</p>
<p>If a company does not have internal resources to moderate areas where users are allowed to post messages and comments, the best alternative is to hire a vendor who specializes in online moderation.</p>
<p>One such company is Mzinga, which for more than 25 years has been keeping companies (and their users) safe from harassment, offensive personal attacks, and other abuses.  We are also able to triage requests to customer service: providing assistance, answering questions, and routing specific requests to the proper divisions of the company.</p>
<p>Your users and customers are demanding that you take tough action to keep them safe.  Why not take the first step: give us a call and let us show you how Mzinga moderation and customer service triage can revive civility?</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/25-percent-canadians-harassed-social-media/">25 Percent of Canadians Are Harassed on Social Media</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Amazon Bans Giving Free Books to Reviewers</title>
		<link>https://www.onlinemoderation.com/amazon-bans-giving-free-books-reviewers/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=amazon-bans-giving-free-books-reviewers</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Tue, 11 Oct 2016 14:37:26 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Amazon]]></category>
		<category><![CDATA[Product Reviews]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1045</guid>

					<description><![CDATA[<p>Amazon Bans Giving Free Books to Reviewers Last week, Amazon ended its years-old policy of hooking up its reviewers with free or deeply discounted products, which many had complained skewed the ratings into more positive numbers of stars, causing them to appear further towards the top of ratings lists by as many as two stars. [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/amazon-bans-giving-free-books-reviewers/">Amazon Bans Giving Free Books to Reviewers</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Amazon Bans Giving Free Books to Reviewers</p>
<p>Last week, Amazon ended its years-old policy of hooking up its reviewers with free or deeply discounted products, a practice many had complained inflated ratings by as many as two stars and pushed those products further toward the top of ratings lists.</p>
<p>The new policy takes away the vendor freebies, except in the case of books in the Amazon Vine program, where reviewers trusted by Amazon are still able to receive advance copies.  According to an Amazon spokesman, reviews that were received prior to the policy change are only being retroactively removed if they are excessive and don’t comply with prior policy.  That leaves a huge number of reviews with inflated ratings, and buyers will remain leery of their validity.</p>
<p>Amazon also needs to deal with the verified purchase program, where a reviewer who purchases a reviewed item from Amazon receives a “verified purchase” icon on the review.  The program causes readers of reviews without the icon (meaning that the item was purchased at another site or store) to believe that the review is less trustworthy.  If Amazon reviews are to be as unbiased as possible, this program also needs to be revamped.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/amazon-bans-giving-free-books-reviewers/">Amazon Bans Giving Free Books to Reviewers</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
