<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Mzinga Moderators &#8211; Online Moderation</title>
	<atom:link href="https://www.onlinemoderation.com/author/mzingamoderators/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.onlinemoderation.com</link>
	<description>Social Media Management Services &#38; Content Moderation That Flex With Your Needs</description>
	<lastBuildDate>Wed, 19 Jun 2019 18:19:46 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>

	<item>
		<title>Online Moderators Keep it Civil, But What About Where They Work?</title>
		<link>https://www.onlinemoderation.com/online-moderators-keep-civil-work/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=online-moderators-keep-civil-work</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 13 Mar 2017 14:21:22 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[online moderator]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1172</guid>

					<description><![CDATA[<p>Online Moderators Keep it Civil, But What About Where They Work? Mzinga moderators spend much of their shifts putting an end to flame wars, banning trolls, handling customer complaints, and keeping the peace.  As Mzinga’s Director of Moderation Services, I ensure that the team works in an environment that encourages and practices civil interaction as [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/online-moderators-keep-civil-work/">Online Moderators Keep it Civil, But What About Where They Work?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Online Moderators Keep it Civil, But What About Where They Work?</p>
<p>Mzinga moderators spend much of their shifts putting an end to flame wars, banning trolls, handling customer complaints, and keeping the peace.  As Mzinga’s Director of Moderation Services, I ensure that the team works in an environment that encourages and practices civil interaction as well.</p>
<p>To produce harmonious workplace conditions, the consulting firm <a href="http://civilitypartners.com/" target="_blank" rel="noopener noreferrer">Civility Partners</a> has established the following guidelines for teams, whether they work together in an office or virtually.  Teams should especially avoid:</p>
<p>&#8212; Aggressive Communication (includes insults or offensive remarks, angry outbursts, avoidance, offensive written communications, and blaming someone for issues that are not their fault or are beyond their control)</p>
<p>&#8212; Humiliation (includes ridiculing or teasing, spreading gossip, taunting in person or in writing, publicly pointing out mistakes, including ones that have already been corrected, and snubbing someone for having a different interpretation of a company policy or management style)</p>
<p>&#8212; Manipulation of Work (includes subverting tasks associated with a person&#8217;s job responsibilities, assigning unmanageable workloads and impossible deadlines, making general statements about poor performance without offering assistance to correct it, and leaving a person out of the correspondence and meeting loop)</p>
<p>Behaviors that contribute to workplace civility are respect, support, encouragement, politeness, openness, appreciation, trust, sensitivity, sincerity, having a positive attitude, taking pride in what you do, and being a good example.</p>
<p>Each company should have an established Company and Management Commitment to Civility that ensures its workers are free from negative, aggressive, and inappropriate behaviors and that the workplace will provide an atmosphere of respect, collaboration, openness, safety, and equality, where complaints about negative workplace behaviors are taken seriously and followed through to resolution.  Every employee, from the CEO to the intern, is given a copy, and a signed copy becomes part of their employee file.  Larger companies will have a training module as part of new employee orientation.</p>
<p>Online moderators keep their clients&#8217; sites free from risks that run from bad publicity to legal liability.  As a result, the burnout rate is high (see my blog entry from a couple of weeks ago about two Microsoft moderators who say they are permanently disabled from moderating disturbing images).  At Mzinga, the moderation team is able to promote civil interaction because it is practiced in their own workplace.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/online-moderators-keep-civil-work/">Online Moderators Keep it Civil, But What About Where They Work?</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Facebook Adds AI to Suicide Prevention Arsenal</title>
		<link>https://www.onlinemoderation.com/facebook-adds-ai-suicide-prevention-arsenal/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=facebook-adds-ai-suicide-prevention-arsenal</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 06 Mar 2017 14:30:31 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Suicide Prevention]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1168</guid>

					<description><![CDATA[<p>Facebook Adds AI to Suicide Prevention Arsenal More than ten years ago, I complimented Facebook for encouraging its members to send in a report if they saw a post by a member or friend saying they were serious about harming themselves.  If a report was received, Facebook contacted the member with a message of concern, [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/facebook-adds-ai-suicide-prevention-arsenal/">Facebook Adds AI to Suicide Prevention Arsenal</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Facebook Adds AI to Suicide Prevention Arsenal</p>
<p>More than ten years ago, I complimented Facebook for encouraging its members to send in a report if they saw a post by a member or friend saying they were serious about harming themselves.  If a report was received, Facebook contacted the member with a message of concern, along with a list of resources for getting support.</p>
<p>Recently, Facebook <a href="http://newsroom.fb.com/news/2017/03/building-a-safer-community-with-new-suicide-prevention-tools/">announced</a> updated resource tools, as well as the use of artificial intelligence (AI), to offer more rapid assistance to those who may be contemplating suicide.  In addition to an improved support system that is now available on Facebook Live, text and videos are analyzed for content indicating that the member may be considering suicide.</p>
<p>If the software detects triggering words and phrases that indicate a member is at risk, the Facebook Community Operations team is notified.  They will send a message of support and suggest ways the member can seek help if they need it.</p>
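<p>Facebook has not published how its detection works.  As a purely hypothetical sketch, the pattern-triggered escalation described above might look like this in its simplest form (the phrase list and function name are illustrative, not Facebook&#8217;s):</p>
<pre><code># Hypothetical sketch of phrase-triggered escalation; Facebook's
# actual system is not public and likely uses trained classifiers.
RISK_PHRASES = ("no reason to live", "want to end it all", "goodbye forever")

def flag_for_review(post_text):
    """Return True if the post should be escalated for human review."""
    text = post_text.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

if flag_for_review("Lately I feel like there is no reason to live"):
    print("Escalate to the Community Operations team")
</code></pre>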
<p>Have we seen the last suicide on Facebook Live?  Probably not, but compared with using AI to combat online trolling (e.g., Google’s Perspective), saving lives is a far more tangible and effective use of the technology.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/facebook-adds-ai-suicide-prevention-arsenal/">Facebook Adds AI to Suicide Prevention Arsenal</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</title>
		<link>https://www.onlinemoderation.com/googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 27 Feb 2017 14:52:21 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1164</guid>

					<description><![CDATA[<p>Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin Late last year, I commented on Google’s Jigsaw software, created to apply machine learning to detect and remove harassment and abusive content in areas where users interact online.  At the time, I said that no matter how much Jigsaw learned, it would never be [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin/">Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</p>
<p>Late last year, I commented on Google’s Jigsaw software, created to apply machine learning to detect and remove harassment and abusive content in areas where users interact online.  At the time, I said that no matter how much Jigsaw learned, it would never be smart enough to replace human moderators.</p>
<p>Recently, Google’s Counter Abuse Technology Team released Perspective, the newest Jigsaw tool.  It’s an API that lets users tap Jigsaw’s library of millions of words and phrases to determine a message’s “toxicity.”  Perspective scans each message and produces a toxicity rating as a percentage, based on what panels of users have thought of similar comments.  A comment might be rated, for example, as “8 percent similar to phrases people said were ‘toxic.’”</p>
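<p>Perspective’s request format is publicly documented.  Here is a minimal Python sketch of how a site might score a comment, assuming the <code>requests</code> library and an API key obtained from Google (the key below is a placeholder):</p>
<pre><code># Minimal sketch of scoring a comment with the Perspective API
# (v1alpha1, per its public documentation); the key is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text):
    """Return the TOXICITY summary score, from 0.0 to 1.0."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

score = toxicity("you are a total idiot")
print("{:.0f} percent toxic".format(score * 100))
</code></pre>
<p>The returned value mirrors the percentage-style rating described above: multiply by 100 and you have the comment’s “toxicity” score.</p>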
<p>While several sites, such as the New York Times, are giving it a try (<a href="http://www.perspectiveapi.com/" target="_blank" rel="noopener noreferrer">you can try it as well at the Perspective Demo Site</a>), many believe that, while helpful, it will never be more than a first pass that flags content for subsequent human review.  Perspective advances the learning curve of using artificial intelligence to combat online trolling, but it also further illustrates the continuing value of human moderators.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/googles-jigsaw-gets-new-perspective-learning-curve-still-hairpin/">Google’s Jigsaw Gets a New Perspective: Learning Curve Still a Hairpin</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Twitter Throws the Bozo Flag on Trolls</title>
		<link>https://www.onlinemoderation.com/twitter-throws-bozo-flag-trolls/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=twitter-throws-bozo-flag-trolls</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Tue, 21 Feb 2017 13:42:12 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Trolling]]></category>
		<category><![CDATA[trolls]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1161</guid>

					<description><![CDATA[<p>Twitter Throws the Bozo Flag on Trolls Twitter used to brag that it was “the free speech wing of the free speech party,” but recently the party was crashed by the company’s security team, which launched a protocol aimed at decreasing users’ exposure to abusive content.  It’s both a step forward and a step back.  [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/twitter-throws-bozo-flag-trolls/">Twitter Throws the Bozo Flag on Trolls</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Twitter Throws the Bozo Flag on Trolls</p>
<p>Twitter used to brag that it was “the free speech wing of the free speech party,” but recently the party was crashed by the company’s security team, which launched a <a href="https://blog.twitter.com/2017/an-update-on-safety">protocol</a> aimed at decreasing users’ exposure to abusive content.  It’s both a step forward and a step back.  Forward because it’s effective.  Back because it’s a tactic that’s been around since the 90s.</p>
<p>If a user begins tweeting abusive messages, the protocol causes those tweets to be seen only by the user’s followers.  If a follower retweets the message, it won’t be visible to that follower’s followers.  In the 90s, making posted content visible only to the poster was called “throwing the Bozo flag” or being sent to “banned camp.”  Several of my larger clients still successfully use that tactic.</p>
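<p>In code, the visibility rule described above reduces to a single check.  The sketch below is purely illustrative; Twitter’s actual implementation is not public, and all names here are hypothetical:</p>
<pre><code># Hypothetical sketch of the "bozo flag" visibility rule described
# above; not Twitter's actual implementation.
limited_accounts = {"troll42"}              # accounts under a temporary limit
followers = {"troll42": {"fan1", "fan2"}}

def can_see(author, viewer):
    """A limited account's tweets show only to its direct followers."""
    return author not in limited_accounts or viewer in followers[author]

print(can_see("troll42", "fan1"))      # True: fan1 follows the author
print(can_see("troll42", "stranger"))  # False: hidden from everyone else
</code></pre>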
<p>At Twitter, the <a href="https://twitter.com/PrezzzKalerYo/status/830119103507623936">restriction</a> is temporary, usually lasting 12 hours, and, unlike with Bozo flags of the past, the affected user is notified with a “We’ve temporarily limited some of your account features” notice citing “potentially abusive behavior.”</p>
<p>While there is room for improvement (such as an appeals process for those who feel they weren’t breaking the rules), as well as a sizable “free speech” backlash and accusations that the company is making the changes to become more lucrative to potential buyers, I applaud Twitter’s efforts to reduce trolling.  As Twitter&#8217;s vice president of engineering Ed Ho said, this is one step toward effectively reducing abuse, and the campaign will continue until there is “a significant impact that people can feel.&#8221;  I’m feeling it.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/twitter-throws-bozo-flag-trolls/">Twitter Throws the Bozo Flag on Trolls</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Texas Anti-Harassment Legislation Threatens Lawful Online Interaction</title>
		<link>https://www.onlinemoderation.com/texas-anti-harassment-legislation-threatens-lawful-online-interaction/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=texas-anti-harassment-legislation-threatens-lawful-online-interaction</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Thu, 16 Feb 2017 14:56:24 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Bullying]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1158</guid>

					<description><![CDATA[<p>Texas Anti-Harassment Legislation Threatens Lawful Online Interaction A new bill introduced in the Texas legislature seeks to criminalize cyber-bullying of children in educational settings.  The bill, called “David’s Law” (named after a 16-year-old victim of cyber-bullying who killed himself – there were no charges filed against those accused) would give school district officials more power [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/texas-anti-harassment-legislation-threatens-lawful-online-interaction/">Texas Anti-Harassment Legislation Threatens Lawful Online Interaction</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Texas Anti-Harassment Legislation Threatens Lawful Online Interaction</p>
<p>A new bill introduced in the Texas legislature seeks to criminalize cyber-bullying of children in educational settings.  <a href="http://www.capitol.state.tx.us/tlodocs/85R/billtext/pdf/SB00179I.pdf#navpanes=0">The bill</a>, called “David’s Law” (named after a 16-year-old victim of cyber-bullying who killed himself – there were no charges filed against those accused) would give school district officials more power to discipline, expel, and expose the identities of online harassment suspects.</p>
<p>The bill aims to protect students from communications that infringe upon their rights, but it does not define those rights or how they might be violated.  If a single email “infringes on the rights of the victim at school,” the sender could be disciplined.  If that email results in the recipient’s suicide, the sender could be expelled.</p>
<p>The worst provision, however, is the unmasking of the sender if they are accused of harassment.  The bill authorizes subpoenas to investigate injury claims before a lawsuit is filed.  As a result, if it is determined that no injury took place, the sender is still stamped with the stigma of a harasser or cyber-bully.</p>
<p>Bullying on social media is on the rise and a cause for concern, but anti-harassment policies must be very limited in scope (and rights and remedies narrowly defined) so they do not jeopardize the First Amendment rights of those who engage in lawful interaction.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/texas-anti-harassment-legislation-threatens-lawful-online-interaction/">Texas Anti-Harassment Legislation Threatens Lawful Online Interaction</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Lego Life App Combats Trolls and Bullies</title>
		<link>https://www.onlinemoderation.com/lego-life-app-combats-trolls-bullies/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=lego-life-app-combats-trolls-bullies</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Tue, 14 Feb 2017 14:47:19 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Bullying]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[Trolling]]></category>
		<category><![CDATA[trolls]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1155</guid>

					<description><![CDATA[<p>A new app launched by Lego contains many features that minimize the ability of users to be harassed and bullied.  Called Lego Life, the app for iOS and Android (available in App Store and Google Play) allows kids under 13 to create profiles, watch videos, participate in challenges, upload photos of their projects, search and [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/lego-life-app-combats-trolls-bullies/">Lego Life App Combats Trolls and Bullies</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>A new app launched by Lego contains many features that minimize the ability of users to be harassed and bullied.  Called <a href="https://www.lego.com/en-us/life">Lego Life</a>, the app for iOS and Android (available in App Store and Google Play) allows kids under 13 to create profiles, watch videos, participate in challenges, upload photos of their projects, search and follow their favorites, and post in message boards.</p>
<p>Lego Life’s concern for the safety of its users is evident in many ways: users under 13 must have a parent provide permission by email, profiles are avatars that users create from a list of traits, usernames are randomly generated from a three-word sequence (e.g. ChairmanWilyDolphin), all user-generated content is premoderated, no photos are allowed that contain human faces, and most responses are limited to emojis from a special keyboard or phrases selected from a list.  Users are allowed to use their own words when responding to official Lego content.  A version for the web is in development.</p>
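<p>Generating that style of username is straightforward.  Here is a hypothetical Python sketch; the word lists below are illustrative, and Lego’s actual curated lists are its own:</p>
<pre><code># Hypothetical sketch of three-word username generation in the style
# of ChairmanWilyDolphin; the word lists here are illustrative only.
import random

TITLES = ["Chairman", "Captain", "Professor"]
ADJECTIVES = ["Wily", "Brave", "Curious"]
ANIMALS = ["Dolphin", "Falcon", "Badger"]

def random_username():
    """Pick one word from each curated, kid-safe list and join them."""
    return (random.choice(TITLES) + random.choice(ADJECTIVES)
            + random.choice(ANIMALS))

print(random_username())  # e.g. CaptainCuriousFalcon
</code></pre>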
<p>Lego Life is a win-win for both the company and its customers.  The app increases brand loyalty and keeps kids safe.  When they aren’t using it, they are presumably building new Lego creations.  That’s just fine with Lego Group’s senior director Rob Lowe, who says of kids who use the app, “One of its core purposes is to put their iPhone down and go do something else.”</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/lego-life-app-combats-trolls-bullies/">Lego Life App Combats Trolls and Bullies</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Microsoft Online Moderators Allege Viewing Explicit Images Gave Them PTSD</title>
		<link>https://www.onlinemoderation.com/microsoft-moderators-allege-viewing-explicit-images-gave-ptsd/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=microsoft-moderators-allege-viewing-explicit-images-gave-ptsd</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 30 Jan 2017 15:00:38 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[online moderator]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1149</guid>

					<description><![CDATA[<p>Two Microsoft online moderators have filed suit against the company, saying they were forced to watch child porn and other offensive and disturbing images and videos to the extent that they began to exhibit symptoms of Post-Traumatic Stress Disorder (PTSD).  When they asked for help, they say in a McClatchy DC report, they received negative [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/microsoft-moderators-allege-viewing-explicit-images-gave-ptsd/">Microsoft Online Moderators Allege Viewing Explicit Images Gave Them PTSD</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Two Microsoft online moderators have filed suit against the company, saying they were forced to watch child porn and other offensive and disturbing images and videos to the extent that they began to exhibit symptoms of Post-Traumatic Stress Disorder (PTSD).  When they asked for help, they say in a <a href="http://www.mcclatchydc.com/news/nation-world/national/article125953194.html">McClatchy DC report</a>, they received negative reviews of their performance.</p>
<p>The two, members of the Online Safety Team, allege they could not transfer to another division for 18 months, were not adequately trained, were not provided with the level of counseling they needed, and were denied workers’ compensation for medical leave they took to step away and reduce their PTSD-related symptoms.  The compensation requests were denied because OSHA said their conditions were not an occupational disease.</p>
<p>When contacted by McClatchy, Microsoft responded to the allegations by saying, in part, “The health and safety of our employees who do this difficult work is a top priority. Microsoft works with the input of our employees, mental health professionals and the latest research on robust wellness and resilience programs to ensure those who handle this material have the resources and support they need, including an individual wellness plan.”  The company also pointed to programs such as the Compassion Fatigue Referral Project and its mandatory support sessions for members of the Online Safety Team and the Digital Crimes Unit.</p>
<p>As Microsoft says, “This work is difficult, but critically important to a safer and more trusted internet.”  Mzinga moderators know they may encounter offensive content at any time, and they are trained in the proper responses.  Here are a few of our best practices:</p>
<p>&#8212; Our moderators are warned about the kinds of content they may see and are prepared to handle it</p>
<p>&#8212; Our moderators are trained to rapidly escalate offensive content to the proper law enforcement authorities</p>
<p>&#8212; As soon as a text, picture, or video tips the scales, it is removed immediately, without the moderator having to look at it any further</p>
<p>&#8212; If they feel overwhelmed, moderators can trade shifts with another moderator, consult with the lead moderator or the director about how to handle their feelings, or work on a different project</p>
<p>Removing offensive content is part of keeping the Internet safe.  It is Mzinga’s goal to keep our moderators safe as well.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/microsoft-moderators-allege-viewing-explicit-images-gave-ptsd/">Microsoft Online Moderators Allege Viewing Explicit Images Gave Them PTSD</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Papers in Your Desk Have More Protection than Those in Your Inbox</title>
		<link>https://www.onlinemoderation.com/papers-desk-protection-inbox/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=papers-desk-protection-inbox</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 23 Jan 2017 14:30:15 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[protection]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1145</guid>

					<description><![CDATA[<p>Papers in Your Desk Have More Protection than Those in Your Inbox In 1989, my master’s thesis, in part, argued that the Electronic Communications Privacy Act of 1986 should protect emails and other communications stored on a server in perpetuity, not just those stored for less than 180 days.  I also stressed that stored communications [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/papers-desk-protection-inbox/">Papers in Your Desk Have More Protection than Those in Your Inbox</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Papers in Your Desk Have More Protection than Those in Your Inbox</p>
<p>In 1989, my master’s thesis, in part, argued that the <a href="https://it.ojp.gov/PrivacyLiberty/authorities/statutes/1285">Electronic Communications Privacy Act of 1986</a> should protect emails and other communications stored on a server in perpetuity, not just those stored for less than 180 days.  I also stressed that stored communications should not be accessed by any law enforcement organizations unless a criminal warrant (and not just a subpoena) is secured.  Unfortunately, though there has been progress, the recommendations have not been adopted.</p>
<p>The recently introduced <a href="http://docs.house.gov/billsthisweek/20160425/HR699.pdf">Email Privacy Act</a> goes a long way toward bringing these standards into effect, especially now that so much data is stored on services like YouTube, Dropbox, and Facebook.  The Email Privacy Act, which requires a probable-cause warrant for all digital communications held on cloud servers no matter how old they are, was passed by the House of Representatives by an overwhelming <a href="https://www.eff.org/deeplinks/2016/04/house-advances-email-privacy-act-setting-stage-vital-privacy-reform">majority</a> last year, but it stalled in the Senate and was eventually withdrawn when amendments to weaken it were introduced.</p>
<p>While some groups, such as the Electronic Frontier Foundation, would like the government to be required to secure a warrant for obtaining geolocation information as well, I believe that this provision would prevent law enforcement from stopping trolls who post threats using mobile devices.  Obtaining a subpoena is a sufficient safeguard.</p>
<p>In 2017, it is time to bring the privacy of electronic communications into the present, offering protection that is in line with fundamental and current needs.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/papers-desk-protection-inbox/">Papers in Your Desk Have More Protection than Those in Your Inbox</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>GitHub’s “Contributor Covenant” Makes Waves; Curbs Online Abuse</title>
		<link>https://www.onlinemoderation.com/githubs-contributor-covenant-makes-waves-curbs-online-abuse/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=githubs-contributor-covenant-makes-waves-curbs-online-abuse</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Tue, 17 Jan 2017 14:16:48 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Bullying]]></category>
		<category><![CDATA[Content Moderation]]></category>
		<category><![CDATA[Terms of Service]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1141</guid>

					<description><![CDATA[<p>GitHub’s “Contributor Covenant” Makes Waves; Curbs Online Abuse Over a year ago, I wrote about GitHub’s issues with bullying and discrimination, which came to a head when a female developer quit the collaborative coding hub – a victim of gender-based harassment by white male managers and co-workers.  The highly-publicized move eventually led to the resignation [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/githubs-contributor-covenant-makes-waves-curbs-online-abuse/">GitHub’s “Contributor Covenant” Makes Waves; Curbs Online Abuse</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>GitHub’s “Contributor Covenant” Makes Waves; Curbs Online Abuse</p>
<p>Over a year ago, I wrote about GitHub’s <a href="http://money.cnn.com/2014/03/17/technology/github-sexual-harassment/">issues with bullying and discrimination</a>, which came to a head when a female developer quit the collaborative coding hub – a victim of gender-based harassment by white male managers and co-workers.  The highly-publicized move eventually led to the resignation of GitHub’s CEO.</p>
<p>As a result, GitHub has made <a href="http://fusion.net/story/369325/how-to-stop-online-harassment/">several major changes</a>.  First, they hired Nicole Sanchez as the company’s VP of Social Impact.  Sanchez formalized GitHub’s organization (previously there had been no designated managers or job titles), made it easier for employee issues to be addressed, and announced that workplace diversity would be acknowledged and celebrated.</p>
<p>Sanchez also hired two transgender community managers: February Keeney as head of the Community and Safety team tasked with eliminating workplace harassment, and Coraline Ada Ehmke, a senior engineer and creator of the “Contributor Covenant,” a code of conduct that had been loosely adopted by several project teams.</p>
<p>Sanchez, Keeney, and Ehmke found, however, that their institutional changes weren’t welcome at all levels.  Several groups of free speech advocates resisted being told that the terms of the Contributor Covenant would now be applied more widely.  They retaliated by using the software’s tagging feature to connect Ehmke with fake projects that had racist names, a tactic they had also used on the developer who had initially exposed the harassment in 2014.</p>
<p>One of the first tasks of the Community and Safety team was to build “consent and intent” into the software.  Now, you cannot tag a coder on a project without their approval.  And late last year, the Contributor Covenant was updated with conduct guidelines that prohibit doxxing, bullying, and discrimination, alongside a wider range of moderation tools.</p>
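<p>Consent-gated tagging can be pictured as a two-step handshake.  The sketch below is hypothetical (GitHub has not published its implementation) and uses illustrative names throughout:</p>
<pre><code># Hypothetical sketch of consent-gated tagging; GitHub's actual
# "consent and intent" implementation is not public.
pending = set()    # (user, project) pairs awaiting approval
approved = set()   # tags the tagged user has accepted

def request_tag(user, project):
    """A tag starts as a pending request with no public effect."""
    pending.add((user, project))

def approve_tag(user, project):
    """The tag takes effect only when the tagged user approves it."""
    if (user, project) in pending:
        pending.discard((user, project))
        approved.add((user, project))

request_tag("coraline", "some-project")
print(("coraline", "some-project") in approved)  # False until approved
approve_tag("coraline", "some-project")
print(("coraline", "some-project") in approved)  # True
</code></pre>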
<p>While there is still resistance from a few groups, Sanchez and her team have moved the company in the right direction: making the platform less prone to abuse, developing a policy that is explicit and embraces civility, diversity, and inclusion, and involving the community members in its enforcement.  As she said at a <a href="https://recompilermag.com/2016/08/26/open-source-feelings-real-world-examples-real-world-impact/">recent conference</a>, “Diversity is coming to your party despite my bad experiences at other parties. Inclusion is being glad I came.”</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/githubs-contributor-covenant-makes-waves-curbs-online-abuse/">GitHub’s “Contributor Covenant” Makes Waves; Curbs Online Abuse</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Riot Games’ Tribunal System Reduces Abuse on Its Platform</title>
		<link>https://www.onlinemoderation.com/riot-games-tribunal-system-reduces-abuse-on-its-platform/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=riot-games-tribunal-system-reduces-abuse-on-its-platform</link>
		
		<dc:creator><![CDATA[Mzinga Moderators]]></dc:creator>
		<pubDate>Mon, 09 Jan 2017 15:33:11 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Terms of Service]]></category>
		<guid isPermaLink="false">http://onlinemoderation.com/?p=1137</guid>

					<description><![CDATA[<p>Riot Games’ Tribunal System Reduces Abuse on Its Platform Riot Games, producer of the PC-based multiplayer League of Legends (LoL), has been selected as Inc. magazine’s Company of the Year.  One reason LoL is the most popular PC game in North America and Europe is the company’s success at handling abuse, through [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/riot-games-tribunal-system-reduces-abuse-on-its-platform/">Riot Games’ Tribunal System Reduces Abuse on Its Platform</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Riot Games’ Tribunal System Reduces Abuse on Its Platform</p>
<p>Riot Games, producer of the PC-based multiplayer League of Legends (LoL), has been selected as Inc. magazine’s <a href="http://www.inc.com/magazine/201612/burt-helm-lindsay-blakely/company-of-the-year-riot-games.html">Company of the Year</a>.  One reason LoL is the most popular PC game in North America and Europe is the company’s success at handling abuse through its <a href="http://na.leagueoflegends.com/legal/tribunal">Tribunal</a> system of rule enforcement, which not only penalizes toxic players but also rewards those who have a pattern of positive behavior.</p>
<p>The Riot Games Tribunal is made up of players with exemplary conduct who review and vote on reported violations of the “<a href="http://gameinfo.na.leagueoflegends.com/en/game-info/get-started/summoners-code/">Summoner’s Code</a>” of Conduct.  If the conduct warrants it, a case is opened.  Tribunal members then vote on the appropriate action against the user: to “Punish” them, to “Pardon” them, or to “Skip” the case.  When twenty votes are received, the case is closed and the member receives a detailed report on the decision.  Punishment ranges from a warning, to a one-day ban, to permanent expulsion.</p>
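<p>The mechanics reduce to a simple tally.  Here is a hypothetical Python sketch of that flow; the twenty-vote threshold comes from the description above, while the majority rule is an assumption, since Riot’s exact weighting is not public:</p>
<pre><code># Hypothetical sketch of the Tribunal flow: a case closes at twenty
# votes; assuming here that the most common vote decides the outcome.
from collections import Counter

VOTES_TO_CLOSE = 20

def tribunal_outcome(votes):
    """votes is a list of 'punish', 'pardon', or 'skip' strings."""
    if len(votes) >= VOTES_TO_CLOSE:
        tally = Counter(votes)
        return tally.most_common(1)[0][0]
    return "case still open"

votes = ["punish"] * 12 + ["pardon"] * 5 + ["skip"] * 3
print(tribunal_outcome(votes))  # punish
</code></pre>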
<p>The most positive aspects of the Tribunal system are that a player is notified within hours of its decision, and in the report, they are told what infraction triggered the opening of a case.  As reported by Christine Porath in her book “<a href="https://www.amazon.com/Mastering-Civility-Manifesto-Christine-Porath/dp/1455568988">Mastering Civility</a>,” after 100 million votes, verbal abuse is 40% lower and 91.6% of reported members never receive another violation report.</p>
<p>The takeaway for community managers and moderators is to empower your community members: allow members to report the abuse they witness, allow an empowered team of members (the Tribunal) to quickly vote on the violation report and let the member know the outcome, and tell the member what, if anything, they did wrong.</p>
<p>Of course, this system only works on platforms with an extremely large member base.  Smaller ones still need a team of experienced moderators to vet abuse reports and take action against toxic community members.</p>
<p>The post <a rel="nofollow" href="https://www.onlinemoderation.com/riot-games-tribunal-system-reduces-abuse-on-its-platform/">Riot Games’ Tribunal System Reduces Abuse on Its Platform</a> appeared first on <a rel="nofollow" href="https://www.onlinemoderation.com">Online Moderation</a>.</p>
]]></content:encoded>
	</item>
	</channel>
</rss>
