On January 1, 2017, the National Labor Relations Board (NLRB) Office of the General Counsel released an advice memorandum (dated September 22, 2016) reviewing the social media policy in Northwestern University’s revised Football Handbook.  The memorandum contains valuable guidance in an area full of uncertainty, as the NLRB has struck down seemingly common-sense social media policies because of their potential to chill employees’ rights under Section 7 of the National Labor Relations Act (NLRA) to engage in “concerted protected activities.”  Section 8 of the NLRA prohibits employers from restraining employees from exercising their Section 7 rights.

According to the memorandum, Northwestern voluntarily revised its Football Handbook after receiving a charge alleging that the handbook violated the NLRA.  The advice memorandum reviewed the revised handbook for compliance with the NLRA.  Assuming for the purpose of the review that Northwestern’s football players are “employees” under the NLRA, the advice memorandum concluded that the revised social media policy passed muster.

The memorandum reprinted the original language of the policies along with the revisions in redline, as follows (deleted language in strikeout and new language in bold):

[W]e are concerned about… protecting the image and reputation of Northwestern University and its Department of Athletics and Recreation. . . .

Publicly posted information on social networking websites ~~can be seen~~ **may be regularly monitored** by ~~any person with a smart phone or internet access, including individuals~~ **a number of sources** within Northwestern University (e.g., Athletics Department, Student Affairs, University Police). . . .

Northwestern student-athletes should be very careful when using online social networking sites and keep in mind that sanctions may be imposed if these sites are used improperly or depict ~~inappropriate, embarrassing~~ **harassing, unlawful** or dangerous behaviors **such as full or partial nudity (of yourself or another), sex, racial or sexual epithets, underage drinking, drugs, weapons or firearms, hazing, harassment, unlawful activity** or any content that violates Northwestern University, Athletics Department or student-athlete codes of conduct and/or state or federal laws.

….

Do not post any information, photos or other items online that **contain full or partial nudity (of yourself or another), sex, racial or sexual epithets, underage drinking, drugs, weapons or firearms, hazing, harassment or unlawful activity** ~~could embarrass you, your family, your team, the Athletics Department or Northwestern University~~.

Although the advice memorandum did not elaborate on why the original policy could violate the NLRA while the revised policy would not, it provides important clues on drafting lawful social media policies.  The modifications generally replaced vague terms like “inappropriate” and “embarrassing” with specific descriptions of the content that the policy prohibits.  For example, the revised policy specifically prohibits social media posts depicting “nudity,” “racial or sexual epithets,” and “underage drinking,” among other things.  The revised policy also eliminated protection of the employer’s “image and reputation” from the description of the policy’s purpose.  In previous guidance, the NLRB has determined that employers may not require employees to refrain from activity that generally damages the employer’s reputation, because such a restriction could be construed to prohibit “concerted protected activity” such as criticism of work conditions or compensation policies.

The recent advice memorandum reinforces the need to be precise when drafting a social media policy.  Experienced counsel can assist in identifying the types of social media content that the NLRB has allowed employers to prohibit employees from posting.

Say you’re the president of Diamond Staffing Services. One morning, your phone is flooded with Twitter notifications. A few taps lead you to the source of the buzz: Someone opened a Twitter account parodying your company’s name and tweeted: “Work for Diamond? Pregnant = fired. We’re Diamond – we don’t care, LOL!” The tweet links to your company’s official Twitter account. Livid, you instruct your attorney to file a defamation lawsuit. Not so fast, your attorney says. First, you need to know who you’re suing, and the Twitter account was probably opened using fake information. What do you do?

This scenario is becoming more common as disgruntled employees and customers take to social media sites to air their grievances. Such users often post anonymously, and they have a First Amendment right to do so. To discover the identity of anonymous users, one must overcome First Amendment protections for anonymous speech.

A recent case illustrates the challenges of suing for defamation based on anonymous online statements. In Music Group Macao Commercial Offshore Ltd v. Does, 2015 WL 75073 (N.D. Cal. Mar. 2, 2015), a Washington-based company (Music Group) alleged that the defendants used anonymous Twitter accounts to defame the company and its CEO. Among other things, the anonymous users tweeted that Music Group “designs its products to break in 3-6 months,” “encourages domestic violence and misogyny,” and that its CEO “engages with prostitutes.” Music Group originally subpoenaed Twitter in Washington to reveal “the name, address, email address and any proxy address” of the owners of the accounts. Twitter, which is based in San Francisco, did not agree to have a court in Washington decide whether it had to comply with the subpoenas. Music Group then filed a miscellaneous proceeding in the district court for the Northern District of California to enforce the subpoenas.

The district court initially granted Music Group’s motion to enforce the subpoena, but after reviewing an amicus brief filed by Public Citizen, Inc. (a public interest law firm), the court corrected its order and denied the motion. The court first took stock of the various tests used by courts in analyzing First Amendment protection of anonymous online speech. The court chose to apply a test that focuses on the nature of the speech. Under that test, a party seeking to discover the identity of an anonymous speaker must first persuade the court that there is a “real evidentiary basis” for believing that the defendant has engaged in wrongful conduct that has caused real harm to the plaintiff’s interests. If the plaintiff makes this showing, then the court must weigh the harm to the plaintiff caused by allowing the speaker to remain anonymous against the harm to the speaker’s interest in anonymity.

The court ruled that the tweet stating that Music Group “designs its products to break in 3-6 months” was legitimate commercial criticism, which is protected by the First Amendment. The tweet directed at Music Group’s CEO personally could not support a defamation claim brought by Music Group. The tweet alleging that Music Group “encourages domestic violence and misogyny” could be defamatory, the court noted, but there was more to it than just the words. The tweet linked to a video commercial promoting an audio mixer sold by Music Group. The commercial shows a man using the audio mixer to rebuff a woman’s demands that he stop working and come with her to a social function. The video was comedic in nature. Understood in context, the tweet was “joking and ironic” and did not “fall outside the First Amendment for being in poor taste,” the court wrote. The court ultimately decided that the balance of harms did not justify enforcing the subpoenas.

Music Group highlights some of the questions one should ask before launching into a lawsuit against an anonymous online poster:

  1. Do I have legitimate claims? You’ll need some evidence to support your claims to overcome the speaker’s First Amendment right to anonymity.
  2. Where do I find the identifying information? Typically, you’ll need to ask the owner of the website where the offending comments were posted. Sometimes that’s not enough because the user might have set up the account using a fake name and email address. In that case, you need to get other identifying information like the IP address of the user, determine the Internet Service Provider (ISP) associated with that IP address, and ask the ISP to disclose the user’s account information.
  3. How do I get the identifying information? A subpoena is typically the tool of choice. The rules governing subpoenas can be highly technical, so consulting an attorney is advisable. For example, in Music Group, Twitter, which is based in San Francisco, refused to comply with an order enforcing the subpoena issued by a Washington court. The plaintiffs in the case had to open a special proceeding in California to enforce the subpoena.

Working through these questions will help you determine if it’s worth suing an anonymous online speaker.

Employees can get carried away on social media. US Airways learned this the hard way when its employee responded to a customer complaint on Twitter with an obscene picture of a woman and a toy jet. An apology and deletion of the tweet followed an hour later (an eternity in cyberspace). US Airways claims its employee made an “honest mistake,” and the incident has not spawned a lawsuit, but one can imagine situations in which the malicious online statements of an employee land the employer in legal trouble.

So what’s an employer to do? Thankfully, employers can find some solace in Section 230 of the federal Communications Decency Act (“CDA”), as a recent Indiana case illustrates. In Miller v. Federal Express Corp., an employee of a non-profit organization, 500 Festival, Inc. (“500 Festival”), and an employee of FedEx separately posted comments on media websites criticizing the plaintiff’s leadership of Junior Achievement of Central Indiana, which he ran from 1994 to 2008. Although the employees posted the comments using aliases, the plaintiff traced the comments back to IP addresses assigned to 500 Festival and FedEx and sued them for defamation.

The Indiana Court of Appeals affirmed the trial court’s dismissal of the defamation claims against 500 Festival and FedEx based on Section 230 of the CDA. Congress passed Section 230 to protect companies that serve as intermediaries for online speech from liability for harmful content posted by third parties. A defendant claiming Section 230 immunity must show that: (1) it is a provider or user of an interactive computer service; (2) the plaintiff’s claim treats it as the publisher or speaker of information; and (3) the information at issue was provided by another information content provider. Satisfying these three elements immunizes the defendant from suit, although the author of the offensive content could still be held liable.

It’s not difficult to see how Section 230 applies where, for instance, the operator of an online discussion forum is sued for defamation based on a comment posted by a forum member. The operator easily qualifies as an “interactive computer service” and can argue it is not liable for content that someone else published. But could a corporate employer qualify for Section 230 immunity? The court in Miller said yes, siding with precedent set by California and Illinois courts. An employer that provides multiple users on its computer network with access to the Internet qualifies as a provider of an interactive computer service. Since the defamation claims sought to hold 500 Festival and FedEx liable for allegedly publishing statements made by their employees, Section 230 barred the claims.

Controlling what employees say online can be a daunting task, but it’s nice to know that employers have some protection from legal liability for the “honest” (or not so honest) mistakes of employees.


FC 250 Grand Marshal, Paula Deen (Photo credit: Bristol Motor Speedway & Dragway)

Lisa Jackson’s discrimination and sexual harassment lawsuit against Paula Deen settled last Friday, but not before Deen tried to remove Jackson’s attorney for publicly disparaging her on social media.  A court order filed hours before the settlement reveals that in March, Deen’s lawyers filed a motion for sanctions against Matthew C. Billips, the lawyer who represented Jackson.  The motion alleges that Billips made offensive remarks about Deen on Twitter.  Some of the more eyebrow-raising tweets included:

“I’ve been doing Paula Deen, in a strongly metaphorical sense”

“I plan on undressing [Deen]” (in reference to an upcoming deposition of Deen)

“Now talk about fun, suing Paula Deen is a hoot!”

In another Twitter conversation about Deen’s diabetes, Billips allegedly referred to Deen’s food with the hashtag #buttercoatedbuttercookies.

Based on Billips’ tweets and his discovery practices, Deen’s lawyers asked the court to disqualify him from continuing to represent Jackson.  As the August 23 court order shows, the judge declined to disqualify Billips but was open to imposing some form of sanctions against him.  Although Deen’s lawyers attempted to withdraw their sanctions motion in light of the settlement, the judge indicated that the settlement will not stop the court from sanctioning Billips.  Billips has 20 days from Friday to show why he should not be sanctioned.

This is a cautionary tale that teaches litigants (and their attorneys) not to discuss pending cases on social media.  Posts on social networks like Facebook and Twitter can be publicly accessible, are potentially discoverable, and can be the basis for a defamation lawsuit.  There’s little to be gained and much to lose by talking about a lawsuit online.  For that reason, lawyers now commonly instruct their clients in their retainer agreements not to discuss the case with anyone on social media, even family and friends.  Lawyers would do well to follow their own advice.


Google acted as a “publisher” for CDA purposes for including third-party content in search results

Mmubango v. Google, Inc., 2013 WL 664231 (E.D. Pa. Feb. 22, 2013)

Google successfully obtained dismissal of a defamation lawsuit filed by a person (Mmubango) who found derogatory comments about him posted online.  Mmubango discovered anonymous statements about himself on the “Wikiscams” website.  Mmubango asked Google to remove the statements from its search engine and to give him information about the poster of the comments.  Google refused.

Mmubango sued Google and others for defamation, and Google defended by moving to dismiss the claim based on Communications Decency Act (CDA) immunity.  The federal district court for the Eastern District of Pennsylvania agreed that Google met the requirements for CDA immunity.  First, Google is an interactive computer service provider.  Second, Google did not author the allegedly defamatory content but instead was provided with it by another information content provider (i.e., Wikiscams).  Third, the defamation claim, which alleged that Google was liable for storing and broadcasting the derogatory comments about Mmubango, sought to treat Google as the publisher of third-party statements.  Deciding whether to provide access to third-party content or, alternatively, to delete the content is an act of publishing.  Under Section 230 of the CDA, Google could not be held liable for defamation based on its decision to publish a third party’s statements.  The court dismissed Google from the case.