Say you’re the president of Diamond Staffing Services. One morning, your phone is flooded with Twitter notifications. A few taps lead you to the source of the buzz: Someone opened a Twitter account parodying your company’s name and tweeted: “Work for Diamond? Pregnant = fired. We’re Diamond – we don’t care, LOL!” The tweet links to your company’s official Twitter account. Livid, you instruct your attorney to file a defamation lawsuit. Not so fast, your attorney says. First, you need to know who you’re suing, and the Twitter account was probably opened using fake information. What do you do?
This scenario is becoming more common as disgruntled employees and customers take to social media sites to air their grievances. Such users often post anonymously, and they have a First Amendment right to do so. To discover the identity of anonymous users, one must overcome First Amendment protections for anonymous speech.
A recent case illustrates the challenges of suing for defamation based on anonymous online statements. In Music Group Macao Commercial Offshore Ltd v. Does, 2015 WL 75073 (N.D. Cal. Mar. 2, 2015), a Washington-based company (Music Group) alleged that the defendants used anonymous Twitter accounts to defame the company and its CEO. Among other things, the anonymous users tweeted that Music Group “designs its products to break in 3-6 months,” “encourages domestic violence and misogyny,” and that its CEO “engages with prostitutes.” Music Group originally subpoenaed Twitter in Washington to reveal “the name, address, email address and any proxy address” of the owners of the accounts. Twitter, which is based in San Francisco, did not agree to have a court in Washington decide whether it had to comply with the subpoenas. Music Group then filed a miscellaneous proceeding in the district court in the Northern District of California to enforce the subpoenas.
The district court initially granted Music Group’s motion to enforce the subpoena, but after reviewing an amicus brief filed by Public Citizen, Inc. (a public interest law firm), the court corrected its order and denied the motion. The court first took stock of the various tests used by courts in analyzing First Amendment protection of anonymous online speech. The court chose to apply a test that focuses on the nature of the speech. Under that test, a party seeking to discover the identity of an anonymous speaker must first persuade the court that there is a “real evidentiary basis” for believing that the defendant has engaged in wrongful conduct that has caused real harm to the plaintiff’s interests. If the plaintiff makes this showing, then the court must weigh the harm to the plaintiff caused by allowing the speaker to remain anonymous against the harm to the speaker’s interests in anonymity.
The court ruled that the tweet stating that Music Group “designs its products to break in 3-6 months” was legitimate commercial criticism, which is protected by the First Amendment. The tweet directed at Music Group’s CEO personally could not support a defamation claim brought by Music Group. The tweet alleging that Music Group “encourages domestic violence and misogyny” could be defamatory, the court noted, but there was more to it than just the words. The tweet linked to a video commercial promoting an audio mixer sold by Music Group. The commercial shows a man using the audio mixer to rebuff a woman’s demands that he stop working and come with her to a social function. The video was comedic in nature. Understood in context, the tweet was “joking and ironic” and did not “fall outside the First Amendment for being in poor taste,” the court wrote. The court ultimately decided that the balance of harms did not justify enforcing the subpoenas.
Music Group highlights some of the questions one should ask before launching into a lawsuit against an anonymous online poster:
- Do I have legitimate claims? You’ll need some evidence to support your claims to overcome the speaker’s First Amendment right to anonymity.
- Where do I find the identifying information? Typically, you’ll need to ask the owner of the website where the offending comments were posted. Sometimes that’s not enough because the user might have set up the account using a fake name and email address. In that case, you need to get other identifying information like the IP address of the user, determine the Internet Service Provider (ISP) associated with that IP address, and ask the ISP to disclose the user’s account information.
- How do I get the identifying information? A subpoena is typically the tool of choice. The rules governing subpoenas can be highly technical, so consulting an attorney is advisable. For example, in Music Group, Twitter, which is based in San Francisco, would not agree to have a Washington court decide whether it had to comply with the subpoenas. The plaintiffs in the case had to open a special proceeding in California to enforce them.
Working through these questions will help you determine if it’s worth suing an anonymous online speaker.
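On the technical side of the second question, a reverse DNS lookup is one quick, informal way to get a hint about which ISP controls a given IP address (the definitive answer still comes from the ISP itself, via subpoena). Here is a minimal Python sketch; the IP addresses are placeholders for illustration, not taken from any case discussed above:

```python
import socket
from typing import Optional

def reverse_dns(ip: str) -> Optional[str]:
    """Attempt a reverse DNS (PTR) lookup for an IP address.

    The returned hostname often contains the name of the ISP or
    hosting company that controls the address block. Returns None
    if the lookup fails or no PTR record exists.
    """
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except (socket.herror, socket.gaierror, OSError):
        return None

# Example: an IP address you might find in a website's access logs
# (placeholder value; a real investigation would use the address
# disclosed by the website operator)
print(reverse_dns("127.0.0.1"))
```

Keep in mind that this only suggests whom to subpoena next; connecting the IP address to an actual subscriber account requires the ISP’s records, which it will generally release only in response to legal process.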
Employees can get carried away on social media. US Airways learned this the hard way when its employee responded to a customer complaint on Twitter with an obscene picture of a woman and a toy jet. An apology and deletion of the tweet followed an hour later (an eternity in cyberspace). US Airways claimed its employee made an “honest mistake,” and the incident has not spawned a lawsuit, but one can imagine situations in which the malicious online statements of an employee land the employer in legal trouble.
So what’s an employer to do? Thankfully, employers can find some solace in Section 230 of the federal Communications Decency Act (“CDA”), as a recent Indiana case illustrates. In Miller v. Federal Express Corp., an employee of a non-profit organization, 500 Festival, Inc. (“500 Festival”), and an employee of FedEx separately posted comments on media websites criticizing the plaintiff’s leadership of Junior Achievement of Central Indiana, which he ran from 1994 to 2008. Although the employees posted the comments using aliases, the plaintiff traced the comments back to IP addresses assigned to 500 Festival and FedEx and sued them for defamation.
The Indiana Court of Appeals affirmed the trial court’s dismissal of the defamation claims against 500 Festival and FedEx based on Section 230 of the CDA. Congress passed Section 230 to protect companies that serve as intermediaries for online speech from liability for harmful content posted by third parties. A defendant claiming Section 230 immunity must show that: (1) it is a provider or user of an interactive computer service; (2) the plaintiff’s claim treats it as the publisher or speaker of information; and (3) the information at issue was provided by another information content provider. Satisfying these three elements immunizes the defendant from suit, although the author of the offensive content could still be held liable.
It’s not difficult to see how Section 230 applies where, for instance, the operator of an online discussion forum is sued for defamation based on a comment posted by a forum member. The operator easily qualifies as an “interactive computer service” and can argue it is not liable for content that someone else published. But could a corporate employer qualify for Section 230 immunity? The court in Miller said yes, siding with precedent set by California and Illinois courts. An employer that provides multiple users on a computer network with access to the Internet qualifies as a provider of an interactive computer service. Since the defamation claims tried to hold 500 Festival and FedEx liable for allegedly publishing statements made by their employees, Section 230 barred the claims.
Controlling what employees say online can be a daunting task, but it’s nice to know that employers have some protection from legal liability for the “honest” (or not so honest) mistakes of employees.
FC 250 Grand Marshal, Paula Deen (Photo credit: Bristol Motor Speedway & Dragway)
Lisa Jackson’s discrimination and sexual harassment lawsuit against Paula Deen settled last Friday, but not before Deen tried to remove Jackson’s attorney for publicly disparaging her on social media. A court order filed hours before the settlement reveals that in March, Deen’s lawyers filed a motion for sanctions against Matthew C. Billips, the lawyer who represented Jackson (read the motion here). The motion alleges that Billips made offensive remarks about Deen on Twitter. Some of the more eyebrow-raising tweets included:
“I’ve been doing Paula Deen, in a strongly metaphorical sense”
“I plan on undressing [Deen]” (in reference to an upcoming deposition of Deen)
“Now talk about fun, suing Paula Deen is a hoot!”
In another Twitter conversation about Deen’s diabetes, Billips allegedly referred to Deen’s food with the hashtag #buttercoatedbuttercookies.
Based on Billips’ tweets and his discovery practices, Deen’s lawyers asked the court to disqualify him from continuing to represent Jackson. As the August 23 court order shows, the judge declined to disqualify Billips but was open to imposing some form of sanctions against him. The judge indicated that the settlement will not stop the court from sanctioning Billips, despite Deen’s lawyers’ attempt to withdraw their sanctions motion in light of the settlement. Billips has 20 days from Friday to show why he should not be sanctioned.
This cautionary tale teaches litigants (and their attorneys) not to discuss pending cases on social media. Posts on social networks like Facebook and Twitter can be publicly accessible, are potentially discoverable, and can be the basis for a defamation lawsuit. There’s little to be gained and much to lose by talking about a lawsuit online. For that reason, lawyers now commonly instruct their clients in their retainer agreements not to discuss the case with anyone on social media, even family and friends. Lawyers would do well to follow their own advice.
Google acted as a “publisher” for CDA purposes for including third-party content in search results — Mmubango v. Google, Inc., 2013 WL 664231 (E.D. Pa. Feb. 22, 2013)
Google successfully obtained dismissal of a defamation lawsuit filed by a person (Mmubango) who discovered anonymous, derogatory statements about himself on the “Wikiscams” website. Mmubango asked Google to remove the statements from its search engine and to give him information about the poster of the comments. Google refused.
Mmubango sued Google and others for defamation, and Google defended by moving to dismiss the claim based on Communications Decency Act (CDA) immunity. The federal district court for the Eastern District of Pennsylvania agreed that Google met the requirements for CDA immunity. First, Google is an interactive computer service provider. Second, Google did not author the allegedly defamatory content; it was provided by another information content provider (i.e., Wikiscams). Third, Mmubango was seeking to treat Google as the publisher of third-party statements: the defamation claim alleged that Google was liable for storing and broadcasting the derogatory comments about him. Deciding whether to provide access to third-party content or, alternatively, to delete the content is an act of publishing. Under Section 230 of the CDA, Google could not be held liable for defamation based on its decision to publish a third party’s statements. The court dismissed Google from the case.
Reliance on Communications Decency Act immunity does not convert a lawsuit into a federal case — Sulla v. Horowitz, Civ. No. 12-00449 (D. Haw. Oct. 4, 2012)
“You wanna make a federal case out of it?” Well, maybe you do, but as the federal district court of Hawaii recently explained, citing a federal statute and arguing that the Internet is involved won’t cut it.
The plaintiff (Sulla) was counsel to a party that foreclosed on property held by a nonprofit corporation that one of the defendants (Horowitz) owned. Sulla alleged that Horowitz and his co-defendant began publishing defamatory statements about Sulla over the Internet, including through a website with a domain name bearing Sulla’s name. Sulla sued Horowitz for defamation in state court, but the defendants removed the case to federal court, apparently based on diversity jurisdiction (i.e., all the plaintiffs are citizens of a different state than all the defendants). Noting that all the parties might be citizens of Hawaii, the federal court issued an order to show cause why the action should not be sent back to state court. The defendants’ briefing on the jurisdictional issue did not allay the court’s concerns.
The defendants argued that the court actually had federal question jurisdiction. The basis for their argument? Because the allegedly defamatory statements were published on the Internet, defendants argued, the court has exclusive federal jurisdiction over the case based on the Communications Decency Act (CDA). The court disagreed, giving a quick lesson on how the CDA and federal question jurisdiction work.
First, “Section 230 [of the CDA] does not shield persons from liability for defamatory statements that they make via the internet.” Second, whether CDA immunity applies is irrelevant to the analysis of federal jurisdiction. The court cited the basic rule that federal question jurisdiction cannot arise out of a defense (as compared to a claim) based in federal law. CDA immunity is a defense, so the possibility that the CDA might protect the defendants from liability for defamation did not convert the lawsuit into a “federal case.”
The court also found the defendants’ other arguments for federal jurisdiction unpersuasive and sent the case back to state court.