Have you ever been tempted to delete a social media message you posted that exposes you or your company to liability? That post that seemed like a harmless joke but now could turn into evidence in a wrongful termination lawsuit. Or that photo that could cast you in an unflattering light. If it ever crossed your mind that no one would notice if you simply pressed the “delete” button, here’s a case illustrating why succumbing to the temptation doesn’t end well.
In Crowe v. Marquette Transportation Company, Gulf-Inland, LLC, 2015 WL 254633 (E.D. La. Jan. 20, 2015), Brannon Crowe sued his employer, Marquette, for injuries he sustained due to an accident that allegedly occurred at work. Marquette discovered a Facebook message Crowe had allegedly sent to a co-worker in which he admitted injuring himself while fishing. This prompted Marquette’s lawyers to serve Crowe with a discovery request for a complete copy of Crowe’s Facebook history.
Crowe’s response to the request was that he didn’t “presently” have a Facebook account. When confronted in his deposition with a printout of a Facebook message that appeared to have been sent from an account with the username “Brannon CroWe,” Crowe claimed that he stopped having a Facebook account around October 2014, and that his account had been hacked. To substantiate his hacking claim, Crowe pointed out rather unconvincingly that, unlike the username on the printout, there’s no capital “W” in his name.
Crowe wasn’t entirely forthcoming. Although Crowe was technically correct that he didn’t have an active Facebook account when he responded to the request in December 2014, the truth was that Crowe deactivated his Facebook account four days after receiving the discovery request in October 2014. To make things worse for Crowe, data in a deactivated Facebook account isn’t deleted. A deactivated Facebook account can be reactivated at any time. Needless to say, the court was displeased with Crowe’s attempts to evade discovery. The court ordered Crowe to provide Marquette with his entire Facebook account history and the login information for all his Facebook accounts.
Although Crowe involved an employee who tried to hide unhelpful social media information, the lessons from the case apply equally to employers. Deactivating a social media account doesn’t necessarily shield information in the account from discovery because the information is probably still available. Deleting a social media account also doesn’t always mean the information in the account is gone forever. It’s not unusual for social media providers to store deleted user data on their servers for a period of time before permanently deleting it. And even if social media information is truly deleted, that in itself can be problematic. A person (or company) has a duty to preserve evidence that’s relevant to reasonably anticipated litigation. Violating the duty to preserve can lead to unpleasant consequences, including court sanctions.
Learn from Crowe’s example. The next time you’re tempted to dispose of an incriminating Facebook post, deactivate the temptation, not your Facebook account.
Say you’re the president of Diamond Staffing Services. One morning, your phone is flooded with Twitter notifications. A few taps lead you to the source of the buzz: Someone opened a Twitter account parodying your company’s name and tweeted: “Work for Diamond? Pregnant = fired. We’re Diamond – we don’t care, LOL!” The tweet links to your company’s official Twitter account. Livid, you instruct your attorney to file a defamation lawsuit. Not so fast, your attorney says. First, you need to know who you’re suing, and the Twitter account was probably opened using fake information. What do you do?
This scenario is becoming more common as disgruntled employees and customers take to social media sites to air their grievances. Such users often post anonymously, and they have a First Amendment right to do so. To discover the identity of anonymous users, one must overcome First Amendment protections for anonymous speech.
A recent case illustrates the challenges of suing for defamation based on anonymous online statements. In Music Group Macao Commercial Offshore Ltd v. Does, 2015 WL 75073 (N.D. Cal. Mar. 2, 2015), a Washington-based company (Music Group) alleged that the defendants used anonymous Twitter accounts to defame the company and its CEO. Among other things, the anonymous users tweeted that Music Group “designs its products to break in 3-6 months,” “encourages domestic violence and misogyny,” and that its CEO “engages with prostitutes.” Music Group originally subpoenaed Twitter in Washington to reveal “the name, address, email address and any proxy address” of the owners of the accounts. Twitter, which is based in San Francisco, did not agree to have a court in Washington decide whether it had to comply with the subpoenas. Music Group then filed a miscellaneous proceeding in the district court in the Northern District of California to enforce the subpoenas.
The district court initially granted Music Group’s motion to enforce the subpoena, but after reviewing an amicus brief filed by Public Citizen, Inc. (a public interest law firm), the court corrected its order and denied the motion. The court first took stock of the various tests used by courts in analyzing First Amendment protection of anonymous online speech. The court chose to apply a test that focuses on the nature of the speech. Under that test, a party seeking to discover the identity of an anonymous speaker must first persuade the court that there is a “real evidentiary basis” for believing that the defendant has engaged in wrongful conduct that has caused real harm to the plaintiff’s interests. If the plaintiff makes this showing, then the court must weigh the harm to the plaintiff caused by allowing the speaker to remain anonymous against the harm to the speaker’s interests in anonymity.
The court ruled that the tweet stating that Music Group “designs its products to break in 3-6 months” was legitimate commercial criticism, which is protected by the First Amendment. The tweet directed at Music Group’s CEO personally could not support a defamation claim brought by Music Group. The tweet alleging that Music Group “encourages domestic violence and misogyny” could be defamatory, the court noted, but there was more to it than just the words. The tweet linked to a video commercial promoting an audio mixer sold by Music Group. The commercial shows a man using the audio mixer to rebuff a woman’s demands that he stop working and come with her to a social function. The video was comedic in nature. Understood in context, the tweet was “joking and ironic” and did not “fall outside the First Amendment for being in poor taste,” the court wrote. The court ultimately decided that the balance of harms did not justify enforcing the subpoenas.
Music Group highlights some of the questions one should ask before launching into a lawsuit against an anonymous online poster:
- Do I have legitimate claims? You’ll need some evidence to support your claims to overcome the speaker’s First Amendment right to anonymity.
- Where do I find the identifying information? Typically, you’ll need to ask the owner of the website where the offending comments were posted. Sometimes that’s not enough because the user might have set up the account using a fake name and email address. In that case, you need to get other identifying information like the IP address of the user, determine the Internet Service Provider (ISP) associated with that IP address, and ask the ISP to disclose the user’s account information.
- How do I get the identifying information? A subpoena is typically the tool of choice. The rules governing subpoenas can be highly technical, so consulting an attorney is advisable. For example, in Music Group, Twitter refused to litigate in Washington whether it had to comply with a subpoena issued there, and the plaintiff had to open a separate miscellaneous proceeding in California, where Twitter is based, to enforce the subpoena.
Working through these questions will help you determine if it’s worth suing an anonymous online speaker.
In the last few years, we’ve seen how the private social media activity of employees can get employers in trouble for violating a variety of laws. The National Labor Relations Act. HIPAA. Title VII. Now you can add the Americans With Disabilities Act (ADA) to the list.
In Shoun v. Best Formed Plastics, Inc., 2014 WL 2815483 (N.D. Ind. June 23, 2014), a federal judge held that an employer may be liable under the ADA for an employee’s Facebook comments about the medical condition of a co-worker. George Shoun, an employee at Best Formed Plastics, sustained a workplace injury and took leave to recover. Shoun’s co-worker, Jane Stewart, learned about his injury because she processed his worker’s compensation claim and monitored his medical treatment for the company. Stewart posted this snarky message on her personal Facebook account: “Isn’t [it] amazing how Jimmy experienced a 5 way heart bypass just one month ago and is back to work, especially when you consider George Shoun’s shoulder injury kept him away from work for 11 months and now he is trying to sue us.”
Shoun sued the company, alleging that Stewart’s post made it liable for violating the ADA. According to Shoun, the post was visible to the business community. Shoun claimed that prospective employers refused to hire him because of the post, causing him emotional distress and mental pain and suffering.
The court refused to dismiss the ADA claim against the company, reasoning that Stewart obtained the information through an employment-related medical inquiry and then wrongfully disclosed it. As a result, Shoun could sue for violation of Section 102 of the ADA, which provides that any information relating to a medical condition of an employee obtained by an employer during “voluntary medical examinations, including voluntary medical histories, which are part of an employee health program available to employees at that work site,” must be “collected and maintained on separate forms and in separate medical files and [be] treated as a confidential medical record.” Moreover, the company could be liable for Stewart’s actions even though she posted the message on her private Facebook account on her own time.
Shoun is another reminder of how easily the lines between personal and professional conduct can get blurred on social media. Employers must train their employees about what they may and may not disclose on social media. It is almost never proper for an employee to share medical information obtained at work on his or her personal social media account. The confidential nature of medical information needs to be emphasized especially when training employees who handle workers’ compensation claims, medical leave requests, billing for health services, FMLA claims, etc.
Employees can get carried away on social media. US Airways learned this the hard way when its employee responded to a customer complaint on Twitter with an obscene picture of a woman and a toy jet. An apology and deletion of the tweet followed an hour later (an eternity in cyberspace). US Airways claims its employee made an “honest mistake,” and the incident has not spawned a lawsuit, but one can imagine situations in which the malicious online statements of an employee land the employer in legal trouble.
So what’s an employer to do? Thankfully, employers can find some solace in Section 230 of the federal Communications Decency Act (“CDA”), as a recent Indiana case illustrates. In Miller v. Federal Express Corp., an employee of a non-profit organization, 500 Festival, Inc. (“500 Festival”), and an employee of FedEx separately posted comments on media websites criticizing the plaintiff’s leadership of Junior Achievement of Central Indiana, which he ran from 1994 to 2008. Although the employees posted the comments using aliases, the plaintiff traced the comments back to IP addresses assigned to 500 Festival and FedEx and sued them for defamation.
The Indiana Court of Appeals affirmed the trial court’s dismissal of the defamation claims against 500 Festival and FedEx based on Section 230 of the CDA. Congress passed Section 230 to protect companies that serve as intermediaries for online speech from liability for harmful content posted by third parties. A defendant claiming Section 230 immunity must show that: (1) it is a provider or user of an interactive computer service; (2) the plaintiff’s claim treats it as the publisher or speaker of information; and (3) the information at issue was provided by another information content provider. Satisfying these three elements immunizes the defendant from suit, although the author of the offensive content could still be held liable.
It’s not difficult to see how Section 230 applies where, for instance, the operator of an online discussion forum is sued for defamation based on a comment posted by a forum member. The operator easily qualifies as an “interactive computer service” and can argue it is not liable for content that someone else published. But could a corporate employer qualify for Section 230 immunity? The court in Miller said yes, siding with precedent set by California and Illinois courts. An employer that provides multiple users on a computer network with access to the Internet qualifies as a provider of an interactive computer service. Since the defamation claims sought to hold 500 Festival and FedEx liable for allegedly publishing statements made by their employees, Section 230 barred the claims.
Controlling what employees say online can be a daunting task, but it’s nice to know that employers have some protection from legal liability for the “honest” (or not so honest) mistakes of employees.
“It’s my First Amendment right to say what I want!” The First Amendment is commonly invoked to justify personal expression. But did you know that the First Amendment applies only when the government is involved? For example, the First Amendment wouldn’t prevent a private company from firing an employee for making offensive comments about the governor. If the same employee worked for a government office, then the First Amendment might apply. As a lawsuit recently filed against the County of Maui illustrates, the First Amendment adds a layer of complexity for public employers dealing with the controversial social media activity of their employees.
The First Amendment Lawsuit Against Maui County
Neldon Mamuad is a volunteer Liquor Commissioner for Maui County and part-time aide to a Maui County Council member. In July 2013, Mamuad started a Facebook fan page called “TAGUMAWatch,” named after a Maui police officer well-known for strict enforcement of parking and traffic violations. The page was intended to enable Facebook users to post about “Taguma sightings” and share their thoughts about him. TAGUMAWatch gained popularity quickly and evolved into a discussion forum on a variety of topics including news, traffic, and politics.
Mamuad claims that he didn’t publicize his involvement with TAGUMAWatch until a TV news story about the page named him as its creator. Mamuad also didn’t identify himself as a County employee when posting to the page or suggest that he spoke for the County.
The County somehow linked Mamuad to the page. Allegedly under pressure from the County, Mamuad changed the page’s name to MAUIWatch. A few days later, Officer Taguma submitted a complaint to the County alleging harassment via the page. After notifying Mamuad of the complaint and conducting an investigation, the County determined that Mamuad had engaged in harassment and cyber-bullying through social media and required him to enroll in an employee counseling program.
On March 3, 2014, Mamuad sued the County in federal court for violating his First Amendment rights. As of the time of this post, Mamuad’s motion for a temporary restraining order (TRO) was pending.
When Does Employee Discipline Violate the First Amendment?
Most forms of internet expression qualify as “speech” under the First Amendment. That point has been driven home by recent legal developments, including a court decision that Facebook “likes” are protected by the First Amendment, a Ninth Circuit opinion recognizing that bloggers have the same First Amendment protections as traditional journalists, dismissal of an appeal from the termination of a public school teacher, and a federal lawsuit filed by a gun rights group alleging that the Honolulu Police Department censored comments on its Facebook page. Whenever the government is the one restricting speech, the First Amendment becomes relevant.
So how does a public employer know when it may discipline an employee for his or her social media conduct without violating the First Amendment? The general test in the Ninth Circuit, as spelled out in Mamuad’s TRO motion, looks at these factors:
- Did the employee speak on a matter of public concern?
- Did the employee speak as a private citizen or public employee?
- Was the employee’s protected speech a substantial or motivating factor in the adverse employment action?
- Did the government have an adequate justification for treating the employee differently from other members of the general public?
- Would the government have taken the adverse employment action even absent the protected speech?
Dahlia v. Rodriguez, 735 F.3d 1060, 1067 (9th Cir. 2013) (en banc). For a court to find that employee discipline violates the First Amendment, the first and third questions must be answered in the affirmative, the fourth and fifth questions in the negative, and, as to the second question, the employee must have spoken as a private citizen. The employee bears the burden of proving the first three factors. If the employee succeeds, the burden shifts to the government to prove the fourth and fifth factors.
Applying this test to employee social media conduct isn’t simple, but it helps government employers assess whether the First Amendment counsels against disciplinary action.
Complaint in the Mamuad lawsuit
Motion for TRO in Mamuad lawsuit (w/o attached declarations and exhibits)