You’ve adopted a social media policy after hearing all the warnings about employees behaving badly on social media. But do you enforce the policy consistently? Failure to do so can be risky business, as illustrated by a recent federal court decision, Redford v. KTBS, LLC, 2015 WL 5708218 (W.D. La. Sept. 28, 2015). The court in Redford allowed an employment discrimination claim to continue because of management’s uneven enforcement of its social media policy.
The social media policy of KTBS, a Louisiana TV station, instructs employees not to respond to viewer complaints on social media. Chris Redford, an on-air crime reporter for KTBS and a white male, posted a negative comment on his Facebook page in response to a viewer’s comment on a KTBS story. Redford was fired for violating the KTBS social media policy.
Redford sued KTBS for race and sex-based employment discrimination. Redford pointed to KTBS’ treatment of two other employees for their social media conduct. Lee, an on-air personality and an African-American female, responded multiple times to negative viewer comments on the official KTBS Facebook page. She received numerous warnings from management before being fired on the same day as Redford. Sarah Machi, an on-air personality and a white female, responded negatively to a KTBS viewer’s comment on her personal Facebook page, but received no warning or discipline. Based on this evidence, Redford argued that KTBS fired him not for violating the social media policy, but to prevent a potential lawsuit by Lee for race or sex discrimination. According to the court, Redford had a viable claim that he was treated less favorably than Lee and Machi because of his race or sex.
KTBS argued that it took no action against Machi because she posted her comments on her personal Facebook page, which was set to “private” so that only her Facebook friends could access it. Redford’s Facebook page did not have privacy filters turned on, and he often used his page to promote his work at KTBS. Since KTBS apparently considered comments posted on an employee’s “private” Facebook page to be outside the scope of its social media policy, the court reasoned that KTBS’ stated reason for firing Redford could be pretextual if Redford’s Facebook page was considered “private.” This issue had to be resolved at trial, so the court denied summary judgment to KTBS on the pretext issue.
Redford is a good reminder of the importance of consistent enforcement of social media policies. Even-handed enforcement is made easier by clearly spelling out the scope of the policy. If the policy makes a distinction between “company” and “personal” pages, for example, describe that distinction specifically and consider providing examples. Ambiguity and inconsistency are your worst enemies when it comes to enforcing a social media policy.
Have you ever been tempted to delete a social media message you posted that exposes you or your company to liability? Maybe it was a post that seemed like a harmless joke but now could turn into evidence in a wrongful termination lawsuit, or a photo that could cast you in an unflattering light. If it ever crossed your mind that no one will notice if you simply press the “delete” button, here’s a case illustrating why succumbing to the temptation doesn’t end well.
In Crowe v. Marquette Transportation Company, Gulf-Inland, LLC, 2015 WL 254633 (E.D. La. Jan. 20, 2015), Brannon Crowe sued his employer, Marquette, for injuries he sustained due to an accident that allegedly occurred at work. Marquette discovered a Facebook message Crowe had allegedly sent to a co-worker in which he admitted injuring himself while fishing. This prompted Marquette’s lawyers to serve Crowe with a discovery request for a complete copy of Crowe’s Facebook history.
Crowe’s response to the request was that he didn’t “presently” have a Facebook account. When confronted in his deposition with a printout of a Facebook message that appeared to have been sent from an account with the username “Brannon CroWe,” Crowe claimed that he stopped having a Facebook account around October 2014, and that his account had been hacked. To substantiate his hacking claim, Crowe pointed out rather unconvincingly that, unlike the username on the printout, there’s no capital “W” in his name.
Crowe wasn’t entirely forthcoming. Although Crowe was technically correct that he didn’t have an active Facebook account when he responded to the request in December 2014, the truth was that Crowe deactivated his Facebook account four days after receiving the discovery request in October 2014. To make things worse for Crowe, data in a deactivated Facebook account isn’t deleted. A deactivated Facebook account can be reactivated at any time. Needless to say, the court was displeased with Crowe’s attempts to evade discovery. The court ordered Crowe to provide Marquette with his entire Facebook account history and the login information for all his Facebook accounts.
Although Crowe involved an employee who tried to hide unhelpful social media information, the lessons from the case apply equally to employers. Deactivating a social media account doesn’t necessarily shield information in the account from discovery because the information is probably still available. Deleting a social media account also doesn’t always mean the information in the account is gone forever. It’s not unusual for social media providers to store deleted user data on their servers before permanently deleting the information. And even if social media information is truly deleted, that in itself can be problematic. A person (or company) has a duty to preserve evidence that’s relevant to reasonably anticipated litigation. Violating the duty to preserve can lead to unpleasant consequences, including court sanctions.
Learn from Crowe’s example. The next time you’re tempted to dispose of an incriminating Facebook post, deactivate the temptation, not your Facebook account.
Say you’re the president of Diamond Staffing Services. One morning, your phone is flooded with Twitter notifications. A few taps lead you to the source of the buzz: Someone opened a Twitter account parodying your company’s name and tweeted: “Work for Diamond? Pregnant = fired. We’re Diamond – we don’t care, LOL!” The tweet links to your company’s official Twitter account. Livid, you instruct your attorney to file a defamation lawsuit. Not so fast, your attorney says. First, you need to know who you’re suing, and the Twitter account was probably opened using fake information. What do you do?
This scenario is becoming more common as disgruntled employees and customers take to social media sites to air their grievances. Such users often post anonymously, and they have a First Amendment right to do so. To discover the identity of anonymous users, one must overcome First Amendment protections for anonymous speech.
A recent case illustrates the challenges of suing for defamation based on anonymous online statements. In Music Group Macao Commercial Offshore Ltd v. Does, 2015 WL 75073 (N.D. Cal. Mar. 2, 2015), a Washington-based company (Music Group) alleged that the defendants used anonymous Twitter accounts to defame the company and its CEO. Among other things, the anonymous users tweeted that Music Group “designs its products to break in 3-6 months,” “encourages domestic violence and misogyny,” and that its CEO “engages with prostitutes.” Music Group originally subpoenaed Twitter in Washington to reveal “the name, address, email address and any proxy address” of the owners of the accounts. Twitter, which is based in San Francisco, did not agree to have a court in Washington decide whether it had to comply with the subpoenas. Music Group then filed a miscellaneous proceeding in the district court in the Northern District of California to enforce the subpoenas.
The district court initially granted Music Group’s motion to enforce the subpoena, but after reviewing an amicus brief filed by Public Citizen, Inc. (a public interest law firm), the court corrected its order and denied the motion. The court first took stock of the various tests used by courts in analyzing First Amendment protection of anonymous online speech. The court chose to apply a test that focuses on the nature of the speech. Under that test, a party seeking to discover the identity of an anonymous speaker must first persuade the court that there is a “real evidentiary basis” for believing that the defendant has engaged in wrongful conduct that has caused real harm to the plaintiff’s interests. If the plaintiff makes this showing, then the court must weigh the harm to the plaintiff caused by allowing the speaker to remain anonymous against the harm to the speaker’s interests in anonymity.
The court ruled that the tweet stating that Music Group “designs its products to break in 3-6 months” was legitimate commercial criticism, which is protected by the First Amendment. The tweet directed at Music Group’s CEO personally could not support a defamation claim brought by Music Group. The tweet alleging that Music Group “encourages domestic violence and misogyny” could be defamatory, the court noted, but there was more to it than just the words. The tweet linked to a video commercial promoting an audio mixer sold by Music Group. The commercial shows a man using the audio mixer to rebuff a woman’s demands that he stop working and come with her to a social function. The video was comedic in nature. Understood in context, the tweet was “joking and ironic” and did not “fall outside the First Amendment for being in poor taste,” the court wrote. The court ultimately decided that the balance of harms did not justify enforcing the subpoenas.
Music Group highlights some of the questions one should ask before launching into a lawsuit against an anonymous online poster:
- Do I have legitimate claims? You’ll need some evidence to support your claims to overcome the speaker’s First Amendment right to anonymity.
- Where do I find the identifying information? Typically, you’ll need to ask the owner of the website where the offending comments were posted. Sometimes that’s not enough because the user might have set up the account using a fake name and email address. In that case, you’ll need to get other identifying information, such as the IP address of the user, determine the Internet Service Provider (ISP) associated with that IP address, and ask the ISP to disclose the user’s account information.
- How do I get the identifying information? A subpoena is typically the tool of choice. The rules governing subpoenas can be highly technical, so consulting an attorney is advisable. For example, in Music Group, Twitter, which is based in San Francisco, refused to comply with an order enforcing the subpoena issued by a Washington court. The plaintiffs in the case had to open a special proceeding in California to enforce the subpoena.
Working through these questions will help you determine if it’s worth suing an anonymous online speaker.
In the last few years, we’ve seen how the private social media activity of employees can get employers in trouble for violating a variety of laws. The National Labor Relations Act. HIPAA. Title VII. Now you can add the Americans With Disabilities Act (ADA) to the list.
In Shoun v. Best Formed Plastics, Inc., 2014 WL 2815483 (N.D. Ind. June 23, 2014), a federal judge held that an employer may be liable under the ADA for an employee’s Facebook comments about the medical condition of a co-worker. George Shoun, an employee at Best Formed Plastics, sustained a workplace injury and took leave to recover. Shoun’s co-worker, Jane Stewart, learned about his injury because she processed his worker’s compensation claim and monitored his medical treatment for the company. Stewart posted this snarky message on her personal Facebook account: “Isn’t [it] amazing how Jimmy experienced a 5 way heart bypass just one month ago and is back to work, especially when you consider George Shoun’s shoulder injury kept him away from work for 11 months and now he is trying to sue us.”
Shoun sued the company, alleging that Stewart’s post made it liable for violating the ADA. According to Shoun, the post was visible to the business community. Shoun claimed that prospective employers refused to hire him because of the post, causing him emotional distress and mental pain and suffering.
The court refused to dismiss the ADA claim against the company, reasoning that Stewart obtained the information through an employment-related medical inquiry and then wrongfully disclosed it. As a result, Shoun could sue for violation of Section 102 of the ADA, which provides that any information relating to a medical condition of an employee obtained by an employer during “voluntary medical examinations, including voluntary work histories, which are part of an employee health program available to employees at that work site,” must be “collected and maintained on separate forms and in separate medical files and [be] treated as a confidential medical record.” Moreover, the company could be liable for Stewart’s actions even though she posted the message on her private Facebook account in her own time.
Shoun is another reminder of how easily the lines between personal and professional conduct can get blurred on social media. Employers must train their employees about what they may and may not disclose on social media. It is almost never proper for an employee to share medical information obtained at work on his or her personal social media account. The confidential nature of medical information needs to be emphasized especially when training employees who handle workers’ compensation claims, medical leave requests, billing for health services, FMLA claims, etc.
Employees can get carried away on social media. US Airways learned this the hard way when its employee responded to a customer complaint on Twitter with an obscene picture of a woman and a toy jet. An apology and deletion of the tweet followed an hour later (an eternity in cyberspace). US Airways claims its employee made an “honest mistake,” and the incident has not spawned a lawsuit, but one can imagine situations in which the malicious online statements of an employee land the employer in legal trouble.
So what’s an employer to do? Thankfully, employers can find some solace in Section 230 of the federal Communications Decency Act (“CDA”), as a recent Indiana case illustrates. In Miller v. Federal Express Corp., an employee of a non-profit organization, 500 Festival, Inc. (“500 Festival”), and an employee of FedEx separately posted comments on media websites criticizing the plaintiff’s leadership of Junior Achievement of Central Indiana, which he ran from 1994 to 2008. Although the employees posted the comments using aliases, the plaintiff traced the comments back to IP addresses assigned to 500 Festival and FedEx and sued them for defamation.
The Indiana Court of Appeals affirmed the trial court’s dismissal of the defamation claims against 500 Festival and FedEx based on Section 230 of the CDA. Congress passed Section 230 to protect companies that serve as intermediaries for online speech from liability for harmful content posted by third parties. A defendant claiming Section 230 immunity must show that: (1) it is a provider or user of an interactive computer service; (2) the plaintiff’s claim treats it as the publisher or speaker of information; and (3) the information at issue was provided by another content provider. Satisfying these three elements immunizes the defendant from suit, although the author of the offensive content could still be held liable.
It’s not difficult to see how Section 230 applies where, for instance, the operator of an online discussion forum is sued for defamation based on a comment posted by a forum member. The operator easily qualifies as an “interactive computer service” and can argue it is not liable for content that someone else published. But could a corporate employer qualify for Section 230 immunity? The court in Miller said yes, siding with precedent set by California and Illinois courts. An employer that provides Internet access to multiple users on a computer network qualifies as a provider of an interactive computer service. Since the defamation claims sought to hold 500 Festival and FedEx liable for allegedly publishing statements made by their employees, Section 230 barred the claims.
Controlling what employees say online can be a daunting task, but it’s nice to know that employers have some protection from legal liability for the “honest” (or not so honest) mistakes of employees.