In the wake of Cleveland man Steve Stephens (dubbed "the Facebook murderer") uploading a Facebook video of himself shooting and killing an innocent 74-year-old man, many questions remain. One of them: How do we prevent future acts of violence from being broadcast on social media?
Facebook CEO Mark Zuckerberg weighed in on the tragedy from Facebook's F8 developer conference in San Jose, California, on Tuesday, saying that the company has work to do to prevent violent videos from streaming on the social network.
“We have a lot more to do here,” Zuckerberg said. “We are reminded of this this week by the tragedy in Cleveland.”
“We will keep doing all of what we can to keep tragedies like this from happening.”
While Zuckerberg's words were meant to be reassuring, their vagueness is still a bit off-putting. Deb Gabor of Sol Marketing, a brand strategy consultancy that works with household names like Dell, Microsoft and NBCUniversal, says that if Facebook wants to become the full-fledged media company it promised its users it would be (rather than a mere communications platform), it has a duty to be responsible about the type of content it hosts.
"Zuckerberg and Facebook have been very open about the company's desire to be viewed as a media company that creates, spreads, and encourages users to engage with content globally, without limitations," Gabor said. "However, with that positioning comes responsibility. That means that, in an effort to honor its brand promise to customers, Facebook will need to step up their efforts to define content standards and how to enforce them. If they don't, it's an open invitation for the courts and legislators to step in and interpret and make new rules."
So how exactly can Facebook define and enforce its content standards?
It's true that Facebook has a team of responders ready to take action against violent or inappropriate videos, but that team relies on user reports to identify them.
In Stephens' case, there were three videos in total. Two were uploaded: one announcing Stephens' intent to kill, the other showing the actual murder. The third was a live stream in which Stephens, who later shot and killed himself in Erie, Pennsylvania, after a police pursuit, confessed to the crime as well as other unconfirmed shootings.
The third video, the Facebook Live video, was reported by a user soon after it ended. However, the video of Stephens committing the shooting was not reported until almost two hours after it was uploaded. The first video with Stephens' intent to kill was never reported.
That means that anyone who could see the video of the shooting under Stephens' privacy settings, and did watch it in those 108 minutes, chose not to report it.
It took Facebook 23 more minutes after the video of the shooting was reported to deactivate Stephens' account and take down his videos.
This isn't the first time Facebook has inadvertently broadcast a gruesome crime. Since March of last year, at least 10 crimes, including gang rape and murder, have been broadcast via Facebook, whether as a live stream or an uploaded video. In January, four people were charged with hate crimes after kidnapping and torturing an 18-year-old with special needs and broadcasting the ordeal in a 28-minute Facebook Live video. Suicides, too, have reportedly taken place on Facebook Live, particularly among teens.
The fact of the matter is that the more violent acts are hosted on Facebook, the more violence is perpetuated in our culture. As such acts become more common, not only is the previously safe space of Facebook and other social media criminalized or terrorized, but we also slowly become desensitized to the horror of it all. And if crime becomes more visible on social media, you can expect it to become more visible in real life, too.
While Zuckerberg said that a big focus for Facebook going forward would be strengthening communities and civic engagement, a more actionable approach also seems necessary.
Will Facebook fine-tune its algorithms, using artificial intelligence to expedite its review process? It seems unlikely that the social media kingpin would institute any kind of advance screening before allowing users to share videos, simply because of the volume of content submitted each day. And how do you moderate a service like Facebook Live, whose whole appeal lies in its spontaneity?
The answer might lie in online safety courses or programs that allow Facebook users to recognize signs of violence in advance.
For example, after the Sandy Hook massacre four years ago, two parents who each lost a child that day founded Sandy Hook Promise, an organization focused on preventing gun violence in schools. Programs like "Start With Hello" and "Say Something" train students to recognize signs of social isolation and to reach out to their peers and administrators if they see signs of violence. It's similar to how the FBI trains law enforcement to recognize the behavioral warning signs of a potential gunman.
If Facebook were to implement online safety training similar to Sandy Hook Promise's, would less violence be broadcast on the platform? Or will Facebook fix the issue another way, perhaps by making it easier for troubled users to find help and harder for violent media to be shared?
Time will tell, but it's clear that something must be done. Innocent people are getting hurt, both directly and indirectly. Until then, we await the measures Facebook and Zuckerberg will take.