YouTube Cut Ties With Logan Paul: Here’s What Should Happen Next
YouTuber Logan Paul has set off a new conversation around what exactly social media sites should be doing about damaging, hurtful, or violent content. After Paul himself removed his viral video showing a recently deceased person in Japan’s “suicide forest,” and after he got busted monetizing his apology video, YouTube finally took action, pausing all future original projects with the viral prankster.
But the conversation surrounding Paul’s fallout with YouTube is just another in a long series of missteps, blunders, and flat-out mistakes social channels have made (and continue to make) as users post content that is either wrongfully deemed offensive or genuinely offensive yet left online for anyone to see. This raises the question: Is it the responsibility of sites like YouTube, Facebook, and Twitter to step in when their users’ content crosses the line? And who gets to decide what that line even is?
That debate has been a huge issue since way before the new year’s Logan Paul fiasco. Starting back in 2015, Instagram came under fire for removing pics of women’s nipples. The company insisted that a woman’s nipple was too sexual and therefore deemed any pic featuring one too risqué. The problem with this decision, however, was that nursing mothers were being kicked off their accounts for sharing photos of breastfeeding their babies, with nary a nipple in the pic. Because of this ban, #FreeTheNipple was born.
The idea was to desexualize nipples, especially because breasts are designed to feed babies. While Instagram still often deletes breastfeeding pics, #FreeTheNipple has become more normalized over the last few years (there are nearly four million pics on the site featuring the hashtag!). It’s Instagram’s parent company Facebook, however, that remains problematic when it comes to potentially damaging content.
While women are getting banned from Facebook for declaring that “men are scum,” the site has recently been under the microscope for allowing white supremacist rhetoric and other hate speech to go unchecked. Since the 2016 election, Facebook and its CEO Mark Zuckerberg have faced increased scrutiny for mishandling how false or otherwise damaging information spreads to the masses.
The German government responded to Facebook’s leniency by passing a law that forces the company to remove hurtful content within 24 hours or face fines of up to $60 million. And although Facebook already has a reporting system in place for hurtful language and content, ProPublica independently tested the platform’s speech-monitoring system and found it spotty at best: of 49 posts examined, 22 were handled incorrectly by the company’s internal reporting system.
Facebook isn’t the only massive platform to mishandle content restrictions. Before YouTube decided to sever ties with Paul, the company placed age restrictions on LGBTQ+ videos while Paul’s insensitive antics netted the prankster millions in advertising dollars.
The company has also taken heat for allowing what some call “horrific” transphobic advertising on the site (YouTube removed the ads only after being dragged on social media). In both that instance and the recent Logan Paul fallout, YouTube has been criticized for not acting until users pushed back. But does the company have to answer to anyone at all?
In a world where consumers on social media hold corporations to higher standards, many companies implement policies that ostensibly protect consumers and staff from becoming targets of hate. But without clear guidelines to keep human error out of decisions about potentially harmful content, companies like YouTube, Facebook, and Twitter will often remove anti-racist or anti-sexist posts while leaving bona fide hate speech online.
As with any company, Facebook, YouTube, and the rest can run and manage their sites however they choose, but social media in general has given people a way to organize, mobilize, and demand change, something that has bitten these companies in the behind before. So while they technically don’t have to do anything, it’s probably better in the long run that they do.
What do you think about how social networks target hate speech? Let us know @BritandCo!
(Photos via Emma McIntyre/Getty Images for iHeartMedia; Chris Jackson/Getty)