YouTuber Logan Paul has set off a new conversation around what exactly social media sites should be doing with damaging, hurtful, or violent content. After his viral video showing a recently deceased person in Japan's "suicide forest" was removed by the star himself, and he got busted monetizing his apology video, YouTube finally did something about Paul's videos, pausing any future work with the viral prankster entirely.

But the conversation surrounding Paul's YouTube exit is just another in a long series of missteps, blunders, and flat-out mistakes social channels have made (and continue to make) while users create content that is either wrongfully deemed offensive or is genuinely offensive and simply sits online for anyone to see. This raises the question: Is it the responsibility of sites like YouTube, Facebook, and Twitter to step in when their users' content crosses the line? And who gets to decide where that line even is?

That debate has been a huge issue since well before the new year's Logan Paul fiasco. Back in 2015, Instagram came under fire for removing pics of women's nipples. The company insisted that a woman's nipple was too sexual and therefore deemed any pic featuring one too risqué. The problem with this decision, however, was that nursing mothers were being kicked off their accounts for sharing photos of breastfeeding their babies, with nary a nipple in the pic. Because of this ban, #FreeTheNipple was born.

The idea was to desexualize nipples, especially because breasts are designed to feed babies. While Instagram still often deletes breastfeeding pics, #FreeTheNipple has become more normalized over the last few years (there are nearly four million pics on the site that feature the hashtag!). It's Instagram's parent company Facebook, however, that remains problematic in the face of potentially damaging content.

While women are getting banned from Facebook for declaring "men are scum," the site has been under the microscope recently for allowing white supremacist and other hate speech to reign freely. Since the 2016 election, Facebook and its CEO Mark Zuckerberg have been under increased scrutiny for mishandling how false or otherwise damaging information gets out to the masses.

The German government took Facebook's leniency into its own hands by passing a law that forces the company to remove hurtful content within 24 hours or face fines of up to $60 million USD. And although Facebook already has a reporting system in place for hurtful language and content, ProPublica independently researched the social platform's speech monitoring system and found it spotty at best. The investigation found that, of 49 posts, 22 were incorrectly handled by the company's internal reporting system.

Facebook isn't the only massive media outlet to mishandle content restrictions. Before YouTube made the decision to sever ties with Paul, the company established an age restriction on LGBTQ+ videos while Paul's insensitive antics netted the prankster millions in advertising dollars.

The company has also come under fire for allowing what some call "horrific" transphobic advertising on the site (YouTube removed the ads only after being dragged on social media). In both that instance and the recent Logan Paul fallout, YouTube has been criticized for not acting until the company came under fire from users. But does the company have to answer at all?

In a world where corporations are being held to higher standards by consumers on social media, many companies implement policies that ostensibly protect consumers and staff from being targets of hate. But without clear regulations to keep human error out of decisions about potentially harmful content, companies like YouTube, Facebook, and Twitter will often remove anti-racist or anti-sexist posts while leaving bona fide hate speech online.

As with any company, Facebook, YouTube, and others can run and manage their sites in whatever way they choose. But social media, in general, has created a way for people to organize, mobilize, and demand change, something that has bitten these companies in the behind before. So, while they technically don't have to do anything, it's probably better in the long run that they do.

What do you think about how social networks target hate speech? Let us know @BritandCo!

(Photos via Emma McIntyre/Getty Images for iHeartMedia; Chris Jackson/Getty)