As news unfolded about Tuesday's YouTube shooting, a chilling motive emerged. Ahead of the incident, the alleged shooter had posted videos maligning the service, doing so as a former monetized creator on the site.
“I’m being discriminated [against] and filtered on YouTube, and I’m not the only one,” alleged shooter Nasim Aghdam said in a video that was shared after her identity as the shooting’s current, sole fatality was revealed. “My workout video gets age-restricted. Vegan activists and other people who try to point out healthy, humane, and smart living, people like me, are not good for big business. That’s why they are discriminating [against] and censoring us.”
The shooting has put a massive spotlight on this topic, which, until now, had been more likely to appear in angry YouTube videos than in major newspaper headlines. But well before this shooting, Aghdam was just one of many voices on the site crying foul about YouTube's policies.
Thus, it's time to put into perspective a topic that has become quite inflamed in the past 24 hours.
We can start by rewinding to roughly one year ago, when YouTube and its parent company, Google, began facing a public backlash from advertisers. This "adpocalypse" arguably peaked when the UK government froze all ad spending on YouTube in March 2017 after finding that its ads had been slapped onto "extremist videos" without the government's consent. Other major advertisers, both in the US and abroad, followed suit. More alarmingly for Google, advertisers kept leaving even after YouTube offered assurances of changes and overhauls.
The proposition for advertisers was simple: so long as their ads might appear alongside questionable videos of any kind, bailing was the smart business move.
YouTube’s response was equally extreme: a sweeping expansion of its “age-restricted” video designation and, thus, a “demonetization” slap on any videos that fell on the wrong side of that label. To review, that designation includes, but is not limited to, the following:
- Vulgar language
- Violence and disturbing imagery
- Nudity and sexually suggestive content
- Portrayal of harmful or dangerous activities
Should YouTube decide that a video earns the site's age-restricted tag, that video immediately becomes ineligible for pre-roll ads, which are YouTube's primary way of delivering income to its creators.
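The rule described above reduces to a simple gate: a video flagged into any restricted category loses its ads. A minimal sketch of that logic, assuming illustrative category names (these are not YouTube's actual labels or API):

```python
# Hypothetical sketch of the demonetization gate described above.
# Category names are illustrative placeholders, not YouTube's real taxonomy.
RESTRICTED_CATEGORIES = {
    "vulgar_language",
    "violence_or_disturbing_imagery",
    "nudity_or_sexually_suggestive",
    "harmful_or_dangerous_activities",
}

def eligible_for_preroll_ads(flags):
    """A video flagged into any restricted category loses pre-roll ads."""
    return not (set(flags) & RESTRICTED_CATEGORIES)

print(eligible_for_preroll_ads([]))                                  # True
print(eligible_for_preroll_ads(["violence_or_disturbing_imagery"]))  # False
```

The sketch highlights the all-or-nothing nature of the policy: a single flag, accurate or not, is enough to cut off the video's main revenue stream.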
On its face, this was meant to look like a win-win-win. Ads would be narrowly targeted to the giant site’s “safest” content, and YouTube creators could still post whatever they wanted, even if one-off videos pushed the service’s Terms of Service to their limits. (Those four descriptors in the age-restricted list do not technically violate YouTube’s TOS.) YouTube could continue hosting the bazillions of videos being uploaded every minute and keep advertisers in a protective silo.
Crushing potential revenue?
But, then, how is YouTube supposed to keep all of that content straight while protecting advertisers? By hiring hundreds of thousands of moderators to individually review every upload? The best gauge we have of YouTube's actual manpower is the company's promise of "10,000" moderators, made in the wake of a potentially offensive video posted by Logan Paul in January. Of course, that incident mostly served as a reminder that even thousands of moderators can let questionable content through.
More crucial to today's conversation is the fact that many of these moderators, according to YouTube's statements at the time, were focused on "Google Preferred" content creators, as opposed to smaller creators who may be more likely to face automated filtering scrutiny. YouTube has not been forthcoming about its exact moderation policies, but by mid-2017, YouTube creators had already begun recognizing apparent auto-moderation trends.
The most obvious trends came from videos about video games. YouTube creators tested this in 2017 by uploading clips containing imagery and phrases from certain gun-heavy games to see which would be flagged as "violent" (and, thus, age-restricted). One of the earliest high-profile examples came when YouTube personality TotalBiscuit asked followers in August 2017 to try to provoke YouTube's automatic demonetization filters. Other creators claim that YouTube's policies have been wildly inconsistent when processing everything from gun images to phrases found in games, like "kill streak."
Affected creators began turning to crowd-funded models like Patreon to keep their YouTube careers alive. That means they still rely on YouTube's massive server backend to host content but have given up on YouTube's combination of unpredictable, automatic demonetization strikes and an inconsistent, slow appeals process. YouTube creators can appeal any age-restriction tag, but as Kotaku reported last year, that five-to-seven-day process can crush potential revenue for a video about current events or the latest games, especially if a majority of its views come in those first five to seven days.
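The arithmetic behind that complaint is worth spelling out. A topical video's views are front-loaded, so a video demonetized at upload and restored only after a week-long appeal misses most of what it would ever earn. A sketch with made-up numbers (the view-decay curve and revenue rate below are assumptions for illustration, not real YouTube figures):

```python
# Hypothetical 14-day view curve for a topical video: views are heavily
# front-loaded, which is typical for news- or release-driven content.
daily_views = [100_000, 60_000, 35_000, 20_000, 12_000, 8_000, 5_000,
               3_000, 2_000, 1_500, 1_000, 800, 600, 500]
rpm = 2.00  # assumed revenue per 1,000 monetized views, in dollars

def revenue(views_per_day, monetized_from_day):
    """Sum revenue only over the days on which the video carries ads."""
    return sum(v / 1000 * rpm
               for day, v in enumerate(views_per_day)
               if day >= monetized_from_day)

full = revenue(daily_views, 0)          # monetized from day 0
after_appeal = revenue(daily_views, 7)  # ads restored after a 7-day appeal
print(f"lost share of revenue: {1 - after_appeal / full:.0%}")
```

Under these assumed numbers, roughly 96 percent of the potential revenue evaporates during the appeal window, which is why creators describe a successful appeal as a hollow victory.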
YouTube's automatic filters have wreaked demonetization havoc across a wide swath of video types, including those about conservative politics and LGBTQ issues. However, keeping track of which videos are impacted (and for how long) is itself quite difficult, owing to how many channels may be temporarily hit only to have those strikes reversed after an inefficient review process. The above-linked video about LGBTQ videos, for example, was itself demonetized when it was uploaded; it has since been whitelisted for ads.
One video made by alleged YouTube HQ shooter Aghdam, which was successfully archived before most of her online presence was wiped, focused primarily on YouTube flagging a video she’d recently made. Her complaint video included footage of the demonetized video, which showed a fully clothed Aghdam working out via sit-ups and leg lifts, as well as an allegation that YouTube rejected her appeal, telling her that the video was “inappropriate.”
Because the archived video doesn’t include a full, unedited version of the cited clip, it’s hard to determine whether it contained clear violations of YouTube’s rules. YouTube did not respond to Ars’ questions about Aghdam’s YouTube channel.
YouTube, then, is in the same boat as other sites whose reputations revolve around user-generated content. The streaming site can choose to compensate its creators however it sees fit, but it risks losing them in the process. And even after Aghdam's story is better explained via official police investigations, her video's specific gripe, that automated moderation leaves the site's creators in an unsustainable position, will likely not go away.
“Make your butt sexy”
[Update, 6:24pm ET: Shortly after this article’s publication, YouTube analytics firm ChannelMeter reached out to Ars Technica with YouTube viewership data estimates. According to ChannelMeter’s unofficial estimates, Aghdam had racked up nearly 9 million video views across three channels since roughly 2013. (ChannelMeter said it was unable to track data on a fourth channel she apparently controlled.) The data calculations included accurate references to Aghdam videos that now only exist via Google cache searches.
Most of her top-performing videos were uploaded with titles in Farsi script, and an immediate scan of the titles revealed that all three of her top-performing videos included the word “سکسی,” a Farsi transliteration of the word “sexy” in the context of workout and exercise videos. (“Make your butt sexy and bigger with exercises” was one example.) Another video, described as a “dance video,” has a title that directly translates to “Nasim Sabz balloon breasts.” Nasim Sabz was Aghdam’s nickname for that account.
These titles do not clarify the exact content of her videos, but they do suggest that Aghdam's accounts, with views in the millions, could have been a perfect-storm candidate for demonetization by YouTube's auto-moderation bots, whether her content was innocent or rule-breaking.]