YouTube sent their content moderators home from the office to keep them safe during the COVID-19 pandemic, relying on machine learning to handle the demonetization or removal of inappropriate videos in their absence. As most content creators know too well, machine learning doesn’t always do a great job of flagging videos.
Onsite, moderators review flagged videos to determine whether machine learning made a mistake. Every correction provides feedback that refines the decision-making process for flagging future videos. With the moderators gone, wrongly flagged videos will remain unmonetized or offline until someone gets around to reviewing them.
The outcry from creators was: “Why can’t the moderators work from home?!”
YouTube released a video explaining that “video reviewers” can’t work from home because their work is sensitive or because some areas of the world lack the right technical infrastructure, an explanation that didn’t reveal the whole truth. Having worked at Google before and after the Great Recession, I can tell you why moderators can’t work from home.
It’s not a technical issue; it’s the business model.
If you haven’t heard it on the Internet, I’ve quit YouTube for the rest of the year. No new videos until 2020. I’ve already lost three subscribers for not putting out a video on Wednesday. I may lose more subscribers when I don’t put out a video this Sunday. Or YouTube could be deleting closed accounts that were subscribed to my channel. Meanwhile, I’m busy re-branding my channel and website. Both have the new header graphics. See you in 2020!
Updated 29 December 2019: Looks like YouTube had a glitch, with the subscriber count being lower in the back end than in the front end. The three subscribers that I “lost” since Christmas have come back.
After news broke that pedophiles were using time codes in comments to tag “sexually suggestive” content in family-friendly videos on YouTube, I kept a close watch on my only video with young children: a musical performance at a public event that had attracted the wrong kind of audience six months earlier.
When I saw the initial traffic spike for the video in my channel’s real-time analytics at 5:30 PM on Friday, February 22, 2019, I disabled the comment section for that video as a precaution.
I went to the tech news website that my dedicated band of trolls called home and located the video URL in an anonymous comment for an article about YouTube’s latest child safety problem. Readers were asked to report me as a pedophile because a little boy had his “peewee” hanging out at a specified time code in the video.
The video shows a little boy with his hands over the front of his t-shirt and the waistband of his shorts while running around.
One comment called the anonymous comment a perfect example of the nonsense that the YouTube community is struggling with. Another comment told the troll to get psychiatric help.
After I sent an email to the CEO of the tech news website at 5:45 PM, the entire thread was deleted 15 minutes later. The traffic spike ended with 14 new views for the video. Each view represents a person who wanted to see a little boy with his “peewee” hanging out, either curious that such content exists or hopeful that it was real.
Did four AI robots kill 29 scientists at a Japanese weapons research lab? A tweet with that headline and a linked video briefly lit up social media for a week in December 2018. The mainstream media didn’t bother to cover it, which believers took as evidence of a government cover-up. Like most people who saw the story, I thought it was too sensational to be true. When I saw the thumbnail for the linked video, I knew it was fake news.