I just heard of this from my brother, whose children are terrified of it. It’s so grotesque.
So essentially, this “Momo Challenge” made its way onto YouTube Kids videos (much like the self-harm one you might’ve seen going around). In the same manner, it comes up about halfway through a video, when parents are more than likely not paying attention, and tells the kid they can’t tell anyone about the challenges or their friends/family/themselves will be killed or seriously hurt. Then it gets into the challenges.
They start out simple. Break a toy, make a mess of your room, etc. But then they get more and more disturbing. Things like self-harm and suicide make their way into the challenges, once the kids have become invested.
And I need to stress this: they aren’t allowed to mention this to anyone under threat of harm or death to the people they love.
My niece burst into tears when my brother asked. My nephew, who’s 13, was scared out of his mind and begged my brother to drop it, because he can’t talk about it.
My brother is cutting his kids off from YouTube, and if you have children, I strongly suggest you do the same. YouTube Kids is no longer safe. Period.
Even if you don’t have kids, please, by god, reblog and share. People are ending their lives. They’re scared senseless. YouTube needs to own up and do something, and until then we need to protect the children.
So if you’re easily disturbed by creepy images, don’t look it up. I just did and the screenshots immediately triggered an anxiety response and I am terrified. And I’m an adult.
woooooow
I mean, YouTube denied its presence because it doesn’t actually exist.
The Atlantic has a good article about it (https://www.theatlantic.com/technology/archive/2019/02/momo-challenge-hoax/583825/). Warning that it does have a picture of the statue that is being used as the “face” of the challenge, and the statue itself may freak people out. The article also talks about how this sort of thing gets started, which may be interesting to some people.
There are things on YouTube to be worried about: that their algorithms end up promoting extremist views and that their moderation systems don’t properly detect things. But this “challenge” isn’t one of them.