What governments can learn from Google's and YouTube's ethical failure
Peppa Pig weeps as a dentist shoves a needle into her mouth, then screams as he brutally extracts her teeth.
If your child uses YouTube without supervision, there’s a good chance they have watched this animated video. Or the one where Peppa is attacked by zombies, in the dark. Or the one where Frozen’s Elsa is burned alive.
These videos will have been interspersed with other disturbing videos featuring bizarre, repetitive footage strung together by algorithms rather than human content creators. These meaningless fever-dreams show eggs being unwrapped by a disembodied set of hands, or costumed superhero characters with unsettling faces marching across the screen.
They are off-putting, and intuitively not suitable for children.
Such videos - from the most violent to the lowest-quality - carefully game YouTube's algorithm to target pre-schoolers, and have done so since 2014. They earn large sums of money for the perpetrators who upload them, generating millions of views from their target audience. YouTube's parent company, Google, also benefits from the advertising revenue they generate.
The problem has been called Elsagate, a neologism based on an early example of the problem involving (again) Frozen’s beloved character Elsa.
Governments and the ethical challenges of analytics and AI
Governments around the world have been developing more sophisticated operational analytics to deal with volumes of work that have grown beyond what a purely human workforce can manage. In democracies like Australia, the policy motive is generally in keeping with government's role: ensuring the right people get the right services at the right time, upholding community safety, and keeping people and organisations compliant with regulation.
The tools government is using - sophisticated algorithmic logic and, more recently, specialised AIs and machine learning - have been part of the private sector's toolkit for a while, keeping us buying more goods or (in YouTube's case) glued to our iPad screens as the software works out what makes us tick and what will hold our attention. As governments' digital transformation agendas progress, these same tools are being put to work in the public sector for pro-social ends.
Controlling the bad stuff
YouTube, as far as we know, relies on a handful of methods to identify and de-monetise or remove "bad" videos (a sketch of how these might combine follows the list below). These include:
Algorithms, which pick up and flag many but not all violations
User reporting of bad content - and for Elsagate, this means that by the time an adult sees and chooses to report a video, it has probably been watched by dozens to tens of thousands of young children
A human workforce addressing violations, reportedly numbering around 10,000 people
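To see why these three methods interact so badly, consider a minimal, entirely hypothetical Python sketch of how such signals might be combined into a triage pipeline. Every name and threshold below is an illustrative assumption, not YouTube's actual system.

    from dataclasses import dataclass

    @dataclass
    class Video:
        video_id: str
        model_score: float   # 0.0-1.0 from an automated classifier (assumed)
        user_reports: int    # number of viewer flags received so far
        views: int           # views accumulated before any action is taken

    def triage(videos, score_threshold=0.8, report_threshold=5):
        """Route each video to automatic flagging, human review, or no action.
        Thresholds are invented for illustration."""
        auto_flagged, review_queue = [], []
        for v in videos:
            if v.model_score >= score_threshold:
                auto_flagged.append(v)    # the algorithm catches many, but not all
            elif v.user_reports >= report_threshold:
                review_queue.append(v)    # reaches a human only after viewers complain
        # Human reviewers work the queue, most-watched first.
        review_queue.sort(key=lambda v: v.views, reverse=True)
        return auto_flagged, review_queue

    videos = [
        Video("a1", model_score=0.95, user_reports=0, views=12_000),
        Video("b2", model_score=0.40, user_reports=9, views=250_000),
        Video("c3", model_score=0.20, user_reports=1, views=3_000),
    ]
    flagged, queue = triage(videos)

The middle video is the Elsagate failure mode in miniature: the classifier scores it as benign, so it reaches a human reviewer only after nine reports and a quarter of a million views, while the third video never surfaces to a human at all.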
So why was I able to open YouTube and, with two reasonably benign searches ("story for kids") taking under 60 seconds, find a channel of horrific pre-school-targeted animations whose worst videos had thousands of views?
The answer is that these measures are a finger in the dyke of a problem woven into the foundations of YouTube's business model. YouTube has built a platform that prioritises getting as much content up as quickly as possible, and it has used automated moderation as the primary means of flagging bad videos. That recipe made Elsagate a catastrophic ethical failure with no good solution short of unpicking those foundations.
Imperfect solutions to a terrible problem
After the video was taken down, YouTube argued that it was impossible for humans to police its Trending pages in every country, given the sheer volume of videos that cycle through every hour.