
“I have moderated hundreds of horrific and traumatizing videos”

A man looks at a computer screen reflected in his glasses (Getty Images)

Social media moderators look for disturbing or illegal photos and videos and then remove them

In recent months the BBC has explored a dark, hidden world – a world where the worst, most horrific, disturbing and in many cases illegal online content ends up.

Beheadings, mass murders, child abuse, hate speech – all end up in the inboxes of a global army of content moderators.

You don’t see or hear about them often — but they’re the people whose job it is to review and then, if necessary, delete content that’s either reported by other users or automatically flagged by tech tools.

The issue of online safety is increasingly coming to the fore as technology companies face greater pressure to quickly remove harmful material.

And while a lot of research and investment is going into technical solutions to help, for now it is still largely human moderators who have the final say.

Moderators are often employed by third-party companies but work on content posted directly to major social networks like Instagram, TikTok, and Facebook.

They are based all over the world. The people I spoke to while producing our series The Moderators for Radio 4 and BBC Sounds were mostly based in East Africa and had all since left the industry.

Their stories were harrowing. Some of what we recorded was too brutal to broadcast. Sometimes my producer Tom Woolfenden and I would finish a recording and just sit in silence.

“If you pick up your phone and then go to TikTok, you will see a lot of activity, dancing, you know, happy things,” says Mojez, a former moderator from Nairobi who worked on TikTok content. “But behind the scenes, I personally moderated hundreds of horrific and traumatizing videos.

“I took it upon myself. Let my sanity be destroyed so that general users can continue to carry out their activities on the platform.”

Several legal claims are currently under way alleging that the work has destroyed the mental health of such moderators. Some of the former workers in East Africa have formed a union.

“Really, the only difference between me logging onto a social media platform and watching a beheading is that someone is sitting in an office somewhere watching the content for me and vetting it so I don’t have to,” says Martha Dark, who runs Foxglove, a campaign group supporting the legal action.


Mojez, who previously removed harmful content on TikTok, says his mental health has been affected

In 2020, Meta, then known as Facebook, agreed to pay $52 million (£40 million) in compensation to moderators who had developed mental health problems as a result of their work.

The lawsuit was initiated by a former US-based moderator, Selena Scola. She described moderators as “guardians of souls” because they see so much footage showing the final moments of people’s lives.

The former moderators I spoke to all used the word “trauma” to describe the impact the work had on them. Some had difficulty sleeping and eating.

One described how a baby’s crying had caused a colleague to panic. Another said it was difficult for him to interact with his wife and children because of the child abuse he had witnessed.

I expected them to say that this work was so emotionally and mentally demanding that no human should have to do it – that they would fully support automating the entire industry, with AI tools developed further to take on the task.

But they didn’t.

What came through very clearly was the moderators’ immense pride in the role they had played in protecting the world from online harm.

They saw themselves as a vital emergency service. One said he wanted a uniform and a badge, comparing himself to a paramedic or firefighter.

“Not even a second was wasted,” says someone we have called David. He asked to remain anonymous, but he had worked on material used to train the viral AI chatbot ChatGPT, so that it was programmed not to reproduce such horrific material.

“I am proud of the people who made this model what it is today.”


Martha Dark advocates for social media moderators

But the very tool David helped train could one day compete with him.

Dave Willner is the former head of trust and safety at OpenAI, the creator of ChatGPT. He says his team developed a rudimentary moderation tool, based on the chatbot’s underlying technology, that could identify harmful content with about 90% accuracy.

“When I kind of realized, ‘Oh, this is going to work,’ I honestly choked up a little bit,” he says. “[AI tools] don’t get bored. And they don’t get tired and they don’t get shocked… they are tireless.”

However, not everyone is convinced that AI is a panacea for the beleaguered moderation sector.

“I think it’s problematic,” says Dr. Paul Reilly, lecturer in media and democracy at the University of Glasgow. “Obviously AI can be a pretty direct, binary way of moderating content.

“It can lead to over-blocking of free speech, and of course it can miss nuance that human moderators would recognize. Human moderation is essential to platforms,” he adds.

“The problem is that there aren’t enough of them and the job is incredibly damaging to those who do it.”

We also approached the technology companies featured in the series.

A TikTok spokesperson says the company knows that content moderation is not an easy task, and that it is committed to fostering a caring working environment for employees. This includes providing clinical support and creating programs that support moderators’ wellbeing.

They add that videos are first reviewed by automated technology, which they say removes a large amount of harmful content.

OpenAI – the company behind ChatGPT – says it is grateful for the important and sometimes challenging work that human workers do to train the AI to recognize such photos and videos. A spokesperson adds that OpenAI, along with its partners, enforces policies to protect the wellbeing of these teams.

And Meta – which owns Instagram and Facebook – requires all the companies it works with to provide 24/7 on-site support from trained professionals. Additionally, moderators can customize their review tools to obscure graphic content.

The Moderators is on BBC Radio 4 from Monday November 11th to Friday November 15th at 1:45pm GMT, and on BBC Sounds.
