TikTok calls in outside help with content moderation in Europe

TikTok is bringing in external experts in Europe in fields such as child safety, young people’s mental health and extremism to form a Safety Advisory Council to help it with content moderation in the region.

The move, announced today, follows an emergency intervention by Italy's data protection authority in January, which ordered TikTok to block users whose age it cannot verify after the death of a girl who, according to local media reports, died of asphyxiation after taking part in a "blackout challenge" on the video-sharing platform.

The social media platform has also been targeted by a series of coordinated complaints by EU consumer protection agencies, which put out two reports last month detailing a number of alleged breaches of the bloc’s consumer protection and privacy rules — including child safety-specific concerns.

“We are always reviewing our existing features and policies, and innovating to take bold new measures to prioritise safety,” TikTok writes today, putting a positive spin on needing to improve safety on its platform in the region.

“The Council will bring together leaders from academia and civil society from all around Europe. Each member brings a different, fresh perspective on the challenges we face and members will provide subject matter expertise as they advise on our content moderation policies and practices. Not only will they support us in developing forward-looking policies that address the challenges we face today, they will also help us to identify emerging issues that affect TikTok and our community in the future.”

It’s not the first such advisory body TikTok has launched. A year ago it announced a US Safety Advisory Council, after coming under scrutiny from US lawmakers concerned about the spread of election disinformation and wider data security issues, including accusations the Chinese-owned app was engaging in censorship at the behest of the Chinese government.

But the initial appointees to TikTok's European content moderation advisory body suggest its regional focus is more firmly on child safety, young people's mental health, and extremism and hate speech, reflecting the areas where it has so far faced the most scrutiny from European lawmakers, regulators and civil society.

TikTok has appointed nine individuals to its European Council, initially bringing in external expertise in anti-bullying, youth mental health and digital parenting; online child sexual exploitation and abuse; extremism and deradicalization; and anti-bias, anti-discrimination and hate crimes. It says it will expand this cohort as it adds more members to the body ("from more countries and different areas of expertise to support us in the future").

TikTok is also likely to have an eye on new pan-EU regulation that’s coming down the pipe for platforms operating in the region.

EU lawmakers recently put forward a legislative proposal that aims to dial up accountability for digital service providers over the content they push and monetize. The Digital Services Act, currently a draft working its way through the bloc's co-legislative process, will regulate how a wide range of platforms must act to remove explicitly illegal content (such as hate speech and child sexual exploitation material).

The Commission's DSA proposal avoided setting specific rules for platforms to tackle a broader array of harms, such as youth mental health, which the UK, by contrast, is proposing to address in its plan to regulate social media (aka the Online Safety Bill). However, the planned legislation is intended to drive accountability around digital services in a variety of ways.

For example, it contains provisions that would require larger platforms — a category TikTok would most likely fall into — to provide data to external researchers so they can study the societal impacts of services. It's not hard to imagine that provision leading to some head-turning independent research into the mental health impacts of attention-grabbing services. So the prospect is that platforms' own data could end up translating into negative PR for their services, i.e. if they're shown to be failing to create a safe environment for users.

Ahead of that oversight regime coming in, platforms have increased incentive to up their outreach to civil society in Europe so they’re in a better position to skate to where the puck is headed.

 
