TikTok is rushing to build teams around the world to moderate its streams of viral videos, amid growing concerns that its young users are being exposed to the same sort of toxic content that has plagued YouTube and Facebook.
The Chinese-owned app, which has amassed over 1 billion monthly users in just three years, will also devolve all decision-making about which videos are acceptable to its local teams in the US, Europe and India, after running into criticism over whether it censors content in line with Chinese government guidelines.
In the US, where TikTok has topped the download charts at times this year, responsibility for content has been fully localised and there are plans to add more "subject matter experts" in 2020, said Eric Han, its US head of safety.
But the company is severely lagging behind its Silicon Valley rivals, who have had more time to grapple with how to protect young users from disturbing and illegal posts.
Working with third-party analysts, the Financial Times found evidence of violence, hate speech, bullying and sexually explicit content on TikTok, in some cases as part of trending topics with millions of posts. Several current and former ByteDance employees told the FT that a lack of experience among the policy teams in particular leaves them ill-equipped to deal with its moderation problems.
"It can be quite a dangerous place for [ young people ]," said Darren Davidson, editor in chief at Storyful, a social media intelligence agency. "Broadly, we are seeing what we saw on the traditional platforms four or five years ago."
Making it up
To date, most of TikTok's content moderation has taken place in Beijing, at the offices of its $75 billion parent company ByteDance.
But oversight of the policies – particularly around political content – has been haphazard, according to several people familiar with the situation. While the company has banned political advertising, recent media reports have highlighted concerns over censorship, which TikTok denies.
Other policies – such as a decision to suppress videos posted by users with disabilities and by lesbian and gay users, which the company argues was designed to prevent bullying – have prompted a public backlash.
Several people with close links to the Beijing office said that bias, errors and poor judgment calls were the result of inexperienced teams.
According to a person familiar with the company, a typical content policymaker is often hired straight after returning from university abroad, or with one to two years of work experience, and asked to make editorial decisions about what content is appropriate for countries with vastly different political contexts.
“I feel everyone – ByteDance’s competitors included – is making it up as they go along,” said the person. Another said that a lack of communication and silos within the company – even within the same business division – were thwarting its efforts to develop a uniform approach.
Global to local
TikTok has now begun to shift its content policy and moderation practices from Beijing to local teams who are better equipped to understand the cultural nuances of a particular region.
As part of this, it has set up individual “fully autonomous” safety and moderation teams in the US, and is aiming to do the same in Europe.
The Los Angeles-based Mr Han said that TikTok was “growing [its US safety team] rapidly and exponentially, making sure we have the right resources and people in place”. He added that he intended to “carry that forward momentum” into 2020.
He did not comment on the size of TikTok’s trust and safety teams, but said they used a combination of automated systems and human moderators, including contractors, to monitor content on the platform.
In October, the company said it had hired law firm K&L Gates – including former congressmen Bart Gordon and Jeff Denham – to review and "increase transparency" around its US content moderation policies and to help bolster that team.
Dark side of TikTok
Still, several researchers told the Financial Times that they had discovered unsuitable content on TikTok that appeared to go against its policies, while campaigners urged the platform to introduce better safety features.
L1ght, a group that scans social media platforms for content that is harmful to children, found trends involving violence against women, particularly young girls. These included the hashtag #kidnap – which had 28m results – under which boys would stage kidnappings and attacks on their girlfriends.
"Our algorithms discovered very troubling content on TikTok," said Zohar Levkovitz, chief executive of L1ght. "We are mostly worried by the fact that online violent trends could easily become real-life incidents, and set the wrong example for young generations."
Haley Halverson, vice-president of advocacy and outreach at the National Center on Sexual Exploitation, said that her non-profit organisation had received a growing number of calls over the past 18 months from school students and concerned parents complaining about the presence of hyper-sexualised content on the app, as well as instances of adults using it to try to groom minors.
"Within five to 10 minutes on TikTok, you can see they are not enforcing their community guidelines adequately," Ms Halverson said.
Some argued that the design of TikTok – as a video platform where users discover content from strangers, not just from a network of chosen friends – lends itself to new ways of spreading abuse and evading detection.
Video offers fewer keywords to monitor than text, making it more difficult for computers to automatically flag posts. There has also been evidence that some bad actors, including white supremacists, have been piggybacking on trending topics to spread their posts, according to Storyful’s Mr Davidson.
The duetting feature – where users respond to an existing video by recording themselves alongside it – can also be wielded for bullying or harassment, experts say. A user might hold a gun to the head of the person in the original post, for example.
Other viral “challenges” have led young children to mock schoolmates for their appearance or the way they sound, according to an analysis by Storyful.
Meanwhile, TikTok’s powerful recommendations algorithm can push users who inadvertently encounter disturbing content towards more of it. “Once someone has been exposed to one video, the likelihood they are shown more of that content is extremely high – it’s how the algorithm works,” Mr Davidson said.
Ms Halverson accused the platform of “prioritising profit over safety”. In particular, she said the process for reporting issues was cumbersome and inefficient and that, unlike on other big social media platforms, TikTok’s parental controls switched off automatically after a month and had to be reset manually.
“With TikTok what makes it particularly concerning is their lack of appropriate safety features,” she said. “While [harmful content] is a problem across the social media ecosystem, TikTok is socially irresponsible in the way they have addressed the problem.”
Mr Han said the company had a reporting process in line with peers. “In our current iteration of TikTok, we are a little over a year old,” he said. “We are doing what other platforms have been doing at this stage of their trajectory. There is a full commitment to user safety.” – Copyright The Financial Times Limited 2019