WhatsApp fails to curb sharing of child sex abuse videos

Videos and pictures openly shared in groups of up to 256 people, researchers find

A spokesman for WhatsApp said it “has a zero-tolerance policy around child sexual abuse”. Photograph: Dado Ruvic/Reuters

Videos and pictures of children being subjected to sexual abuse are being openly shared on Facebook’s WhatsApp on a vast scale, with the encrypted messaging service failing to curb the problem despite banning thousands of accounts every day.

The revelations emerged after Israeli researchers warned Facebook, the owner of WhatsApp, in September that it was easy to find and join group chats where, in some instances, up to 256 people were sharing sexual images and videos of children.

These groups were monitored and documented for months by two Israeli charities dedicated to online safety, Netivei Reshet and Screensaverz. Their purpose was often obvious from names such as “cp” and from the explicit images used as their profile photos.

Such identifiers were not encrypted and were publicly viewable, so as to advertise the illegal content, yet the systems WhatsApp said it had in place failed to detect them.


A review of the groups by the Financial Times quickly found several that were still extremely active this week, long after WhatsApp was warned about the problem by the researchers.

"It is a disaster: this sort of material was once mostly found on the darknet, but now it's on WhatsApp," said Netivei Reshet's Yona Pressburger, referring to the parts of the internet that are purposefully hidden from normal search engines and that criminals use to cloak their activities.

A spokesman for WhatsApp said it “has a zero-tolerance policy around child sexual abuse” and “actively bans accounts suspected of sharing this vile content”.

The messaging app also said it actively scanned WhatsApp group names and profile photos in an attempt to identify people sharing such illegal material. Such techniques led WhatsApp to ban approximately 130,000 accounts in the last 10 days, out of its user base of about 1.5 billion.

Political pressure

But the NGOs’ findings illustrate a bigger problem: WhatsApp’s end-to-end encryption, designed to protect privacy, means that the company cannot see the content of the messages users send, making it harder to monitor when child abuse imagery is shared. It can also hamper law enforcement efforts to uncover illegal activity.

With users of encrypted messaging services such as WhatsApp, Apple’s iMessage, Telegram and Signal now numbering in the billions, political pressure has mounted in the US and UK for companies to grant access to criminal investigators.

WhatsApp, which Facebook bought in 2014 for $22 billion, finished rolling out end-to-end encryption for messages in 2016.

As a result, even if Facebook wanted to, it could not apply the same tools it uses to remove illegal images and text from its main social networking site and the photo-sharing site Instagram, which it also owns. On those services, software automatically searches for keywords and images of nudity, pornography and violence. Facebook also employs 20,000 content moderators, often low-paid contractors, who review posts manually.

By contrast, WhatsApp has only 300 employees in total, and far fewer resources dedicated to monitoring for illegal activity.

Even so, Hany Farid, a professor of computer science at Berkeley who developed the PhotoDNA system used by more than 150 companies to detect child abuse imagery online, said Facebook could do more to get rid of illegal content on WhatsApp.

“Crimes against children are getting worse and worse, the kids are getting younger and younger and the acts are getting more violent. It’s all being fuelled by these platforms,” he said.

“The problem is deep rooted in these companies. It’s the move fast and break things model.”

Law enforcement officials have noted a change in how paedophiles are using technology to mask their activities.

"We are seeing an uptick in the use of encrypted messaging apps on the offender side, and it poses significant issues for law enforcement in terms of traceability and visibility," said Cathal Delaney, who leads the team combating online child sexual abuse at Europol.

Operation Tantalio

WhatsApp was at the centre of a 2017 child abuse investigation led by Spanish police dubbed Operation Tantalio that led to the arrest of 38 people in 15 countries. The investigation began in 2016 when Spanish investigators identified dozens of WhatsApp groups that were circulating child sexual exploitation materials. They then traced the mobile phone numbers used to identify individuals involved, as well as those suspected of producing the material.

As successful as Operation Tantalio appeared, law enforcement globally has struggled to stem the tide of child sexual abuse materials online.

The Israeli NGOs stumbled on to WhatsApp’s problem in August after a young man called their hotline to report being exposed to pornography on the messaging service.

On the Google Play store for Android smartphones, there are dozens of free apps that collect links to WhatsApp groups. Via such apps, the NGOs found extensive child abuse material and began to record what they saw.

“We went to great lengths to document as broadly as possible to prove this is not some minor activity,” wrote the NGOs in a report. “The total time spent on checking this was approximately 20 days, from several different devices. During this period we continuously monitored 10 active groups, and dozens more for short intervals.”

The NGOs emailed Jordana Cutler, Facebook's head of policy in Israel, in early September to warn the company about their findings. They asked to meet with Facebook four times, according to emails seen by the Financial Times, but Ms Cutler did not respond to the requests.

Instead, she sent a reply asking for the evidence, which totalled several gigabytes of data and more than 800 videos and images, so that it could be sent to teams outside the country for investigation. She also asked the NGOs if they had gone to the police, and suggested that working with the police would be “the most effective way”. Ms Cutler did not pass on the NGOs’ warnings to WhatsApp.

The NGOs declined to send over links because they wanted to meet with Facebook directly to discuss what they saw as a broad and persistent problem. They also took their evidence to the Israeli police and filed a complaint, and contacted a member of the Israeli parliament who oversees a committee on children’s safety.

Frustrated by what they saw as inaction on Facebook’s part, the NGOs eventually compiled a 14-page report of their findings and brought it to the Financial Times via an intermediary, Zohar Levkovitz, a well-known Israeli technology executive.

Mr Levkovitz recently founded a start-up called AntiToxin Technologies, which is working to develop tools to protect children online. There is no financial relationship between his company and the NGOs, although Mr Levkovitz's company does have an interest in calling attention to safety issues raised by children's use of technology.

“Ensuring child safety online requires immediate action,” said the entrepreneur.

Correct procedure

For its part, Facebook said Ms Cutler had followed correct procedure in dealing with the NGOs’ information. “We offered to work together with police in Israel to launch an investigation to stop this abuse,” a spokeswoman said.

The Financial Times was able to verify and corroborate the NGOs’ main finding that it is easy to find WhatsApp groups that are being used to exchange child abuse materials.

For example, a group chat titled “Kids boy gay” was active earlier this week, with 256 people using phone numbers from countries including India, Pakistan, Algeria and the US. In it, people were sharing photos and videos and making specific requests for “cp videos”.

WhatsApp shut down the group after being contacted by the FT and banned all of its participants.

Asked why Google allowed apps aggregating links promoting sexual images of children to remain on the Google Play store, a Google spokesperson said: “If we identify an app promoting this kind of material that our systems haven’t already blocked, we report it to the relevant authorities and remove it from our platform.”

Technology experts suggested that WhatsApp could disable encryption on groups above a certain size, so as to monitor the content being shared there.

Professor Farid said the company could also implement a different, weaker kind of encryption that would allow it to search WhatsApp chats for known child abuse images catalogued in the PhotoDNA system, which he developed with Microsoft a decade ago. The system computes a signature for every image uploaded to a site and checks it against a database of millions of known child abuse images managed by the National Center for Missing and Exploited Children in the US.
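PhotoDNA itself is proprietary, but the hash-and-compare idea it relies on can be illustrated with a simple, openly documented “difference hash” standing in for the real signature algorithm. The Python sketch below is illustrative only: it assumes the Pillow imaging library is installed, and the values in the signature database are placeholders, not real data.

```python
# Illustrative sketch: a simple "difference hash" (dHash) stands in for
# PhotoDNA's proprietary signature, to show the general matching idea.
# Assumes the Pillow imaging library is installed (pip install Pillow).
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Compute a 64-bit perceptual hash by comparing adjacent pixels."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | int(left < right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two signatures differ."""
    return bin(a ^ b).count("1")

# In a real deployment the signatures would come from a curated database
# such as NCMEC's; these values are placeholders for illustration.
KNOWN_SIGNATURES = {0x8F3B62D41A7C90E5, 0x1234ABCD5678EF00}

def is_known(path: str, threshold: int = 10) -> bool:
    """Flag an uploaded image whose signature is near any known one."""
    sig = dhash(path)
    return any(hamming(sig, known) <= threshold for known in KNOWN_SIGNATURES)
```

A perceptual hash of this kind tolerates small edits such as resizing or recompression, which is why matching is done by distance threshold rather than exact equality.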

If WhatsApp used partially homomorphic encryption, it could scan images as they are uploaded and check for matches without fully decrypting users’ content. Professor Farid said it is “incredibly frustrating” that the technology companies had not developed ways to improve PhotoDNA or adapt it for encrypted environments or the dark web.
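As a rough sketch of how an additively (partially) homomorphic scheme could support such matching, the toy example below computes an encrypted Hamming distance between a client’s encrypted signature and a known one, so the comparing party never sees the raw bits. It assumes the open-source python-paillier library (pip install phe); the short 16-bit signatures and the single key holder who decrypts the distance are simplifications for illustration, not a deployable protocol.

```python
# Toy sketch of partially homomorphic matching, not a deployable protocol.
# The server computes an encrypted Hamming distance between a client's
# encrypted signature and a known signature without seeing the client's bits.
from phe import paillier  # python-paillier: pip install phe

BITS = 16  # real perceptual hashes are longer; shortened to keep this fast

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def to_bits(value: int) -> list:
    """Split an integer signature into its low BITS bits, LSB first."""
    return [(value >> i) & 1 for i in range(BITS)]

# Client side: encrypt each bit of the uploaded image's signature.
client_signature = 0b1011001110001111
encrypted_bits = [public_key.encrypt(b) for b in to_bits(client_signature)]

# Server side: for a plaintext bit k, XOR(b, k) = b + k - 2*b*k, which an
# additive scheme evaluates on ciphertext as Enc(b)*(1 - 2k) + k. Summing
# the per-bit XORs yields an encrypted Hamming distance.
known_signature = 0b1011001010001101
encrypted_distance = public_key.encrypt(0)
for enc_b, k in zip(encrypted_bits, to_bits(known_signature)):
    encrypted_distance += enc_b * (1 - 2 * k) + k

# Only the key holder learns the distance, never the signature itself.
print("Hamming distance:", private_key.decrypt(encrypted_distance))  # 2
```

Even in this toy form, the trade-off the experts describe is visible: whoever holds the decryption key learns something about users’ content, which is precisely what end-to-end encryption is designed to prevent.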

“You would think we could all just get behind this,” he said. “The companies have done essentially nothing.” – Copyright The Financial Times Limited 2018