November 14, 2024


‘Tech firms should check images before publication’

Image copyright: Getty Images

Internet companies must do more to tackle “an explosion” in images of child sex abuse on their platforms, a UK inquiry has concluded.

The panel also said the technology companies had “failed to demonstrate” they were fully aware of the number of under-13s using their services and lacked a plan to combat the problem.

It has called for all images to be screened before publication.

And it said more stringent age checks were also required.

‘Reputational damage’

Facebook, Instagram and Snapchat were identified as the most commonly cited apps where grooming was said to take place.

And the industry at large was accused of being “reactive rather than proactive” in response to the issues.

“Action seemed driven by a desire to avoid reputational damage rather than to prioritise protection of children,” the inquiry said.

The report follows a series of public hearings, between January 2018 and May 2019, during which the police said they believed the UK was the world’s third biggest consumer of live-streamed child sex abuse.

‘Evil crime’

Facebook was one of the first to respond.

“[We] have made huge investments in sophisticated solutions,” said its European head of safety, David Miles.

“As this is a global, industry-wide issue, we’ll continue to develop new technologies and work alongside law enforcement and specialist experts in child protection to keep children safe.”

Microsoft also promised to “consider these findings carefully”, while Google said it would keep working with others to “tackle this evil crime”.

Illegal imagery

The report said some steps should be taken before the end of September.

Leading its list is a requirement for screening before images appear online.

The report noted technologies such as Microsoft’s PhotoDNA had made it possible for pictures to be quickly checked against databases of known illegal imagery without humans needing to look at them.

But this filtering typically happened only after the material had already become available for others to see.
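In outline, this kind of screening is a fingerprint lookup: compute a hash of the uploaded image and block publication if it matches a database of known material. The Python sketch below illustrates the flow only; PhotoDNA’s perceptual hash is proprietary, so the SHA-256 digest, the placeholder database and the function names here are all illustrative assumptions.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images.
# PhotoDNA uses a proprietary perceptual hash that survives resizing
# and re-encoding; a plain SHA-256 digest is used here purely to
# show the matching flow.
KNOWN_HASHES: set[str] = {"placeholder-digest"}

def matches_known_imagery(image_bytes: bytes) -> bool:
    # Only the digest is compared; no human needs to view the image.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

def publish(image_bytes: bytes) -> str:
    # The check runs *before* publication, as the inquiry recommends,
    # rather than after the image is already visible to others.
    if matches_known_imagery(image_bytes):
        return "rejected: matched known illegal imagery"
    return "published"
```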

Officially banned

Users might be frustrated by a delay in seeing their content go live but, the panel said, it had not been told of any technical reason this process could not happen before publication.

The inquiry also said the UK government should introduce legislation to compel the companies involved to adopt more effective checks to deter under-age users.

Pre-teens were at “particularly acute” risk of being groomed, it said.

The panel recognised many services were officially off-limits to under-13s.

Child nudity

But it said in many cases, the only test was to require users to fill in a date-of-birth form, which could easily be falsified.

“There must be better means of ensuring compliance,” it said.
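In practice, such a gate reduces to the check sketched below, using a hypothetical `passes_age_gate` helper: because nothing ties the entered date to the real user, an under-age user passes simply by typing an earlier year.

```python
from datetime import date

def passes_age_gate(claimed_dob: date, minimum_age: int = 13) -> bool:
    # Self-declared date-of-birth check of the kind the report
    # criticises: the service has no way to verify `claimed_dob`.
    today = date.today()
    age = today.year - claimed_dob.year - (
        (today.month, today.day) < (claimed_dob.month, claimed_dob.day)
    )
    return age >= minimum_age

# A 10-year-old who enters a 1990 birth date sails through:
print(passes_age_gate(date(1990, 1, 1)))  # True
```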

The report acknowledged detecting and preventing the live-streaming of abuse was difficult but highlighted a French app as an example to learn from.

It said Yubo used algorithms to detect possible instances of child nudity, which a human moderator would then check to decide whether action was necessary.

Image copyright: Yubo
Image caption: It was suggested the big social networks could learn from a smaller rival, Yubo
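As a rough sketch of that two-stage design, the snippet below pairs an automated flagging step with a human decision. Yubo has not published its implementation, so the threshold, the classifier score and every name here are hypothetical.

```python
NUDITY_THRESHOLD = 0.8  # hypothetical confidence cut-off

flagged: list[tuple[str, float]] = []

def algorithmic_pass(stream_id: str, score: float) -> None:
    # Stage 1: an algorithm flags *possible* instances of child nudity.
    # `score` stands in for the output of an image classifier.
    if score >= NUDITY_THRESHOLD:
        flagged.append((stream_id, score))

def human_pass(action_needed) -> None:
    # Stage 2: a human moderator checks each flag and decides
    # whether action is necessary.
    for stream_id, score in flagged:
        if action_needed(stream_id, score):
            print(f"Action taken on stream {stream_id}")
    flagged.clear()
```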

The panel also noted existing anti-abuse technologies did not work when communications were protected by end-to-end encryption, which digitally scrambles communications without giving platform providers a key.

The inquiry highlighted that WhatsApp, as well as Apple’s iMessage and FaceTime, already used the technique by default, and that Facebook intended to deploy it more widely soon.
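A minimal Python sketch, using the `cryptography` package, shows why server-side scanning fails under end-to-end encryption: the platform relays only ciphertext and holds no key, so any fingerprint it computes is of scrambled bytes rather than of the image. Real messengers agree keys asymmetrically (WhatsApp uses the Signal protocol); the shared symmetric Fernet key here is a simplification.

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

# The two endpoints share this key; the platform never sees it.
key = Fernet.generate_key()
sender, recipient = Fernet(key), Fernet(key)

ciphertext = sender.encrypt(b"image bytes go here")

# Server-side, a hash-matching filter can only fingerprint the
# ciphertext, which will never match a database of known imagery.
server_side_digest = hashlib.sha256(ciphertext).hexdigest()

# Only the holder of the key recovers the original bytes.
assert recipient.decrypt(ciphertext) == b"image bytes go here"
```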

However, the inquiry did not say how this conflict should be addressed.
