

Child sexual abuse is a real and growing danger.
📈 More than 2 million children are affected every year.

We are proposing new rules to prevent and combat child sexual abuse online. ⬇️

in reply to Gilbert Busana

Tech companies have the skills and technology to detect abuse, and they should be responsible for reporting it.

Our new proposal sets obligations for companies to detect and report the abuse of children, with strong safeguards guaranteeing privacy of all.
in reply to Gilbert Busana

I’m not sure that’s strictly true. Detecting abuse is INCREDIBLY difficult to do reliably. If you depend on automated “AI” systems, perpetrators will learn to get around them. If you use human moderators, there is too much content to manually review, and the mental health toll on moderators is devastating.

More needs to be done, but we can’t flippantly issue directives & then make them tech companies’ problem to implement. You must work *with* them.
in reply to Gilbert Busana

depending on automated “AI” works well, as the Netherlands learned: https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/ - now, imagine how much more awesome it will work for tougher crimes…
in reply to Gilbert Busana

I wonder if the @EU_Commission knows that # companies can plant undetectable # in these # that let them control any output without any observer being able to detect the manipulation.

Paper here: https://arxiv.org/abs/2204.06974

@wiredfire
in reply to Gilbert Busana

A new EU Centre to # online will help EU countries, companies and local authorities to:
🔹 Establish robust prevention measures
🔹 Ensure that offenders are brought to justice
🔹 Support victims

Learn more⮕ http://europa.eu/!4CDyNK
in reply to Gilbert Busana

Crimes don't occur "online", crimes are perpetrated by real offenders.

You can increase staffing and budget of specific police units and general prevention if you really think there's a problem, but policing by "AI" is a sure way to automatic false accusations, with fatal consequences you will be responsible for.

But I assume you've been told that a thousand times already, and have developed a thick skin instead of a conscience. Clowns
in reply to Gilbert Busana

Planting Undetectable # in # Models

https://arxiv.org/abs/2204.06974

@EU_Commission

Shamar reshared this.

in reply to Shamar

Great paper, I'm currently reading it!

It has even more far-reaching consequences than the inevitable accidental false positives that will *already* destroy the “AI detection” dreams 😃 It basically destroys the whole ML-models-as-a-service industry.

Or it would, if marketing didn't trump logic every single time ...
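The false-positive point above is the classic base-rate problem: when the thing you hunt for is vanishingly rare, even a very accurate classifier flags mostly innocent people. A back-of-the-envelope sketch (all volumes and rates below are assumed, illustrative numbers, not real figures):

```python
# Base-rate sketch: why "AI detection" at messaging scale drowns in
# false positives. Every number here is an assumption for illustration.

messages_per_day = 10_000_000_000  # assumed EU-wide daily message volume
abuse_rate = 1e-7                  # assumed fraction of abusive messages
true_positive_rate = 0.999         # assumed detector sensitivity
false_positive_rate = 0.001        # assumed detector false-alarm rate

abusive = messages_per_day * abuse_rate
benign = messages_per_day - abusive

true_alarms = abusive * true_positive_rate
false_alarms = benign * false_positive_rate

# Precision: of everything flagged, how much is actually abuse?
precision = true_alarms / (true_alarms + false_alarms)

print(f"flags per day: {true_alarms + false_alarms:,.0f}")
print(f"precision: {precision:.4%}")
```

Under these assumed numbers the detector raises roughly ten million flags a day, and well under 1% of them point at real abuse; the rest are false accusations against innocent users.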
in reply to Gilbert Busana

Except they don't, because their algorithms rely on known content that is spread by abusers; they won't detect any new content, which might be shared through other channels to begin with.

Please cease this.

The only thing those algorithms are good for is opening backdoors for third parties to spy on our chat logs. Pretty much the European version of Pegasus, but legal and implemented as a feature in all IM software.

The best way to prevent child abuse is good parenting, instead of parents giving phones with internet access to 5yo children who might become prey to predators on the internet.
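The “known content” matching described above is hash-based: files are compared against a database of previously catalogued material. A minimal sketch of the principle (using SHA-256 as a stand-in for the perceptual hashes real systems like PhotoDNA use; the data here is placeholder text, not real content):

```python
import hashlib

# Hash-based known-content matching, sketched with SHA-256.
# Real deployments use perceptual hashes (e.g. PhotoDNA) so that
# re-encoded copies still match, but the core limitation is the
# same: only material already in the database can ever be flagged.

known_hashes = {hashlib.sha256(b"previously catalogued file").hexdigest()}

def is_flagged(data: bytes) -> bool:
    """Return True only if this exact content is already catalogued."""
    return hashlib.sha256(data).hexdigest() in known_hashes

assert is_flagged(b"previously catalogued file")  # known content: detected
assert not is_flagged(b"brand-new content")       # novel content: invisible
```

By construction, freshly produced material has no hash in the database, so this approach cannot catch it, which is the point the post is making.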
in reply to Gilbert Busana

Not just to spy, but to create false accusations against any #, # or annoying # out there.

Planting Undetectable # in # Models

https://arxiv.org/abs/2204.06974

And if you think this is paranoid, read something about #.

@EU_Commission
in reply to Gilbert Busana

it's not tech companies that should act as the police, and the EU should invest in its democratic police system rather than privatizing it like this.

surveillance state is not the solution, and you can't protect privacy while at the same time watching everybody.
# is wrong!
