- cross-posted to:
- technology@lemmy.zip
Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts
When I saw this, two questions came to mind: How come this isn’t immediately reported? Why would anyone upload illegal material to a platform that tracks its users as thoroughly as Meta’s do?
The answer is:
The one question that came to mind upon reading this is: What?
I’m a little confused as to how it can still be AI CSAM if the bodies are voluptuous and the breasts are ample. Childlike faces have been the bread and butter of face filters for years.
Which parts specifically have to be childlike for it to be AI CSAM? This is why we need some laws ASAP.
Things that you want to understand but sure as fuck ain’t gonna Google.
My guess is that the algorithm is really good at predicting who is likely to follow that kind of content rather than report it. Basically, it flies under the radar purely because the only people who see it are the ones with a vested interest in it staying under the radar.
Look again. The explanation is that these images simply don’t look like any kind of CSAM. The whole story looks like some sort of scam to me.