Hi
I am a computer science student and am just starting my master's thesis. My focus will be on content moderation algorithms, so I am currently exploring how various social media platforms moderate content.
If I understand the docs correctly, content moderation on Mastodon is all manual labor? I haven't read anything about automatic detection of Child Sexual Abuse Material (CSAM), for example, which most centralised platforms seem to do.
Another question along the same lines concerns reposting of already-moderated content, for example a racist meme that was posted before. Are there any measures in place to detect this?
Thank you for your help!
Lemmy (a Fediverse alternative to Reddit) does have a community-driven tool, Fedi Safety, for detecting and deleting CSAM. Some instances use it, but I don't have any statistics on that.
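On the repost question: centralised platforms typically catch re-uploads of previously removed media by hash matching (e.g. PhotoDNA-style perceptual hashes for known CSAM). Below is a minimal sketch of that general idea in Python using the imagehash library. To be clear, this is only an illustration of the technique; neither Mastodon nor Lemmy ships this code, and the blocklist file, threshold, and function names are hypothetical.

```python
# Sketch: flag re-uploads of already-moderated images by comparing a
# perceptual hash of each new upload against a blocklist of hashes taken
# from previously removed content. Illustration only; file name and
# threshold below are made up for the example.

from PIL import Image
import imagehash

HASH_DISTANCE_THRESHOLD = 5  # hypothetical tolerance for near-duplicates


def load_blocklist(path: str) -> list[imagehash.ImageHash]:
    """Read one hex-encoded perceptual hash per line."""
    with open(path) as f:
        return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]


def is_known_bad(image_path: str, blocklist: list[imagehash.ImageHash]) -> bool:
    """True if the image is a (near-)duplicate of previously removed content."""
    h = imagehash.phash(Image.open(image_path))
    # ImageHash subtraction returns the Hamming distance between the two hashes.
    return any(h - known <= HASH_DISTANCE_THRESHOLD for known in blocklist)


if __name__ == "__main__":
    blocklist = load_blocklist("removed_content_hashes.txt")
    print(is_known_bad("new_upload.jpg", blocklist))
```

The perceptual hash makes the check robust to small edits (recompression, resizing, minor crops), which is why it is preferred over plain file checksums for detecting reposted memes.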