No, that’s why content moderation employing a braindead AI and half-baked algorithms is a bad idea. If (and that’s a GIGANTIC if) it were possible to have real humans checking stuff without destroying privacy, aggressive content moderation would be fine. Of course, I’m ignoring stuff like censorship, thought policing, and government-backed oppression, as that’s a whole other factory of cans of worms.
Not necessarily AI, but it needs to be handled automatically by software; it’s far too much for human moderation to be the norm unless you assume masses of completely unpaid labor.
Content moderation automatically implies AI. There is too much content online for humans to moderate.
Yes, that’s what I meant.