The Internet Watch Foundation has found a manual on the dark web encouraging criminals to use software tools that remove clothing. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.
Cue the AI apologists trying to explain how AI child porn is a safe, victim-free outlet for pedophiles to indulge in their mental illness.
AI apologists? AI and pedophilia are very different things.
Yes, people who will defend AI in any arena and reject any criticism or examples of harm that it produces. They do this to try to control any narrative or discussion that may lead to regulation. Classifying AI CP as harmful would necessitate action that restricts AI, which AI apologists are very much against. Lemmy is unfortunately overrun with them.
Restricting AI will only kill the open source scene and make all AI products subscription-based. Since we are moving quickly to an AI-driven society, this would hand our whole economy to Google and Microsoft.
Some of us understand what’s at stake.
The individuals doing such actions should absolutely be prosecuted, it needs to be illegal to make deep fakes of someone, triply so when it’s used to extort that person.
But if you catch someone drunk driving, you prosecute him for drunk driving, you don’t ban cars.
But obviously, if someone says "think of the children," you should always mindlessly give up whatever freedoms they are asking you to.
If cars are routinely crashing and sending people through the windshield you require seatbelts.
You’re doing exactly what I’m referring to.
Except you’re not trying to ask for seatbelts. You’re arguing we get rid of the cars.
Ai being the vessel for the problem which is cyber extortion.
You handle the extortion bit by making seatbelts. Not seatbelts that auto-buckle. Not cars that don't start without one. But by providing the safeguards to the people, who can then make the decision to wear them, and by punishing those who put others at risk through their misuse.
You don't ban alcohol because of alcoholics. You punish those who refuse to use it safely and appropriately and, most of all, those who put others at risk.
That’s freedom. That’s the American way. Not anything else.
No I’m not. You want me to be saying that because it makes it easier for you to make your argument, but that wasn’t what I suggested anywhere.
No I don't. You want me to think that because it makes it easier to be aggressive towards me.
I’ve obviously misunderstood you, so I’m sorry about that. I should’ve led with questions instead of assumptions and that’s on me.
I think any mature adult who's for AI knows that some safeguards and changes are necessary, just like they are for any new invention.
There are no seatbelts. It's either cars or only public transport.
Can you explain what's wrong with what I said, instead of saying "you are one of those who is against restrictive regulations, therefore you are wrong"?
We should be very vocal about it; OpenAI and their friends are. They have lobbyists in Washington trying to convince the government that AI is too dangerous for people to have free access to it. They are using the media to disseminate hate and trigger people's emotional response.
So you’re essentially (and falsely) asserting that there is no way to regulate AI without eliminating it completely. Do you understand how insane and reckless that sounds?
Should AI be able to give instructions on building a bomb as well because to not do so “sTiFlEs iNnOvAtIoN”?
If people are going to train AI they have an obligation to ensure it’s not producing harm and there should be consequences to those who design and train AI in a reckless or harmful way.
Yes, you should be restricted from creating a child porn generator.
I think the question is: should we have designed the internet such as to have made it impossible to find bomb plans on it? And to be honest, I don’t think the internet would be what it is if it were possible to have that level of filtering and censorship. Child porn is reprehensible in any form. To me, it makes more sense to blame the moron with the hammer than to blame the hammer.
What you are asking for is equivalent to stopping people from writing literotica about children using Word.
Nobody is advocating for child literotica or defending it, but most understand that it would take draconian measures to stop it. Word would have to be entirely online, and everything written would have to pass through a filter to verify it isn't something illegal.
By its very nature, it's very difficult to remove such things from generative models, although there is one solution I can think of, which would be to take children completely out of the models.
The problem is this isn't the solution being proposed; sadly, all currently proposed legislation is meant to do one thing, and that is to create and cement a monopoly around AI.
I'm ready to tackle all issues involving AI, but the main current issue is a handful of companies trying to rip it out of our hands and playing on people's emotions to do so. Once that's done, we can take care of the 0.01% of users who are generating CP.
Pedophile apologists*
Nobody interested in the development of AI would be interested in defending pedos, unless they're pedos themselves. That's reality.
Why lump the two groups together?
In fact, AI is used by these orgs to prevent workers from having to look at these images themselves, which is partially why burnout among mods, admins, and content-filter workers is so high.
Every time some nasty shit (pedo shit, gore, etc.) is posted on Tumblr, Facebook, Instagram, etc., those reports go through real people (or did prior to these AI models). Now imagine smaller, upcoming websites like Lemmy instances that might not have the funds or don't know of this AI solution.
AI fixes problems too. The root of the problem is cyber extortion, whether that means the criminals are photoshopping or using AI. They're targeting children, for Christ's sake; besides that being fucked up all by itself, it's not hard to fool a child, AI or not. How criminals are contacting and blackmailing YOUR CHILDREN is the problem, imo.
Because it's very hard to make a "think of the children" argument out of this without lumping them together.
Hadn’t thought about it that way!
I mean, I agree with you, they are everywhere, even on Lemmy.
Use of child porn is never victimless. Not even if it’s fully AI generated.
Who is a victim in that case?
It enables pedophilia.
But who is a victim here
All of the children who end up getting abused because the pedophile escalates their fantasies.
A society that tolerates pedophilia on any level is morally guilty.
That’s a reach.
By that logic, you could blame, say, video games for shootings.