I think that’s a bit of a stretch. If it was being marketed as “make your fantasy, no matter how illegal it is,” then yeah. But just because I use a tool someone else made doesn’t mean they should be held liable.
Check my other comments. My thought was compared to a hammer.
Hammers aren’t trained to act or respond on their own from millions of user inputs.
Image AIs also don’t act or respond on their own. You have to prompt them.
And if I prompted AI for something inappropriate, and it gave me a relevant image, then that means the AI had inappropriate material in its training data.
No, you keep repeating this but it remains untrue no matter how many times you say it. An image generator is able to create novel images that are not directly taken from its training data. That’s the whole point of image AIs.
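To make that concrete: a text-to-image model samples every output from random noise, conditioned on your prompt; there’s no database of images being searched. Here’s a minimal sketch of what a generation call looks like, assuming the open-source diffusers library, a CUDA GPU, and the commonly used public Stable Diffusion v1.5 checkpoint (all assumptions for illustration, not anything from this thread):

```python
# Minimal sketch: text-to-image generation with Hugging Face diffusers.
# Assumes a CUDA GPU and the public Stable Diffusion v1.5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt combines concepts the model learned separately; the image
# is denoised out of random noise, not retrieved from the training set.
image = pipe(
    "an astronaut riding a horse on the moon, oil painting",
    num_inference_steps=30,
).images[0]
image.save("novel_composition.png")
```

Run it twice and you get two different images of a scene that almost certainly never existed as a single training photo.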
I just want to clarify that you’ve bought the Silicon Valley hype for AI, but that is very much not the truth. It can create nothing novel - it can merely combine concepts, themes, and styles in an incredibly complex manner… but it can never create anything novel.
What it’s able and intended to do is beside the point, if it’s also capable of generating inappropriate material.
Let me spell it out more clearly. AI wouldn’t know what a pussy looked like if it had never been exposed to that sort of data set. It wouldn’t know other inappropriate things if it hadn’t been exposed to those data sets either.
Do you see where I’m going with this? AI only knows what people allow it to learn…
You realize that there are perfectly legal photographs of female genitals out there? I’ve heard it’s actually a rather popular photography subject on the Internet.
Yes, but the point here is that the AI doesn’t need to learn from any actually illegal images. You can train it on perfectly legal images of adults in pornographic situations, and also perfectly legal images of children in non-pornographic situations, and then when you ask it to generate child porn it has all the concepts it needs to generate novel images of child porn for you. The fact that it’s capable of that does not in any way imply that the trainers fed it child porn in the training set, or had any intention of it being used in that specific way.
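If you want to see the “combining concepts” part in miniature, here’s a toy sketch using a CLIP text encoder, the kind of encoder these image generators are conditioned on (assuming the transformers library and OpenAI’s public checkpoint; the phrases are made up purely for illustration):

```python
# Toy sketch: a CLIP text encoder embeds a phrase it has likely never
# seen verbatim by composing concepts it learned separately.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

phrases = ["a dog", "an astronaut", "an astronaut walking a dog on the moon"]
inputs = tokenizer(phrases, padding=True, return_tensors="pt")
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize for cosine similarity

# The composite phrase lands close to both component concepts, even if
# that exact combination never appeared anywhere in the training data.
print("dog vs. composite:      ", float(emb[0] @ emb[2]))
print("astronaut vs. composite:", float(emb[1] @ emb[2]))
```

The image model then denoises toward whatever that composite embedding describes, which is exactly how it can depict combinations nobody ever photographed.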
As others have analogized in this thread, if you murder someone with a hammer that doesn’t make the people who manufactured the hammer guilty of anything. Hammers are perfectly legal. It’s how you used it that is illegal.
Yes, I get all that, duh. Did you read the original post title? CSAM?
I thought you could catch a clue when I said “inappropriate.”
Yes. You’re saying that the AI trainers must have had CSAM in their training data in order to produce an AI that is able to generate CSAM. That’s simply not the case.
You also implied earlier on that these AIs “act or respond on their own”, which is also not true. They only generate images when prompted to by a user.
The fact that an AI is able to generate inappropriate material just means it’s a versatile tool.
I learned how to write by reading. The AI did the same, more or less, no?
The AI didn’t learn to draw or generate photos from words alone, though…
Oh, it learned from art? Like how human artists learn?
AI hasn’t exactly kicked out a Picasso with a naked young girl missing an ear yet, has it?
I sure hope not!
But if it can, then that seriously indicates there must be some bad training data in the system…
I won’t be testing these hypotheses.
It in fact does have bad training data! https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
Thank you for posting a relevant link. It’s disappointing that such data is any part of any public AI systems… ☹️