• Darkassassin07@lemmy.ca · +19/−3 · 6 months ago

    …no

    That’d be like outlawing hammers because someone figured out they make a great murder weapon.

    Just because you can use a tool for crime doesn’t mean that tool was designed or intended for crime.

    • greentreerainfire@kbin.social · +2/−1 · 6 months ago

      That’d be like outlawing hammers because someone figured out they make a great murder weapon.

      Just because you can use a tool for crime doesn’t mean that tool was designed or intended for crime.

      Not exactly. This would be more akin to a company that will 3D print metal parts and assemble them for you. You use this service to have them create and assemble a gun for you, then you use that weapon in a violent crime. Should the company have known you were having them create an illegal weapon on your behalf?

      • FaceDeer@fedia.io · +12/−1 · 6 months ago

        The person who was charged was using Stable Diffusion to generate the images on their own computer, entirely with their own resources. So it’s akin to a company that sells 3D printers selling a printer to someone, who then uses it to build a gun.

    • Crismus@lemmy.world · +2/−2 · 6 months ago

      Sadly, that’s how most gun laws are designed. Book banning and anti-abortion laws are both examples of restricting a tool because of what a small minority choose to do with it.

      AI image generation shouldn’t be considered in obscenity laws. His distribution of pornography to a minor should be the issue, because not everyone stuck with that disease should be deprived of tools that can be used to keep them away from hurting others.

      Using AI images to increase charges should be wrong. A pedophile contacting children and distributing pornography to them should be all that it takes to charge a person. This will just set a new precedent that is beyond the scope of the judiciary.

    • xmunk@sh.itjust.works · +2/−7 · 6 months ago

      It would be more like outlawing ivory grand pianos because they require dead elephants to make - the AI models in question here were trained on abuse.

      • Darkassassin07@lemmy.ca · +6 · edited · 6 months ago

        A person (the arrested software engineer from the article) acquired a tool (a copy of Stable Diffusion, available on GitHub) and used it to commit a crime (trained it to generate CSAM + used it to generate CSAM).

        That has nothing to do with the developer of the AI, and everything to do with the person using it. (hence the arrest…)

        I stand by my analogy.

        • xmunk@sh.itjust.works · +1/−4 · 6 months ago

          Unfortunately the developer trained it on some CSAM, which I think means they’re not free of guilt - we really need to rebuild these models from the ground up to be free of that taint.

          • Darkassassin07@lemmy.ca · +5 · 6 months ago

            Reading that article:

            Given it’s a public dataset not owned or maintained by the developers of Stable Diffusion, I wouldn’t consider that their fault either.

            I think it’s reasonable to expect a dataset like that to have had screening measures preventing that kind of data from being imported in the first place. It shouldn’t be on the users of that data (here meaning the devs of Stable Diffusion) to ensure there’s no illegal content among the billions of images in a public dataset.

            That’s a different story now that users have been informed of the content within this particular dataset, but I don’t think it should have been assumed to be their responsibility from the beginning.
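
            A minimal sketch of the kind of screening I mean, assuming a vetted blocklist of known-bad image hashes exists (the `KNOWN_BAD_HASHES` set here is a hypothetical placeholder; real deployments use perceptual hashing such as PhotoDNA, which isn’t publicly available):

            ```python
            import hashlib
            from pathlib import Path

            # Hypothetical blocklist: in practice this would be populated from a
            # vetted hash list distributed to trusted partners, and a perceptual
            # hash would replace the exact SHA-256 used here for simplicity.
            KNOWN_BAD_HASHES: set[str] = set()

            def screen_dataset(image_dir: str) -> list[Path]:
                """Return the files whose hashes match the blocklist."""
                flagged = []
                for path in Path(image_dir).rglob("*"):
                    if path.is_file():
                        digest = hashlib.sha256(path.read_bytes()).hexdigest()
                        if digest in KNOWN_BAD_HASHES:
                            flagged.append(path)
                return flagged
            ```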

      • wandermind@sopuli.xyz · +4 · 6 months ago

        Sounds to me like it would be more like outlawing grand pianos because of all the dead elephants - while some people are claiming that it is possible to make a grand piano without killing elephants.

          • FaceDeer@fedia.io · +7/−1 · 6 months ago

            3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or it would have been caught earlier. I doubt it had any significant impact on the model’s capabilities.
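
            A quick sanity check of that figure:

            ```python
            # 3,226 suspected images out of a dataset of ~5.8 billion.
            suspected = 3_226
            total = 5_800_000_000
            print(f"{suspected / total:.7%}")  # 0.0000556%, i.e. about 0.00006%
            ```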

          • wandermind@sopuli.xyz · +1 · 6 months ago

            I know. So to confirm, you’re saying that you’re okay with AI generated CSAM as long as the training data for the model didn’t include any CSAM?

            • xmunk@sh.itjust.works · +1/−1 · 6 months ago

              No, I’m not - I still have ethical objections, and I don’t believe CSAM could be generated without some CSAM in the training set. I think it’s generally problematic to sexually fantasize about underage persons, though I know that’s an extremely unpopular opinion here.

              • wandermind@sopuli.xyz · +1/−1 · 6 months ago

                So why are you posting all over this thread about how CSAM was included in the training set, if in your opinion that’s ultimately irrelevant to the topic of the post and discussion: the morality of using AI to generate CSAM?

                • xmunk@sh.itjust.works · +1 · 6 months ago

                  Because there are claims all over this thread that AI CSAM can be generated without actual CSAM. We currently don’t have AI CSAM that is taint-free, and it’s unlikely we ever will, due to how generative AI works.

                  • wandermind@sopuli.xyz · +1 · 6 months ago

                    So at best we don’t know whether or not AI CSAM without CSAM training data is possible. “This AI used CSAM training data” is not an answer to that question. It is even less of an answer to the question “Should AI generated CSAM be illegal?” Just like “elephants get killed for their ivory” is not an answer to “should pianos be illegal?”

                    If your argument is that yes, all AI CSAM should be illegal whether or not the training used real CSAM, then argue that point. Whether or not any specific AI used CSAM to train is an irrelevant non sequitur. A lot of what you’re doing now is replying to “pencils should not be illegal just because some people write bad stuff” with the equivalent of “this one guy did some bad stuff before writing it down”. That is completely unrelated to the argument being made.

    • over_clox@lemmy.world · +4/−10 · 6 months ago

      That’s not the point. You don’t train a hammer on millions of user inputs.

      You gotta ask: if the AI can produce inappropriate material, then where did the developers get the training data, and what exactly did they train those AI models for?

      • Darkassassin07@lemmy.ca · +8/−1 · 6 months ago

        Do… Do you really think the creators/developers of Stable Diffusion (the AI art tool in question here) trained it on CSAM before distributing it to the public?

        Or are you arguing that we should be allowed to do what’s been done in the article? (arrest and charge the individual responsible for training their copy of an AI model to generate CSAM)

        One: AI image generators can and will spit out content vastly different from anything in the training dataset (this can, of course, be influenced greatly by user input). That output can be fed back into the training data to push the model towards the desired outcome; examples of the desired outcome are not required at all (i.e. you don’t have to feed it CSAM to get CSAM, you just have to consistently push it more and more towards that goal).
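
        A toy, numbers-only sketch of that feedback loop (obviously not a real image model, just the shape of the argument): a generator whose outputs are curated toward a target it was never shown can be walked there step by step.

        ```python
        import random

        # Toy "model": draws samples around a learned mean. The target value
        # never appears in any starting data; curation alone steers it there.
        mean, target = 0.0, 10.0

        for step in range(50):
            samples = [random.gauss(mean, 2.0) for _ in range(100)]     # generate
            kept = sorted(samples, key=lambda s: abs(s - target))[:10]  # curate
            mean = sum(kept) / len(kept)                                # retrain

        print(round(mean, 1))  # ends up near 10.0 without ever being given it
        ```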

        Two: anyone can host an AI model; it’s not reserved for big corporations and their server farms. You can host your own copy and train it however you’d like on whatever material you’ve got (that’s literally how Stable Diffusion is used). This kind of explicit material is being created by individuals using AI software they’ve downloaded/purchased/stolen and then trained themselves. They aren’t buying a ready-to-use CSAM generator off the open market… (nor are they getting this material from publicly operating AI models)
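
        To illustrate how low that barrier is, here’s a minimal local-generation sketch using Hugging Face’s `diffusers` library (a sketch under assumptions: a CUDA GPU and the public `runwayml/stable-diffusion-v1-5` checkpoint; not the setup from the article):

        ```python
        import torch
        from diffusers import StableDiffusionPipeline

        # Download a public checkpoint and run it entirely on local hardware;
        # no corporate server farm involved.
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,
        )
        pipe = pipe.to("cuda")

        # The tool itself is neutral: the output is whatever the user asks for.
        image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
        image.save("output.png")
        ```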

        They are acquiring a tool and moulding it into a weapon of their own volition.

        Some tools you can just use immediately, others have a setup process first. AI is just a tool, like a hammer. It can be used appropriately, or not. The developer isn’t responsible for how you decide to use it.