• verassol@lemmy.ml · 8 months ago

    StackOverflow: *grabs money by monetizing massive amounts of user-contributed content, without consulting or compensating the users in any way*

    Users: *try to delete it all to prevent it*

    StackOverflow: *your contributions belong to the community, you can’t do that*

    Pretty fucked-up laws. A lot of lawsuits are going on right now against AI companies over similar issues. In this case, StackOverflow is entitled to be compensated for its partnership, and because the answers are all CC BY-SA 3.0, no one can complain. Now, that SA (ShareAlike) part? Whatever, apparently.

    • 9point6@lemmy.world · 8 months ago

      That SA part needs to be tested in court against the AI models themselves.

      A lot of this shittiness would probably go away if there were a risk that ingesting certain content meant you had to release the actual model to the public.

      • verassol@lemmy.ml · 8 months ago (edited)

        Yeah, though their assumption is that you don’t? Neither attribution nor ShareAlike, nor even full-on all-rights-reserved copyright, is being respected. Anything public goes, and if questions are asked, it’s “fair use”. If users retain CC BY-SA over their content, why does giving a bunch of money to StackOverflow entitle OpenAI to use it all under whatever terms the two of them settled on? Boggles me.

        Now, say, Reddit’s Terms of Service state clearly that by submitting content you grant them “a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness (…) in all media formats and channels now known or later developed anywhere in the world.” Speaks volumes about why alternatives to these platforms (like Lemmy) matter.

        • Skull giver@popplesburger.hilciferous.nl · 8 months ago

          The funny thing about Lemmy is that the entire Fediverse is basically running a massive copyright violation ring under current copyright law. The license bit every web company has in its terms exists because Facebook wouldn’t otherwise have the right to show your holiday pictures to your grandma. The pictures are your property, and just because you uploaded them doesn’t mean Facebook has the right to redistribute them. Cropping off the top and bottom to fit one into the timeline? That’s a derivative work; they’d need to ask permission or negotiate a license to show that!

          The Fediverse runs without any such clauses and just presumes nobody cares about copyright. Which they don’t, because the whole thing is based on forwarding all data to everyone.

          Nobody is going to sue a Lemmy server for sending their comment to someone else, because there’s no money behind any of the servers. Companies like Facebook need to get their shit together, though, because they have large pools of investor money that any shithead with a good lawyer can try to claim, and that’s why they have legal disclaimers.

          • verassol@lemmy.ml · 8 months ago

            That’s interesting. I was looking up “Lemmy Terms of Service” for comparison after pulling that quote from the Reddit ToS, and could not find anything for Lemmy.ml. Now that you mention it, looking at my Mastodon instance: nothing there either, just a privacy policy. That is indeed kinda weird. Some instances do have their own ToS, though. At the very least, something granting a sublicense for distribution should be there to protect people running instances in jurisdictions where it’s relevant.

            • Skull giver@popplesburger.hilciferous.nl · 8 months ago

              The thing with many of these services is that they’re not run by companies with a legal presence, just by some guy(s) doing it for fun. Many laws treat personal projects differently from business or organisational endeavours.

              It’s the same thing with personal blogs lacking a privacy policy: the probability of the thing becoming an actual problem in the real world is so abysmally low that nobody bothers, and that’s probably okay.

              During the first wave of some troll uploading child abuse to various Fediverse servers (mostly Lemmy), a lot of server operators got a rough wake-up call, because suddenly they had content on their servers that could land them in prison. There has been an effort to combat this abuse for larger servers, but I don’t think most Lemmy servers run on the Nvidia hardware that’s strong enough to support the live CSAM detection code that was developed.
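
              (For illustration, a minimal sketch of the simpler hash-matching idea that PhotoDNA-style services use, as opposed to the GPU classifier mentioned above; the known-bad hash, file path, and threshold below are all made up:)

              ```python
              # Toy perceptual matcher: compare an upload's 8x8 average-hash
              # against a list of known-bad hashes. Real moderation tools use
              # vendor hash databases or ML classifiers; this just shows the idea.
              from PIL import Image  # pip install Pillow

              def average_hash(path: str) -> int:
                  """64-bit fingerprint: one bit per pixel of an 8x8 grayscale
                  thumbnail, set when the pixel is brighter than the mean."""
                  img = Image.open(path).convert("L").resize((8, 8))
                  pixels = list(img.getdata())
                  mean = sum(pixels) / len(pixels)
                  bits = 0
                  for p in pixels:
                      bits = (bits << 1) | int(p > mean)
                  return bits

              def hamming(a: int, b: int) -> int:
                  return bin(a ^ b).count("1")

              KNOWN_BAD = {0x1234567890ABCDEF}  # would come from a shared database

              def should_block(path: str, threshold: int = 5) -> bool:
                  h = average_hash(path)
                  return any(hamming(h, bad) <= threshold for bad in KNOWN_BAD)
              ```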

          • hedgehog@ttrpg.network · 8 months ago

            The funny thing about Lemmy is that the entire Fediverse is basically running a massive copyright violation ring under current copyright law.

            Is it, though?

            When someone posts a comment to Lemmy, they do so willingly, with the intent for it to be posted and federated. If they change their mind, they can delete it. If they delete it and it remains up somewhere, they can submit a DMCA request; likewise if someone else posts their copyrighted content.

            Copyright infringement is the use of a copyright-protected work without permission. When you submit a post or a comment, your permission for it to be displayed and federated is implied, because that is how Lemmy works. A license also conveys permission, but it’s not the only way permission can be conveyed.
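
            To make that concrete: deletion is a first-class part of the protocol. An ActivityPub server federates a removal by sending a Delete activity that replaces the object with a Tombstone; a rough sketch (the URLs are invented, and real delivery also requires HTTP-signature authentication, omitted here):

            ```python
            # Sketch of how a Fediverse server announces a deletion to its
            # peers: a Delete activity wrapping a Tombstone for the comment.
            # URLs are hypothetical; HTTP-signature auth is omitted.
            import json
            import requests  # pip install requests

            delete_activity = {
                "@context": "https://www.w3.org/ns/activitystreams",
                "id": "https://example.social/activities/123#delete",
                "type": "Delete",
                "actor": "https://example.social/u/alice",
                "to": ["https://www.w3.org/ns/activitystreams#Public"],
                "object": {
                    "type": "Tombstone",  # all that remains of the comment
                    "id": "https://example.social/comment/123",
                },
            }

            # Fan it out to every peer instance that received the comment.
            for inbox in ["https://other.instance/inbox"]:  # hypothetical peer
                requests.post(
                    inbox,
                    data=json.dumps(delete_activity),
                    headers={"Content-Type": "application/activity+json"},
                    timeout=10,
                )
            ```

            Whether a remote server actually honours the Tombstone is up to that server, which is exactly the gap the DMCA route covers.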

            • Skull giver@popplesburger.hilciferous.nl · 8 months ago

              The idea that someone does this willingly implies that the user knows the implications of their choice, which most of the Fediverse doesn’t seem to (see: people asking questions like “how do I delete comments on a server I’ve been defederated from”, or being surprised to find out that their likes/boosts are inherently public).

              If the implied license was enough, Facebook and all the other companies wouldn’t put these disclaimers in their terms of service. This isn’t true in every jurisdiction, but it does apply to many important ones.

              I agree that international copyright law should work like you imply, but on the other hand, this is exactly why Creative Commons was created: stuff posted on the internet can be downloaded just fine, but rehosting it is not allowed by default.

              This is also why I appreciate the people who put those Creative Commons licenses on their comments; they’re effectively useless against AI, which is what they seem to be trying to combat, but they do provide rights that would otherwise be unavailable.

              Just like with privacy laws and data hosting laws, I don’t think the fediverse cares. I think the Fediverse is full of a sort of wilful ignorance about internet law, mostly because it’s just a bunch of enthusiastic nerds. No Fediverse server (except maybe Threads) has a Data Protection Officer, even though sites like lemmy.world would legally require one if they cared about the law; very little Fediverse software seems to provide DMCA links by default; and I don’t think any server complies at all with the Chinese, Russian, and European “only store citizens’ data on locally hosted servers” laws.

              • hedgehog@ttrpg.network · 8 months ago

                The idea that someone does this willingly implies that the user knows the implications of their choice, which most of the Fediverse doesn’t seem to

                The terms of service for lemmy.world, which you must agree to upon sign-up, make reference to federating. If you don’t know what that means, it’s your responsibility to look it up and understand it. I assume other instances have similar sign-up processes. The source code to Lemmy is also available, meaning that a full understanding is available to anyone willing to take the time to read through the code, unlike with most social media companies.

                What sorts of implications of the choice to post to Lemmy do you think that people don’t understand, that people who post to Facebook do understand?

                If the implied license was enough, Facebook and all the other companies wouldn’t put these disclaimers in their terms of service.

                It’s not an implied license. It’s implied permission. And if you post content to a website that hosts and displays such content, it’s obvious what’s about to happen with it. Please try telling a judge that you didn’t understand what you were doing and that you sued without first trying to delete the content or file a DMCA notice, and see if that judge sides with you.

                Many companies have lengthy terms of service with a ton of CYA legalese that does nothing. Even so, an explicit license to your content in the terms of service does do something - but that doesn’t mean that you’re infringing copyright without it. If my artist friend asks me to take her art piece to a copy shop and to get a hundred prints made for her, I’m not infringing copyright then, either, nor is the copy shop. If I did that without permission, on the other hand, I would be. If her lawyer got wind of this and filed a suit against me without checking with her and I showed the judge the text saying “Hey hedgehog, could you do me a favor and…,” what do you think he’d say?

                Besides, Facebook does things that Lemmy instances don’t do. Facebook’s codebase isn’t open, and they’d like to reserve the ability to do different things with the content you submit. Facebook wants to be able to do non-obvious things with your content. Facebook is incorporated in California and has a value in the hundreds of billions, but Lemmy instances are located all over the world and I doubt any have a value even in the millions.

    • Skull giver@popplesburger.hilciferous.nl · 8 months ago

      AI companies are hoping for a ruling that says content generated from a model trained on content is not a derivative work. So far, the Sarah Silverman lawsuit seems to be going that way, at least; the claimants were set back because they’ve been asked to prove the connection between AI output and their specific inputs.

      If this does become jurisprudence or law in one or more countries, licenses won’t mean jack. You could put the AGPL on your stuff, and an AI company could still suck it up into its model and use it for whatever it wants, and you couldn’t do anything about it.

      The AI training sets for all common models contain copyrighted works: entire books, movies, and websites. Don’t forget that most websites don’t even have a license, and that unlicensed work is as illegal to replicate as any book or movie normally would be, including internet comments. If AI data sets need to comply with copyright, all current AI will need to be retrained (except maybe for that image AI by that stock photo company, which is exclusively trained on licensed work).

      • verassol@lemmy.ml · 8 months ago

        the claimants were set back because they’ve been asked to prove the connection between AI output and their specific inputs

        I mean, how do you do that for a closed-source model with secretive training data? As far as I know, OpenAI has admitted to using large amounts of copyrighted content: countless books, newspaper material, all on the basis of fair-use claims. I guess it would take a government entity actively going after them at this point.

        • Skull giver@popplesburger.hilciferous.nl · 8 months ago

          The training data set isn’t the problem. The data set for many open models is actually not hard to find, and it’s quite obvious that works by the artists were included in the data set. In this case, the lawsuit was about the Stable Diffusion dataset, and I believe that’s just freely available (though you may need to scrape and download the linked images yourself).
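
          The metadata is essentially rows of image URLs plus captions, so “downloading the dataset” means fetching the URLs yourself, roughly like this (the file name and column name are assumptions, not the exact LAION schema):

          ```python
          # Rough sketch: turn a LAION-style metadata file (image URL per
          # row) into a local image folder. File/column names are assumed.
          import csv
          import pathlib
          import requests  # pip install requests

          out = pathlib.Path("images")
          out.mkdir(exist_ok=True)

          with open("metadata.csv", newline="", encoding="utf-8") as f:
              for i, row in enumerate(csv.DictReader(f)):
                  try:
                      resp = requests.get(row["url"], timeout=10)
                      resp.raise_for_status()
                      (out / f"{i:08d}.jpg").write_bytes(resp.content)
                  except requests.RequestException:
                      continue  # dead links are common in scraped datasets
          ```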

          For research purposes, this was never a problem: scientific research is exempted from many limitations of copyright. This led to an interesting problem with OpenAI and the other AI companies: they took their research models, the output of research, and turned them into a business.

          The way things are going, I expect the law to end up like this: datasets can contain copyrighted work as long as they’re only distributed for research purposes; AI models are derivative works; but the output of AI models is not a derivative work, and therefore the output AI companies generate is exempt from copyright. It’s definitely not what I want to happen, but the legal arguments that I thought would kill this interpretation don’t seem to hold water in court.

          Of course, courts only apply law as it is written right now. At any point, governments can alter their copyright laws to kill or clear AI models. On the one hand, copyright lobbyists have a huge impact on governance, as much as big oil it seems; on the other hand, banning AI will just hand an economic advantage to countries that don’t care about copyright. The EU has set up AI rules, which I appreciate as an EU citizen, but I cannot deny that this will inevitably lead to a worse environment to do business in compared to places like the USA and China.

          • verassol@lemmy.ml · 8 months ago

            Thank you for sharing. Your perspective broadens mine, but I feel a lot more negative about the whole “must benefit business” side of things. It is fruitless to hold any entity whatsoever accountable when the whole worldwide economy is in a free-for-all, nuke-waving, doom-embracing realpolitik vibe.

            Frankly, I’m not sure what would be worse: economic collapse and the consequences to the people, or economic prosperity and… the consequences to the people. Long term, and speaking from a country that is not exactly thriving in the scheming side of things, I guess I’d take the former.

            • Skull giver@popplesburger.hilciferous.nl · 8 months ago

              It’s a tough balance, for sure. I don’t want AI companies to exist in the form they currently do, but we’re not getting the genie back into the bottle. Whether the economic hit is worth the freedom and creative rights that I think citizens deserve is a matter of democratic choice. It’s impossible to ignore that in China or Russia, where citizens don’t have much of a choice, artistic rights and the people’s wellbeing aren’t even part of the equation. Other countries will need a response when companies from those places start doing work more efficiently. I myself have been using Bing AI more and more as AI bullcrap floods every page of every search engine, fighting AI with AI, so to speak.

              I saw this whole ordeal coming the moment ChatGPT came out, and I had the foolish hope that legislators would have done something by now. The EU’s AI Act will apply from March next year, but it doesn’t seem to solve the copyright problem at all. Or rather, it seems to accept the current copyright problem. As the EU’s summary puts it:

              Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

              • Disclosing that the content was generated by AI
              • Designing the model to prevent it from generating illegal content
              • Publishing summaries of copyrighted data used for training

              The EU seems to have chosen to focus on combating the immediate threat of AI abuse, but to be very tolerant of AI copyright infringement. I can only presume this is to make sure “innovation” doesn’t get impeded too much.

              I’ll take this into account in the EU vote coming up soon, but I’m afraid it’s too late. I wish we could go back and stop AI before it started, but it has happened, and now the world is a little bit better and a little bit worse.

      • bitfucker@programming.dev · 8 months ago

        Yep. Can’t wait to overfit an LLM on a bunch of copyrighted work and release it into the public domain. Let’s see if OpenAI gets pushback from copyright owners down the road.
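
        As a toy illustration of the point (a character-level Markov chain standing in for an overfit model, not a real LLM): train it on a single text with a long context window, and sampling just replays that text verbatim, i.e. the “model” is the copy:

        ```python
        # Toy demo that overfitting is memorization: a character-level
        # Markov chain "trained" on one text reproduces it verbatim.
        # A stand-in for an overfit LLM, not a real one.
        from collections import defaultdict

        text = "This sentence stands in for an entire copyrighted book."
        order = 8  # long context + a single source => deterministic recall

        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i : i + order]].append(text[i + order])

        out = text[:order]  # prompt with the opening characters
        while len(out) < len(text):
            seen = model[out[-order:]]
            if not seen:
                break
            out += seen[0]  # only one continuation was ever observed

        print(out == text)  # True: the "model" simply stored the work
        ```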