AI companies claim their tools couldn’t exist without training on copyrighted material. It turns out they can exist without it; it just takes more work. To prove it, AI researchers trained a model on a dataset built only from public domain and openly licensed material.

The difficult part is curating the data, but once it has been curated, in principle everyone can use it without having to repeat that painful work (a rough sketch of what license-based filtering looks like follows below). So the whole “we have to violate copyright and steal intellectual property” argument is (as everybody already knew) total BS.
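
As a rough illustration of what that curation step involves, here is a minimal sketch of license-based corpus filtering. The `license` metadata field and the allowlist values are assumptions for illustration only, not taken from the paper.

```python
# Minimal sketch of license-based corpus filtering, assuming each document
# carries a "license" metadata field; the field name and the allowlist below
# are illustrative assumptions, not details from the paper.
ALLOWED_LICENSES = {
    "public-domain",
    "cc0-1.0",
    "cc-by-4.0",
    "cc-by-sa-4.0",
}

def keep_document(doc: dict) -> bool:
    """Return True if the document's license tag is on the allowlist."""
    license_tag = str(doc.get("license", "")).strip().lower()
    return license_tag in ALLOWED_LICENSES

corpus = [
    {"text": "An 1890s novel...", "license": "public-domain"},
    {"text": "A scraped news article...", "license": "all-rights-reserved"},
]
openly_licensed = [doc for doc in corpus if keep_document(doc)]
```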

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 19 points · 14 days ago

      Indeed, intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists. Disney’s ruthless copyright enforcement, for instance, sharply contrasts with its own history of mining public-domain stories. Meanwhile, OpenAI scraping data at scale exposes the hypocrisy of a system that privileges corporate IP hoarding over collective cultural wealth. Large corporations can ignore copyright without being held to account, while regular people cannot. In practice, copyright helps capitalists far more than it helps individual artists.

      • BountifulEggnog [she/her]@hexbear.net · 3 points · 14 days ago

        It’s a bit weird to refer to the dataset in terabytes; reading the paper, their biggest model was trained on 2 trillion tokens. Qwen 3 was pre-trained on 36T tokens, with post-training on top of that. It’s kinda fine for what it is, but the smaller dataset absolutely contributes to its poor performance. (Rough token-to-terabyte conversion sketch below.)
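
        For a rough sense of how tokens and terabytes relate, here is a back-of-the-envelope sketch assuming about 4 bytes of UTF-8 text per token. That ratio is an assumption; the real figure depends on the tokenizer and the corpus.

        ```python
        # Rough conversion between token counts and raw text size, assuming
        # ~4 bytes of UTF-8 text per token (a common rule of thumb for English;
        # the exact ratio depends on the tokenizer and the corpus).
        BYTES_PER_TOKEN = 4

        def tokens_to_terabytes(tokens: float) -> float:
            """Approximate raw text size in TB for a given token count."""
            return tokens * BYTES_PER_TOKEN / 1e12

        for name, tokens in [("open-data model (~2T tokens)", 2e12),
                             ("Qwen 3 pre-training (~36T tokens)", 36e12)]:
            print(f"{name}: ~{tokens_to_terabytes(tokens):.0f} TB of raw text")
        ```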

  • mayo_cider [he/him]@hexbear.net · 1 point · 14 days ago

    Unfortunately this doesn’t really prove anything; training requires exponentially more data to gain any reasonable advances (rough scaling-law sketch below).

    You won’t get GPT-3-level results with legal material
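
    One way to ground the “needs far more data” intuition is a Chinchilla-style power-law loss fit, under which each further drop in loss takes disproportionately more tokens. The constants below only roughly follow the Hoffmann et al. (2022) fit and are illustrative; nothing here is taken from the paper the post discusses.

    ```python
    # Illustrative Chinchilla-style loss estimate: L(N, D) = E + A/N^a + B/D^b,
    # where N is parameter count and D is training tokens. The constants roughly
    # follow the published Hoffmann et al. (2022) fit; treat the numbers as
    # illustrative, not as a claim about any specific model discussed here.
    E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

    def estimated_loss(params: float, tokens: float) -> float:
        """Predicted pre-training loss for a model with `params` parameters
        trained on `tokens` tokens, under the power-law fit above."""
        return E + A / params**ALPHA + B / tokens**BETA

    # Holding model size fixed at 7B parameters, compare 2T vs 36T training tokens:
    # the loss improvement shrinks even as the data grows 18x.
    for tokens in (2e12, 36e12):
        print(f"7B params, {tokens:.0e} tokens -> loss ~= {estimated_loss(7e9, tokens):.3f}")
    ```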

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.ml (OP) · 4 points · 14 days ago

      That’s just a limitation of current training techniques. There’s no reason to expect that new techniques requiring far less data can’t be developed. In fact, we already see that simply making models bigger isn’t actually helping. Research is now moving towards ideas like reinforcement learning and neurosymbolic approaches.