- cross-posted to:
- technology@lemmy.ml
AI companies claim their tools couldn’t exist without training on copyrighted material. It turns out they can; it just takes more work. To prove it, AI researchers trained a model on a dataset built only from public-domain and openly licensed material.
What makes it difficult is curating the data, but once the data has been curated, in principle anyone can use it without having to repeat the painful part. So the whole “we have to violate copyright and steal intellectual property” argument is (as everybody already knew) total BS.
but i don’t like copyright laws
Indeed, intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists. Disney’s ruthless copyright enforcement, for instance, sharply contrasts with its own history of mining public-domain stories. Meanwhile, OpenAI scraping data at scale exposes the hypocrisy of a system that privileges corporate IP hoarding over collective cultural wealth. Large corporations can ignore copyright without being held to account while regular people cannot. In practice, copyright helps capitalists far more than it helps individual artists.
The group built an 8 TB ethically-sourced dataset.
My question is, is this dataset also Free Range or Cage Free?
Cage-free, as it hasn’t been around long enough to be in publicly owned data
Is 8 TB even shit for data? I thought these things needed to feed on hundreds of terabytes of data
It’s a bit weird to refer to it in terabytes; reading the paper, their biggest model was trained on 2 trillion tokens. Qwen 3 was pre-trained on 36T tokens, with post-training on top of that. It’s kinda fine for what it is, but this absolutely contributes to its poor performance.
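To see how the 8 TB figure relates to the 2-trillion-token count from the paper, here’s a rough back-of-envelope sketch. The ~4 bytes of text per token is an assumption (a common rule of thumb for English text with BPE tokenizers, not a number from the paper), so treat the result as an order-of-magnitude estimate only.

```python
# Back-of-envelope: relate raw dataset size in bytes to token count.
# BYTES_PER_TOKEN is an assumed rule-of-thumb value; the real ratio
# varies by tokenizer and by language.
BYTES_PER_TOKEN = 4

def approx_tokens(size_bytes: int) -> float:
    """Estimate token count from raw text size in bytes."""
    return size_bytes / BYTES_PER_TOKEN

TB = 10**12  # 1 TB in bytes (decimal)
print(f"8 TB ~= {approx_tokens(8 * TB) / 1e12:.1f} trillion tokens")
# -> 8 TB ~= 2.0 trillion tokens, consistent with the paper's figure
```

Under that assumption, 8 TB of text works out to roughly 2 trillion tokens, which is why the terabyte and token figures describe the same dataset at very different-sounding scales.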
Unfortunately this doesn’t really prove anything; training requires exponentially more data to achieve any reasonable advances
You won’t get GPT-3 with legal material
That’s just a limitation of current training techniques. There’s no reason to expect that new techniques can’t be developed that don’t require exponentially more data. In fact, we already see that simply making models bigger isn’t helping anymore. Research is now moving toward ideas like reinforcement learning and neurosymbolics.