- cross-posted to:
- technology@hexbear.net
It looks like the West has been caught with its pants down. China is developing what seems to be far more efficient AI tech; perhaps while big tech has been motivated by money and IP theft, China has just got on with developing better ideas.
This also seems to be a major win for open source models, which is a good thing. It could also be a good thing for the EU, which wants to develop its own AI solutions to break from US big tech.
Genuinely curious, what makes you think that DeepSeek has been built without IP theft?
I guess it depends on what kind of IP theft you mean.
I was thinking about the training data, which you need in massive amounts. And as far as I know, pretty much all companies have worked on a scraping basis rather than paying for it (or even asking for permission).
What kind of IP theft were you thinking of?
I was referring to both scraping to create the models and using the models to create infringing content.
Ah yes, they must be stealing IP from the future when they publish novel papers on things nobody’s done before!
Snark aside, thanks for clarifying which kind of IP theft was meant, because this is not the kind of IP theft normally associated with training models.
I'm personally against copyright as a concept and absolutely don't care about this aspect, especially when it comes to open models. The way I look at it is that the model is unlocking this content and making this knowledge available to humanity.
That’s nice, dear.
It is rare that I fail to get the gist of what is being said in these technical explanations, but this one has me actually wondering about the gist of the gist. Some of it made me feel like it was made up nonsense.
It seemed pretty clear to me. If you have any clue on the subject, then you presumably know about the interconnect bottleneck in traditional large models. The data moving between layers often consumes more energy and time than the actual compute operations, and the surface area for data communication explodes as models grow to billions of parameters. The mHC paper introduces a new way to link neural pathways by constraining hyper-connections to a low-dimensional manifold.
In a standard transformer architecture, every neuron in layer N potentially connects to every neuron in layer N+1. This is mathematically exhaustive, making it computationally inefficient. Manifold-constrained connections operate on the premise that most of this high-dimensional space is noise. DeepSeek basically found a way to significantly reduce networking bandwidth for a model by using manifolds to route communication, roughly along the lines of the sketch below.
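To make that concrete, here's a toy sketch of what a low-dimensional bottleneck between layers looks like. This is just an illustration of the general low-rank idea, not DeepSeek's actual mHC code; the class name, dimensions, and projection layers are all made up for the example:

```python
import torch
import torch.nn as nn

class ManifoldLink(nn.Module):
    """Toy illustration: squeeze the hidden state through a small
    'manifold' subspace before handing it to the next layer."""

    def __init__(self, hidden_dim: int, manifold_dim: int):
        super().__init__()
        # Down-projection: hidden_dim -> manifold_dim (the constraint)
        self.down = nn.Linear(hidden_dim, manifold_dim, bias=False)
        # Up-projection: manifold_dim -> hidden_dim for the next layer
        self.up = nn.Linear(manifold_dim, hidden_dim, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Only the manifold_dim-sized vector would need to cross the
        # interconnect, instead of the full hidden_dim-sized one.
        compressed = self.down(h)   # (batch, seq, manifold_dim)
        return self.up(compressed)  # (batch, seq, hidden_dim)

# Toy usage: a 4096-wide hidden state routed through a 64-dim manifold.
link = ManifoldLink(hidden_dim=4096, manifold_dim=64)
h = torch.randn(2, 16, 4096)
print(link(h).shape)  # torch.Size([2, 16, 4096])
```

The point is that only the small compressed vector has to move between layers (or devices), which is where the bandwidth and energy savings would come from.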
Not really sure what you think the made up nonsense is. 🤷