What’s great about lawsuits like this is that you really only have to prove intent, and there’s a record of them asking for similar imagery.
Easiest? Tailscale. Set it up on the server and on each client you want to access it from, and it creates auto-resolving P2P VPN tunnels between them all.
I used to work for an algorithmic advertising company.
The gist is that landing one big spender offsets the cost of losing a thousand or more other people, because those large contracts usually outlast the official sale.
Isn’t fusion power not as clean as people say it is?
The practicalities of actual fusion reactors make this seem a lot less appealing than what I grew up hearing.
I’m happy to see China continue to pump resources into their clean energy mix, but at the same time it feels like this entire concept might end up being more of a meme than we think.
Oh yes, but the DRM exemption clause means that you can reverse engineer the changes and continue releasing them under the GPL.
Edit: as an example, we should probably be watching the DuckStation situation evolving right now:
“releasing the modified version to the public” would cover them re-closing the source and then releasing that newly closed source, so they can’t relicense it and then release the built version of the code.
At least not easily. This is where case law would likely need to be consulted, because as it’s worded, the interpretation of “modified” in this context would need to be examined.
Recent studies suggest the science on this is less settled than that TED talk implies, so I wouldn’t put too much weight on claims that automation either will or won’t take away net jobs.
It floors me just how many people in this thread feel like analog clock reading is a useless/outdated skill.
But I’m of the opinion that there’s no such thing as a truly outdated and useless skill, so I’m not sure I can empathize with those people…
There’s nothing stopping an analog clock face from representing 24h time:
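For instance, here’s a quick hypothetical sketch of the arithmetic for a 24h dial where midnight sits at the top and the hour hand makes one revolution per day (the function and layout here are just an illustration, not any real clock’s spec):

```python
# Hand angles for a hypothetical 24-hour analog dial: the hour hand
# sweeps the full face once per day instead of twice.
from datetime import datetime

def hand_angles_24h(t: datetime) -> tuple[float, float]:
    """Return (hour_angle, minute_angle) in degrees, clockwise from the top."""
    hour_angle = (t.hour + t.minute / 60) / 24 * 360  # one revolution per day
    minute_angle = t.minute / 60 * 360                # same as on a 12h dial
    return hour_angle, minute_angle

print(hand_angles_24h(datetime(2024, 1, 1, 18, 0)))  # 18:00 -> (270.0, 0.0)
```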
I use Guix because, while it has a small community, the packaging language is one of the easiest I’ve ever used.
With every other distro I’ve tried, I’ve run into having to wait on packages or support from someone else. A package transformation scheme like the one NixOS has is great, but Nixlang sucks ass. Being able to do all of that in Lisp is much preferred.
Plus I like Shepherd much more than any of the other PID 1s.
Firefly needs to hurry up and make a human-rated capsule instead of cargo fairings.
I have high hopes for a company that can set up a rocket almost from scratch in 24 hours.
Aren’t the Emperor’s tower and all the surface guns oriented toward the second option?
Seems like it’s a little of both
The OSI just published the results of some of the discussions around their upcoming Open Source AI Definition. It seems like a good idea to read it and see some of the issues they’re trying to work around…
https://opensource.org/blog/explaining-the-concept-of-data-information
Yes, of course; there’s nothing gestalt about model training: fixed inputs result in fixed outputs.
I suppose the importance of the openness of the training data depends on your view of what a model is doing.
If you see a model as more like a media file that the model loaders play back, where the prompt is a kind of control over how you access the model, then yes, I suppose from a trustworthiness standpoint there’s not much to gain from the training corpus being open.
I see models more in terms of how any other text encoder or serializer would work if you were, say, manually encoding text. While there’s a very low chance of any “malicious code” being executed, the important part is that you can check your expectations about how your inputs are being encoded against what the provider is telling you.
As an example attack vector, much like a malicious-replacement attack on anything else: if I were to download a pre-trained model from what I thought was a reputable source but was man-in-the-middled and handed a maliciously trained model instead, the system relying on that model would suddenly be compromised in terms of its expected text output. Obviously that exact problem could be mitigated with some hash checking, but I hope you see that in some cases even that wouldn’t be enough (such as malicious “official” provenance).
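To make the hash-checking point concrete, here’s a minimal sketch of verifying a downloaded model file against a published digest before loading it (the file name and digest value are hypothetical, not from any particular provider):

```python
# Minimal sketch: verify a downloaded model file against a published
# SHA-256 digest before loading it. Note this only helps if the digest
# itself arrives over a trusted channel; it does nothing against a
# malicious "official" source.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "0123abc..."  # digest the provider publishes (hypothetical value)

if sha256_of("model.safetensors") != EXPECTED:
    raise RuntimeError("model file does not match the published checksum")
```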
As these models become more prevalent, being able to guarantee integrity will become more and more of an issue.
I’ve seen this said multiple times, but I’m not sure where the idea that model training is inherently non-deterministic is coming from. I’ve trained a few very tiny models deterministically before…
I’m not sure where you get that idea. Model training isn’t inherently non-deterministic. Making fully reproducible models is apparently 360ai’s entire modus operandi.
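For anyone curious what that takes in practice, here’s a minimal sketch of deterministic training, assuming PyTorch (the exact flags vary by version and hardware): fixed seeds plus deterministic kernels, so the same code and data yield bit-identical weights.

```python
# Sketch of deterministic training in PyTorch (assumed setup; exact
# flags vary by version/hardware). Same seeds + same data => same weights.
import os, random
import numpy as np
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some CUDA ops
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.use_deterministic_algorithms(True)  # raise on non-deterministic ops

model = torch.nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 16), torch.randn(64, 1)

for _ in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(loss.item())  # identical on every run with the same seeds and versions
```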
There are VERY FEW fully open LLMs. Most are the equivalent of source-available in licensing, and at best they’re only partially open source because all they provide you with is the pretrained model.
To be fully open source, they need to publish both the model and the training data. The point of being “fully reproducible” is to make the model trustworthy.
In that vein there’s at least one project that’s turning out great so far:
Ironically, that’s thanks in no small part to Facebook releasing Llama and kind of salting the earth for similar companies trying to create proprietary equivalents.
Nowadays you either have gigantic LLMs with hundreds of billions of parameters, like Claude and ChatGPT, or you have open models that are sub-200B.