• 0 Posts
  • 87 Comments
Joined 11 months ago
Cake day: December 16th, 2023







  • “Releasing the modified version to the public” would cover them re-closing the source and then releasing that newly closed source, so they can’t relicense it and then release the built version of the code.

    At least not easily. This is where court history would likely need to be consulted, because the way the clause is worded, the interpretation of “modified” in this context would need to be examined.



  • WalnutLum@lemmy.ml to Memes@lemmy.ml · Zen Z · 3 months ago

    It floors me just how many people in this thread feel like analog clock reading is a useless/outdated skill.

    But I’m of the opinion that there’s no such thing as a truly outdated and useless skill, so I’m not sure I have the capability to empathize with those people…



  • I use Guix because, while it has a small community, its packaging language is one of the easiest I’ve ever used.

    With every other distro I’ve tried, I’ve run into having to wait on packages or support from someone else. Package transformation like what NixOS has is great, but Nixlang sucks ass; being able to do all of that in Lisp is much preferred.

    Plus I like Shepherd much more than any of the other init systems (PID 1s).






  • I suppose the importance of the openness of the training data depends on your view of what a model is doing.

    If you feel like a model is more like a media file that the model loaders are playing back, where the prompt is a kind of control over how you access the model, then yes, I suppose from a trustworthiness standpoint there’s not much to the model’s training corpus being open.

    I see models more in terms of how any other text encoder or serializer would work if you were, say, manually encoding text. While there is a very low chance of any “malicious code” being executed, the important part is that you can check your expectations about how your inputs are being encoded against what the provider is telling you (see the sketch below).
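
    To make the encoder-checking idea concrete, here’s a minimal sketch. It assumes a Hugging Face-style tokenizer; the model name and the “published” token IDs are hypothetical placeholders, not real values from any provider.

    ```python
    # Minimal sketch: check that a tokenizer encodes known inputs the way
    # the provider documents. Model name and expected IDs below are
    # hypothetical placeholders.
    from transformers import AutoTokenizer

    # Token IDs the provider claims these inputs should encode to
    # (hypothetical values, for illustration only).
    EXPECTED = {
        "hello world": [31373, 995],
    }

    tokenizer = AutoTokenizer.from_pretrained("example-org/example-model")

    for text, expected_ids in EXPECTED.items():
        actual_ids = tokenizer.encode(text, add_special_tokens=False)
        if actual_ids != expected_ids:
            raise RuntimeError(
                f"Encoding mismatch for {text!r}: got {actual_ids}, "
                f"expected {expected_ids}"
            )
    print("Tokenizer matches the provider's documented behavior.")
    ```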

    As an example attack vector, much like with a malicious replacement technique for anything else: if I were to download a pre-trained model from what I thought was a reputable source, but was man-in-the-middled and provided with a maliciously trained model, suddenly the system relying on that model is compromised in terms of its expected text output. Obviously that exact problem could be fixed with some hash checking, but I hope you see that in some cases even that wouldn’t be enough (such as malicious “official” provenance).
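
    For the hash-checking piece, here’s a minimal sketch of verifying a downloaded model against a checksum obtained out-of-band. The URL and digest are placeholders, and as noted above this only helps when the published checksum itself is trustworthy.

    ```python
    # Minimal sketch: verify a downloaded model file against a SHA-256
    # checksum published over a separate, trusted channel. URL and digest
    # below are placeholders.
    import hashlib
    import urllib.request

    MODEL_URL = "https://example.com/models/model.safetensors"  # placeholder
    EXPECTED_SHA256 = "0" * 64  # placeholder digest from the provider

    with urllib.request.urlopen(MODEL_URL) as resp:
        data = resp.read()

    digest = hashlib.sha256(data).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(
            "Checksum mismatch: model may have been tampered with in transit"
        )
    print("Model checksum verified.")
    ```

    If the “official” provenance is itself malicious, the checksum will simply match the malicious artifact, which is exactly the case hash checking can’t cover.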

    As these models become more prevalent, being able to guarantee integrity will become more and more of an issue.