How is that relevant if I’m talking about someone hosting their code on gitlab.com?
It has light mode by default and a UI that I find really unintuitive, but what really bothers me is that people go from one for-profit git host to another for-profit git host when things like Codeberg exist. With GitHub you could at least argue that you can turn your hobby project into a job, since it has a huge userbase and stuff like GitHub Sponsors, but what does GitLab offer you?
TL;DR: It’s not Codeberg
I get why people would use something other than GitHub, but why do they have to torture me with GitLab?
Does anyone know of a speedtest tool that’s like iperf but multicore and suited for >100GbE? I’ve seen Patrick from STH use something that could do around 400GbE, but I haven’t found out what it’s called.
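(Meanwhile, the workaround I’ve seen for iperf3 being single-threaded is just running several instances in parallel, one per port/core, and summing the results; apparently newer iperf3 releases can multi-thread on their own, so check that first. Rough sketch, assuming you’ve started matching servers with `iperf3 -s -p 5201` through `-p 5208` on the other box; the address and stream count are made up:)

```python
# Launch N parallel iperf3 clients, one per port, and sum the throughput.
import json
import subprocess

SERVER = "192.0.2.1"   # placeholder test target
STREAMS = 8            # one iperf3 process per port/core

procs = [
    subprocess.Popen(
        ["iperf3", "-c", SERVER, "-p", str(5201 + i), "-t", "10", "-J"],
        stdout=subprocess.PIPE,
    )
    for i in range(STREAMS)
]

total_bps = 0
for p in procs:
    out, _ = p.communicate()  # wait for the 10 s test to finish
    result = json.loads(out)
    total_bps += result["end"]["sum_received"]["bits_per_second"]

print(f"aggregate: {total_bps / 1e9:.1f} Gbit/s")
```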
Well, it’s infinite, so it has to, I guess
To be fair: there are many things where compression is a waste of CPU time, like fonts and about 90% of non-text media, since they’re already compressed
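You can check that claim in a few lines of Python; random bytes stand in for already-compressed media here, since both are essentially incompressible:

```python
# zlib barely shrinks incompressible data but crushes repetitive text.
import os
import zlib

text = b"compression works great on repetitive text " * 1000
already_compressed = os.urandom(len(text))  # stand-in for a JPEG/MP4 payload

for name, data in [("text", text), ("compressed media", already_compressed)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: {ratio:.0%} of original size")
```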
it’s immersive
have you tried onlyoffice?
it’s astonishing how many people in !technology@lemmy.world don’t know anything about this technology
they have relays (well, most of them; looking at you, Deye), so it should be fine
a couple bad dragon stickers
do you mind sharing an example?
I didn’t read the article
Don’t worry, neither did anyone else in this thread
I hate it when a Starship engine bursts into flames inside my head
i think they did act in good faith, but since linus can’t handle any form of criticism and most of them are inexperienced with stuff like this, they made a metric fuckton of mistakes… however, this has nothing to do with the outcome of the investigation. “setting a bar for trust higher” does not mean “the outcome is invalid because i think they paid the investigators hush money”
so now it’s just shit you made up… (i cannot believe i’m defending lmg)
We’ve thoroughly investigated ourselves
but that’s the point… they didn’t
Buy what businesses buy in bulk (e.g. ThinkPad X1)
Now I want a GPT that was only trained on /b/ and /pol/
I think it’s actually about 150 PB of data that’s also stored georedundantly in the US and the Netherlands. That sounds like a lot, but I think it would be possible to distribute that amount of data.
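Back-of-the-envelope (the replica and volunteer counts are made-up assumptions, just to show the order of magnitude):

```python
# How much would each volunteer host if 150 PB were spread across a swarm?
TOTAL_PB = 150
REPLICAS = 3          # assume 3 copies for redundancy
VOLUNTEERS = 100_000  # hypothetical number of participating nodes

tb_per_volunteer = TOTAL_PB * 1000 * REPLICAS / VOLUNTEERS
print(f"{tb_per_volunteer:.1f} TB per volunteer")  # -> 4.5 TB
```

4.5 TB is a single consumer hard drive, so raw capacity wouldn’t be the blocker.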