

Looking at linuxserver/jackett on Docker Hub, it seems it indeed updates every day.
Chinese Any Government
But OLED can get burn-in and degrade over time. This will too, but you can just replace the light bulb.
ALPR already exists. The situation won’t get better or worse, no matter what license you release under.
I recently switched to NextDNS. I used to run my own AdGuard Home with multiple DNS providers as upstream.
https://lemmy.ml/post/15430684
I asked a similar question before. Some recommended Revolut. Haven’t tried it yet tho.
It is great until the ownership and business model comes into consideration.
It’s simply that the “secure” isn’t meant for users but for the corporations. It makes things “secure” for their business.
For-jail-o?
It is a straight downgrade. The day you forget to bring the dongle, you are stranded.
How about torrenting?
So use what browsers? Chrome sounds more secure (I didn’t read the previous post), yet I don’t want an advertising company looking at my browsing habits, nor do I want to support them dominating the browser market share and having a powerful influence on every web standard.
Telegram private 🙃
I use SimpleX for this matter. Each device gets its own account, and I just create a group named “self” with all of them. You can self-host the server too if you prefer that route.
It is the recent use-after-free vuln in FF, found actively exploited, which both Fennec and Mull rely on as upstream. This compounds with changes made to the Android NDK and the FF source moving into the monorepo, making them harder to build. Hence, they’re still vulnerable to the attack.
“Open Source AI” is an attempt to “openwash” proprietary systems. In their paper “Rethinking open source generative AI: open-washing and the EU AI Act” Andreas Liesenfeld and Mark Dingemanse showed that many “Open Source” AI models offer hardly more than open model weights. Meaning: You can run the thing but you don’t actually know what it is.
Basically, no.
Triangulation needs to know where the signal comes from (angles), which most devices don’t have the capability of measuring. The most they can do is estimate distance by signal strength.
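For what it’s worth, that signal-strength estimate is usually done with a log-distance path loss model. A minimal sketch, assuming illustrative (uncalibrated) constants of my own choosing, not values from any real device:

```python
# Rough distance estimate from received signal strength (log-distance path
# loss model). rssi_at_1m and path_loss_exponent are illustrative
# assumptions; real values depend on hardware and environment.
def estimate_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exponent=2.0):
    """Estimate distance in meters from an RSSI reading in dBm."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

print(estimate_distance(-60))  # 10.0 (meters, under these assumptions)
```

Note there is no angle anywhere in this: signal strength alone gives you a radius around the receiver, not a direction.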
AppFlowy, if Notion appeals to you. It is not at 1.0 yet, so some features you need might not be there.
I don’t understand the hostility of the other comments. I had a quick look at the code, and it is essentially IRC+Kiwi but in Rust, or a Matrix public chat room without signups. All chats are in memory and not saved to disk or a DB. There are usernames, but you can claim to be whoever you want. The dependencies look sane and reputable. I did see a WebSocket lib being used but couldn’t find it in the code. However, I don’t code in Rust, nor do I understand WS well, so take this with a grain of salt.
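The ephemeral design described there is simple enough to sketch. This is my own illustration of the idea (in-memory rooms, no persistence, no auth on usernames), not the project’s actual code:

```python
# Sketch of an ephemeral chat store: messages live only in process memory,
# and any client can claim any username (no signups). All names here are
# hypothetical; the real project is in Rust.
from collections import defaultdict

class EphemeralChat:
    def __init__(self):
        # room name -> list of (claimed username, message text)
        self.rooms = defaultdict(list)

    def post(self, room, username, text):
        # No auth: the username is whatever the sender claims it is.
        self.rooms[room].append((username, text))

    def history(self, room):
        # Nothing touches disk or a DB; gone when the process exits.
        return list(self.rooms[room])

chat = EphemeralChat()
chat.post("general", "alice", "hi")
print(chat.history("general"))  # [('alice', 'hi')]
```

The upside is there is nothing to leak at rest; the downside is exactly what the comment notes: identity means nothing.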
It does provide anonymity, but not privacy or confidentiality. It does what it claims.
Using a 7900XTX with LMS. Speeds are all over the place and driver dependent. With QwQ-32B-Q4_K_M, I get about 20 tok/s, with all VRAM filled. Phi-4 runs at about 30-40 tok/s. I can give more numbers if you can wait a bit.
If you don’t enjoy finding out which driver works best, I strongly advise against running AMD for AI workloads.