• 0 Posts
  • 41 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • Don’t use Onedrive, Dropbox or Google Drive (all privacy nightmares). Instead:

    • Self-host https://nextcloud.com/ (this is the gold standard of self-hosting a secure and private cloud storage, you just need your own server with the disk space you need. Open source)
    • P2P and/or self-host https://syncthing.net/ (this will automatically sync files in shared folders between several devices. Best if you have one device which is online all the time. Will use the space on your own devices. Open source)
    • Storage on a trustworthy 3rd-party host: https://proton.me/drive (this is the most similar to Onedrive etc., where you sync your stuff to their servers so you don’t need to host anything, but unlike anything from Google/MS/Dropbox, this is a reputable and secure/private host which doesn’t abuse or sell your data. Data is encrypted by default. Also open source)

    Furthermore, accessing Onedrive from Linux might be painfully inconvenient because there’s no official client for it from MS. There are 3rd-party clients, but I’m not sure how good they are, and MS could at any point change their API or even block unofficial clients, rendering yours useless at least for a while.


  • kyub@discuss.tchncs.de to Gaming@lemmy.ml · Cyberpunk replay has been boring.
    (edited · 6 days ago)

    It’s not the game everyone hoped it would be, but it’s very good once you include the Phantom Liberty expansion. You should give that one a try. It’s probably the best expansion CDPR has made so far, or at least on par with W3 Blood & Wine (I’m still not sure, but I have to give them credit for the huge effort behind Phantom Liberty). It (alongside the 2.x patches) was CDPR’s genuine effort to save the game and their reputation, and I think they succeeded. The base game without the expansion can get very boring in its second half, which is why I consider PL to be mandatory. A good time to start Phantom Liberty is just before going to Embers to meet Hanako. If you haven’t played the game in a long time, play it again with PL; it’s really well made.


  • I’ll do a (simplified) Windows analogy, if you’re already familiar with Windows.

    Microsoft Windows is closed-source/proprietary, which means only Microsoft has the source code for it, and only Microsoft is legally allowed to create or distribute copies of Windows. “Windows 11” for example is a “distribution” of Windows containing the “Windows NT kernel” (core of the OS) alongside other important software to make the OS usable, like a boot loader, service layer, graphical interface, desktop environment, and lots of included “system” applications like a file explorer, a web browser, apps to adjust settings, apps to display menus and task bars, and so on.

    “Linux” by itself is just the kernel, the core of the OS. Which is by itself not a “usable” operating system yet, just like holding a CPU in your hand doesn’t allow you to use it yet. More components are needed for that. Since Linux is open source and under a permissive license, anyone (even you) can go ahead and create an operating system made with the Linux kernel. If you do that, this is called a distribution or “distro” of Linux. Since there’s not just one company allowed to do that, many distributions exist. They all made their own operating system on top of the Linux kernel. Even though hundreds of distros exist, only a handful of them are actually popular, stable, secure and recommended for general use. They all use similar, but sometimes different software to include in the distribution. Like the Linux kernel, most of that software is open source so it can also be modified or extended.

    Since “Linux distribution” is rather long to write, people often just write “Linux” but mean the whole distribution, not just the kernel. These are just common inaccuracies in communication, but what the person meant should be obvious from the context.
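    The kernel/distribution split is easy to see on any running system; a quick sketch using standard commands (nothing distro-specific assumed):

    ```shell
    # The "Linux" part: the kernel name and version currently running.
    uname -sr

    # The "distribution" part: metadata about the OS built around that kernel.
    # The os-release file is present on effectively all modern distros.
    cat /etc/os-release
    ```

    On, say, Mint the second command prints something like `NAME="Linux Mint"`, while the first line stays a recognizable "Linux <version>" across every distro.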

    Common and recommendable Linux distributions (= full, usable operating systems) include: Linux Mint, Ubuntu, Fedora, OpenSuSE, Arch, Debian. These are full operating systems and they all include the Linux kernel at their core. The similarities go further than that: most distros are similar enough that once you’ve learned one, you can use any other with little extra to learn. However, some distros are deliberately a bit different or tailored to more specific users or use cases. Arch, for example, targets more experienced Linux users because it’s a very minimalistic distro: it expects the user to know which packages they want to install, and it pre-installs almost nothing. You can think of it like “Windows Server Core”, which boots into a minimal terminal by default with no usable GUI, but where you can of course install a desktop environment and everything else and turn it into a full-featured desktop if you need one. The distro just doesn’t want to preinstall anything you might later dislike, so it gives you the choice, but that makes it minimalistic and harder for beginners. Other distros like Mint are much more like the client editions of MS Windows: they preinstall everything a desktop user needs and more, so that you can boot into a usable desktop as quickly and easily as possible.

    And then there are even more special-purpose distributions like Kali Linux which includes things like penetration testing tools (i.e. “hacker tools”), which makes it a distribution for IT security people, so they can boot into it and have access to most needed tools right away without installing much else (also good on a bootable USB stick). But usually, in general threads like this one, people don’t talk about specific-use distros, but about generalist distros which you can install and use as a regular desktop OS.

    Desktop environments also exist on Windows, but there’s basically only one, made by Microsoft. In the Linux world there are several to choose from; the most common are KDE Plasma, Gnome, Cinnamon and XFCE. These desktop environments contain window managers or compositors, task bars or panels, menus, various tools like file managers, process viewers and text editors, and various background programs. All of this is needed for the user to have what is commonly known as “a desktop environment”; without one, you’d basically be staring at a screen containing at most a cursor and a wallpaper, with no way to interact with anything. Of course, these can look and feel different from each other (just like Windows looks and feels different from macOS), and they have different features, strengths and weaknesses, but their goal is always the same. And as usual in the open source world, there’s not just one project but several, and out of those a couple are popular, viable and stable enough to be included in most Linux distributions. That’s why most distros also give the user the choice of a specific variant of the distribution with a specific desktop preinstalled. For example, Ubuntu also has Kubuntu (= Ubuntu with preinstalled KDE Plasma) and Xubuntu (= Ubuntu with preinstalled XFCE). These can have various names, but in the end it’s just the base distribution (“Ubuntu”) with a different preinstalled “face”, so to speak (and you can change those faces or desktops from within the same distro, of course). Most other things are exactly the same between those distribution variants.
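    A small sketch of how a session advertises its desktop environment (the variable is part of the XDG desktop specs; it’s empty on a bare TTY or a headless system):

    ```shell
    # Print the desktop environment of the current session, e.g. "KDE",
    # "GNOME", "X-Cinnamon" or "XFCE"; falls back to "none" when unset.
    echo "${XDG_CURRENT_DESKTOP:-none}"
    ```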

    As a new user, you don’t need to learn about everything. Just pick an easy to use generalist desktop distro like Linux Mint and use the default desktop environment or variant which they provide or recommend by default. You can start experimenting with more choices later on if you want, but you also don’t need to. If you have something you’re comfortable using, then you can just stick with that.




  • Windows will continue to get more and more user-hostile as time goes on. Microsoft wants everyone on a subscription to their cloud services so they can be in total control of what they deliver to the user and how the user uses their services/apps, and of course they’ll also be able to raise prices regularly once users are dependent enough (“got all my work-related data there, can’t just leave”).

    The next big step after the whole M365 and Azure push will be that businesses can only deploy their Windows clients using MS Intune, which means Microsoft deploys your organization’s Windows clients, not your organization. They’re always shifting more control away from you and into MS’ hands. Privacy has been an obvious issue at least since Nadella became CEO, but unfortunately the privacy-conscious have kind of lost that war, because the common user (private AND business sector) doesn’t care at all. People will start caring once they’re billed more because of their openly known behavior (driving, health, eating/drinking, psychology, …), once they’re legally threatened more (e.g. your vehicle automatically reports that you’ve driven too fast, or some AI concludes from your gathered data that you’re likely to cause some kind of problem), or once they’re rejected at or before job interviews because of leaked health data or some (maybe wrong) AI-created prognosis of their health. So I think there will be a point when the common user starts caring; we just haven’t reached it yet, because while current data collection and profile building is problematic as the stepping stone to more dystopian follow-ups, on its own it’s still too abstract an issue for most people. Media is also partly to blame here, when reviews of new devices just go “great camera and display, MUST BUY” and never mention the absurd amount of telemetry the device sends home. MS is also partnering with Palantir and OpenAI, which will probably give them even more opportunities to automatically surveil every single one of their business and private sector users.

    M365 already gives business owners analytics tools to monitor what their employees are doing, how much time they spend in each application, how “efficient” they are, things like that. Plus there’s the whole person and object recognition setup using “smart” cameras and an Azure service that constantly analyzes the video material: employees (mostly workers in that case) are surveilled around the clock, and if anything abnormal happens an automatic alert is sent. Probably a lot of businesses will love that, and no one cares enough about the common worker’s rights; it can be sold as a security plus, so it will be sold. So I think MS is heading heavily in the direction of employee surveillance, since they’re well integrated into the business world anyway (especially small and medium businesses). With Windows in particular, they’ll move everything sloooowly into the cloud: in 10-15 years you may not have a “personal” computer anymore, you’ll use Microsoft’s hardware and software directly from Microsoft’s servers, and they’ll gain full, unlimited surveillance and control over every little detail you do on your computer, because once you hand away that control, they can do literally anything behind your back and never tell you about it. Most of the surveillance going on already is heavily shrouded in secrecy, and as long as that’s the case, no justice system in the world can save you from it, because it would first need concrete evidence. Guess why western law enforcement and secret services hunted Snowden and Assange so heavily? Because they shone some light into what is otherwise a massive, constant cover-up that is probably highly illegal in most countries, so it needs to stay secret. The MS (and Apple, …) route stands for total dependence and total loss of control.

    They just have to move slowly enough for the common user not to notice. Boil the frog slowly. Make sure businesses can adapt. Make sure commercial software vendors can adapt. Then slowly direct the train into cloud-only territory, where MS rules over and can log everything you do on the computer.

    Linux, on the other hand, stands for independence. You can pick and choose which components you want, run them wherever and however you want, build your own cloud, and so on. You can build your own distro or find one that best fits your use case. As the user or administrator you’re in a lot of control, and given the nature of open source / free software, that won’t change. If a project turns to sh!t, you’re not forced to stick with it: you can fork it, develop an alternative, wait until someone else does, or just write a patch that fixes the problematic behavior. This alone makes open source / free software inherently better than closed source, where users have no control over the project and must either use it as-is or stop using it altogether. There’s no middle ground, no fixes possible, no alternatives built from the same code base, because the code base is the developer’s secret. Open source software can also be audited at will, at any time, which alone makes it much more trustworthy; on the basis of trustworthiness and security alone, you should only use open source software. Linux on its own is “just” the kernel, but it’s a very good kernel powering a hugely diverse array of systems, from embedded devices to supercomputers. I think the Linux kernel can’t be beaten and is becoming (or already is) objectively the best operating system kernel there is. Now, as a desktop user you don’t care much about the kernel; you just expect it to work in the background, and it does. What you care about more is UI/UX, consistency and application/game compatibility. The Linux desktop ecosystem is still lacking in that regard, always behind the super polished, user-friendly, coherent UIs coming especially from Apple (maybe also a little from Microsoft, but coherent and beautiful UIs aren’t Microsoft’s strong point either; I think that crown goes to Apple).

    That said, Apple is very much like Microsoft in having a fully locked-down ecosystem, maybe slightly less bad-smelling still, but it will probably go in the same direction as MS, just more slowly and with different details. Apple’s products also appeal to a different kind of audience and different businesses than MS’ products do. Apple is smart in its marketing and general behavior, always managing to fly under the radar and dodge most of the shitstorms. They also violate the privacy of their users, but they do it slightly less than MS or Google, so they’re less of a target, and they even use that to claim they’re the privacy guys (in comparison). But they aren’t, and you still shouldn’t use Apple products/services: “less bad than utterly terrible” doesn’t equal “good”; there’s a lot of room in between. Still, back to Linux. It’s also obviously a matter of quality code/projects and resources. Big projects like the Linux kernel itself, the major desktop environments, or super important components like systemd or Mesa are well funded, have quality developers behind them, and produce high quality output. Then there are lots of applications and components where single community developers, not well funded at all, hack away in their free time, often delivering something usable but less polished, less user-friendly, less good-looking, or slightly more annoying to use. Those projects could use some help, especially the ones that matter a lot on the desktop because there’s little to no alternative available. On the server side, Linux is well established, and software for that scenario is plentiful and powerful; compared to the desktop, it’s no wonder it’s successful on servers. Having corporations fund developers, and in turn open source projects, is important, and the more that do it, the more successful those projects become.

    It’s no wonder that gaming, for example, took off so hugely after Valve poured resources and developers into every component related to it; without that big push it would have happened very slowly, if at all. So even the biggest corpo haters have to acknowledge that in capitalism, things can move very fast if enough money is thrown at a problem, and very slowly if it isn’t. But the great thing about the Linux ecosystem is that almost everything is open source, so when you fund open source projects you accelerate their growth and quality, yet those projects still can’t screw you over as a user, because once they do, they can be forked and fixed. Proprietary closed-source software can always screw over the user, no one can prevent that, and it has a tendency to do just that. In the open source world there are very few black sheep with anti-user features, invasive telemetry and the like; in the corporate software world, it’s often the other way around.

    So by using Linux and (mostly) open source products, you as the user/admin remain in control, and it’s rare that you get screwed over. If you use proprietary software from big tech (doesn’t even matter which country) you lose control over your computing, it’s highly likely that you get screwed over in various ways (with much more to come in the future) and you’re also trusting those companies by running their software and they’re not even showing the world what they put in their software.


  • Clickbaity titles on videos and news sites are the new standard. I watched it. The point he’s making is basically that music was harder to make and produce some 50 years ago, so there was more incentive to “make it worth the effort” compared to today. His second point is that music consumption has become just as easy (listen to whatever you want, instantly) compared to when you could only listen to something by buying the physical album, so there’s also less incentive for the listener to really get involved with an album.

    Personally I think these are valid points on the surface, but they are not “the answer” to this kind of multi-faceted question. They’re at best a factor, and we don’t know how big that factor is. I also think one big reason he sees it that way is that he grew up in that environment, so he has a bias toward owning physical copies of albums.

    I also think music hasn’t gotten worse; the market is simply over-saturated because there’s way too much music, and you’ll never be able to listen to it all. There are absolutely hidden gems and really good bands/artists forming even today; it’s just much harder to find them. It’s a general problem of today’s age: whatever you’re looking for likely already exists, you just have to find it within a whole ocean of content.

    If you’re looking for innovative or non-standard stuff, you can always look at smaller artists or the indie scene; the same is true for movies, games and music. The big producers always tend to stick to what works and what’s proven to be popular, so everything becomes similar. But smaller artists don’t have to care about such things; they’re ready to risk much more, and in doing so they might just create a real gem, or something that has never (or almost never) been tried before.


  • The free software movement was started 40 years ago. We can’t just give up now. How many years should we wait? People are only becoming more dependent on computers and our problems keep getting worse. Windows users could have abandoned it many years ago, but they don’t care about freedom.

    It’s not about giving up. It’s about continuing the fight while also making sure people have real, tangible alternatives in the meantime. Look at GNU/Hurd - it might well never grow into something useful or competitive. Don’t put all your eggs in one basket. The first “goal” is to get rid of Windows, and Windows is, for the first time in about 30 years, losing one of its pillars (gaming) to Linux (and by extension macOS, because every non-Windows OS profits from these developments). It doesn’t matter that the overall situation isn’t perfect; it’s still real, tangible progress. There’s also the recent market share jump from under 1% (where it sat pretty much forever) to around 4%.

    I had the same feeling about 10 years ago, but users of proprietary software are willing to take a lot of abuse. It’s almost impressive how stubborn they are. This includes users of Reddit, Twitter, Apple and others. I don’t think Microsoft will lose any significant amount of users just by abusing them more, and when it comes to features, Windows is improving lately.

    Not by itself maybe, but in combination with Linux becoming more mainstream-viable, for sure. I’ve heard from so many long-time Windows users lately that they’re considering switching to Linux in the near future. I don’t think Windows has long left, except on business desktops, where organizations are usually vendor-locked-in with specialized applications; maybe that changes a generation later, when home users are no longer all guaranteed to be familiar with Windows as they are today. I also don’t think people will take much more abuse, and the EU is pushing back hard against abusive US companies. Also, if the AI copilot stuff blows up or doesn’t become popular enough, Microsoft will have put all their eggs in one basket in vain. Currently it seems more like a very expensive gimmick: who needs an AI copilot to empty the trash bin, change the font size or toggle dark mode? Sure, you’ll be able to talk to your bot, but everything you do will be harvested, and the gain you get from it is almost irrelevant. Maybe it could be useful if you have a disability.

    I agree that more freedom is better, but if people don’t understand the end goal, they will keep making the same mistakes. SteamOS is proprietary. Most of the popular GNU/Linux distros have proprietary software in their repositories. On mobile I see people switching from proprietary Android to proprietary Sailfish OS. They just keep falling in the same traps over and over again. Steam is one of those traps. If GNU/Linux became mainstream on desktop today, I have no doubt that it would be a proprietary distro. Then it will be only a matter of time before it turns into something even more proprietary like Windows. Because why wouldn’t it?

    I don’t think it would. It would be a mixture of libre software and proprietary software, which is still better than 100% proprietary software. The most important component is the OS itself.

    That’s why we must explain it to them. Some will listen and others will not, but there is nothing else we can do. We are doing our best to rival the proprietary apps, but it’s a battle we’ve been fighting for 40 years. There will always be something missing and even if there isn’t, it will always be inconvenient to switch from something you already know. Reddit users could switch to Lemmy, but they won’t. If at some point they decide to switch to some other proprietary alternative, that will not fix their problem. It will be only a matter of time before the other company or developer starts abusing them too.

    Yes, we must continue advocating for libre software. However, it’s still time to celebrate the beginning of the end of Windows.

    I know, but if we make compromises on our freedom, we will never keep it. The companies that make proprietary software will not let us. They could make money from developing libre software instead, but they don’t have to, because our society thinks non-free software is fine.

    We will keep enough freedom. It’s a gradient; the world isn’t black and white. Playing a proprietary game or playing back a BluRay on an otherwise fully free system is still much more progress than running 100% proprietary software. Change also won’t come in a perfect way. First, desktop Linux needs to fight Windows on equal footing, and that (unfortunately) means it needs to run whatever proprietary apps or games users still need. Otherwise they wouldn’t switch, and your utopia would remain a utopia without any measurable progress toward it.


  • https://piped.video/watch?v=KW6E51xXcWc for Valve’s contributions, by a KDE dev. According to a 2022 interview they pay over 100 open source developers working full-time on various important open source projects, from Mesa to Vulkan to AMD GPU drivers to KDE Plasma to gamescope to Wine to DXVK and VKD3D to you name it. The whole desktop ecosystem is benefitting from this, not just the Steam Deck, and not just gaming.

    I get that proprietary software and DRM are a general problem, and Steam is part of that problem, but completely getting rid of it is simply a battle for another time, further in the future. The first battle is to get Windows users to abandon their Microsoft/Apple cages, and that’s a win that’s actually within reach now. Windows is also getting worse by itself, further accelerating the change. That’s important, because running a proprietary OS is still much worse than running some proprietary applications or games on a free OS. A closed OS completely shifts control away from the user, leaving only what the developer allows you to do; it lets the developer push their own agenda by favoring their own applications, and it lets them establish proprietary APIs and libraries like DirectX, which was problematic for the competition for quite some time. Establishing Linux as a neutral, user-controlled, non-proprietary, much more trustworthy OS is the first step away from that. And to get there, users have to be able to run at least some of their usual applications or games on Linux as well; otherwise they simply won’t switch in the first place. For a regular user, using Linux cannot feel like a downgrade. A regular user doesn’t understand the ethics behind closed and open source and will never choose a worse free option over a better proprietary one. That means either the free options must become true rivals, or (the easier goal for now) the proprietary apps have to run on Linux just as well as people are used to.

    A “war” isn’t being won all at once instantly, but by winning several smaller battles after one another. Which takes time.



  • kyub@discuss.tchncs.de to Linux@lemmy.ml · Discord rich presence on linux (game activity)
    (edited · 8 months ago)

    Discord has a nice UI and lots of neat features, and it’s popular among gamers especially, but it can hardly be recommended. See https://www.messenger-matrix.de/messenger-matrix-en.html for a comparison with other communication programs; Discord has approximately the most red flags there can be. Discord is essentially spyware. It supports the least amount of encryption, security and privacy techniques of them all, and everything you type, write, say and show on it is processed and analyzed by the Discord servers and probably sold on to 3rd parties. Discord can’t make a living from selling paid features alone; they have to sell tons of user data, and since all data is basically unencrypted, everything’s free for the taking. Discord doesn’t even try to hide it in their terms of service: they plainly state they’re collecting everything. Well, at least they’re honest about it; it’s a minor plus. If I had to use Discord, I’d only ever use the web browser version, and I’d at least block its API endpoints for collecting random telemetry and typing data (it doesn’t only collect what you send, it also collects what you started typing).
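    For context, blocking that kind of endpoint in the browser version can be done with a content blocker rule. A hedged sketch for uBlock Origin’s “My filters” pane, based on the publicly known `/api/v*/science` telemetry route the Discord web client has used historically (the path is an assumption about current behavior and Discord can rename or move it at any time):

    ```
    ! Block Discord's telemetry ("science") endpoint in the web client.
    ! The path is based on the historically observed /api/v*/science route;
    ! it may change without notice, and this does not stop server-side logging
    ! of messages you actually send.
    ||discord.com/api/*/science$xhr
    ```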

    Matrix, on the other hand, is a protocol; Element is a well-known Matrix client implementing that protocol. On Matrix, everything is encrypted using quite state-of-the-art encryption. Technologically it’s much more advanced than Discord. It’s similar in scope, but it won’t reach feature parity with Discord: Discord is a much faster-moving target, and it’s much easier for the Discord devs because they need to take care of, oh, exactly nothing while developing it further, whereas adding a new feature to Matrix is much more complicated because almost everything has to be encrypted and still work for all the users in a chat channel.

    This is just broadly written for context. The two are similar, and you should prefer Matrix whenever possible, but I do get that Discord is popular, and as is the case with popular social media and communication tools, at some point you have to bite the bullet if you don’t want to be left out. I’m just urging everyone to keep their communication and usage on Discord to an absolute minimum, never install any locally running software from them (or at least sandbox it), and when chatting or talking on Discord, restrict yourself to the topics at hand (probably gaming) and don’t discuss anything else there. Discord is, by all measurements I know, the worst privacy offender I can think of. Even worse than Facebook Messenger, WhatsApp and the like, because those at least have some form of data protection implemented, even if they also collect a lot, especially metadata.



  • Choice of distro isn’t as important as it used to be. There’s containerization, and there’s distro-independent packaging like Flatpak or AppImage. Also, most somewhat popular distros can be made to run anything, even things packaged for other distros. Sure, you can make things easier for yourself by choosing the right distro for the right use case, but that’s unfortunately a process you need to go through yourself.

    Generally, there are three main “lines” of popular Linux distros: RedHat/SuSE (counted together because they share the RPM packaging format), Debian/Ubuntu, and Arch. Fedora and OpenSuSE are derived from RedHat and SuSE respectively; Ubuntu is derived from Debian but stands on its own feet nowadays (although the two will always be very similar); Mint and Pop!OS are both derived from Ubuntu, so they’ll always be similar to Ubuntu and Debian as well; and Endeavour is derived from Arch.

    I’d recommend using Fedora if you don’t like to tinker much, otherwise use Arch or Debian. You can’t go wrong with any of those three, they’ve been around forever and they are rock solid with either strong community backing or both strong community and company backing in the case of Fedora. Debian is, depending on edition, less up to date than the other two, but still a rock solid distro that can be made more current by using either the testing or unstable edition and/or by installing backports and community-made up to date packages. It’s more work to keep it updated of course. Don’t be misled by Debian’s labels - Debian testing at least is as stable as any other distro.
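    Enabling backports on Debian stable is a one-line apt source. A sketch assuming the current stable codename is “bookworm” (substitute yours):

    ```
    # /etc/apt/sources.list.d/backports.list
    deb http://deb.debian.org/debian bookworm-backports main
    ```

    After an `apt update`, backported packages are only installed when you ask for them explicitly, e.g. `apt install -t bookworm-backports <package>`, so the rest of the system stays on stable versions.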

    Ubuntu is decent, just suffers from some questionable Canonical decisions which make it less popular among veterans. Still a great alternative to Debian, if you’re hesitant about Debian because of its software version issues, but still want something very much alike Debian. It’s more current than Debian, but not as current as a rolling or semi-rolling release distro such as Arch or Fedora.

    OpenSuSE is probably similar in spirit and background to Fedora, but less popular overall, which is a minus because you’ll find less distro-specific help for it. Still, it may be a “hidden gem” - whenever I read about it, it’s always positive.

    Endeavour is an alternative to Arch, if pure Arch is too “hard” or too much work. It’s probably the best “Easy Arch-based” distro out of all of them. Not counting some niche stuff like Arco etc.

    Mint is generally also very solid and very easy, like Ubuntu, but probably better. If you want to go the Ubuntu route but don’t like Ubuntu that much, check out Mint. It’s one of the best newbie-friendly distros because it’s very easy to use and has GUI programs for everything.

    Pop!OS is another Ubuntu/Mint-like alternative, very current as well.

    For gaming and new-ish hardware support, I’d say Arch, Fedora or Pop!OS (and more generally, rolling / semi-rolling release distros) are best suited.

    Well that’s about it for the most popular distros.


  • kyub@discuss.tchncs.de to Linux@lemmy.ml: What is the /opt directory?

    Let’s say you want to compile and install a program for yourself from its source code form. There’s generally a lot of choice here:

    You could (theoretically) use / as its installation prefix, meaning its binaries would then probably go underneath /bin, its libraries underneath /lib, its asset files underneath /share, and so on. But that would be terrible because it would go against all conventions. Conventions (FHS etc.) state that the more “important” a program is, the closer it should be to the root of the filesystem (“/”). Meaning, /bin would be reserved for core system utilities, not any graphical end user applications.

    You could also use /usr as installation prefix, in which case it would go into /usr/bin, /usr/lib, /usr/share, etc. - but that’s also a terrible idea, because your package manager (or rather, the package maintainers of your distribution) uses that as its installation prefix. Everything underneath /usr (except /usr/local) is under the “administration” of your distro’s packages and package manager, so you should never put other stuff there.

    /usr/local is the exception. It’s where it’s safe to put any other stuff. Then there’s also /opt. Both are similar. Underneath /usr/local, a program is traditionally split up by file type - binaries go into /usr/local/bin, libraries into /usr/local/lib, and so on. As long as you made a package out of the installation, that’s not a big deal: your package manager knows which files belong to the program. It would be a big deal if you installed it without a package manager, though - then you’d probably be unable to find all the installed files when you want to remove them.

    /opt is different in that regard - here, everything belonging to a program lives underneath /opt/<programname>/, so all its files can easily be found. As a downside, you’d have to add /opt/<programname>/ to your $PATH if you want to run the program’s executable directly from the command line. So /opt behaves similarly to C:\Program Files\ on Windows, while the other locations are meant to be more Unix-style and split up each program’s files by type. But everything in the filesystem layout is a convention, not a hard and fast rule - you could change all of it, it’s just not recommended.
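    To make the /opt layout concrete, here’s a runnable sketch with a hypothetical program “fooapp” (demonstrated under /tmp so it works without root; a real install would live directly under /opt/fooapp):

```shell
# Hypothetical self-contained program "fooapp", kept under one directory
# (using /tmp/opt here so the sketch runs without root privileges)
PREFIX=/tmp/opt/fooapp
mkdir -p "$PREFIX/bin"

# Stand-in for the program's executable
printf '#!/bin/sh\necho "fooapp 1.0"\n' > "$PREFIX/bin/fooapp"
chmod +x "$PREFIX/bin/fooapp"

# Without a PATH entry the shell can't find "fooapp" by name;
# after adding the program's directory to $PATH, it can:
export PATH="$PATH:$PREFIX/bin"
fooapp   # prints: fooapp 1.0
```

    Removing the program is then a single `rm -rf` of that one directory, which is exactly the convenience /opt offers over the split-up /usr/local layout.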

    Another option altogether is to just install it on a per-user basis into your $HOME somewhere, probably with ~/.local as the installation prefix. Then you’d have binaries in ~/.local/bin/ (which is also where I place any self-written scripts and small single-file executables), etc. Using a hidden directory like .local also means you won’t visually clutter your home directory so much. Also, ~/.local/share, ~/.local/state and so on are already defined by the XDG FreeDesktop standards anyway, so ~/.local is a great prefix for installing stuff for your user only.
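    As a sketch, a per-user install with ~/.local as the prefix might look like this (assuming a typical autotools-based source tree; the build commands are illustrative, not any specific project’s instructions):

```shell
# Build and install a program for the current user only
./configure --prefix="$HOME/.local"
make
make install    # no root needed: binaries land in ~/.local/bin, data in ~/.local/share

# Ensure ~/.local/bin is on $PATH (many distros already do this in ~/.profile)
export PATH="$HOME/.local/bin:$PATH"
```

    The same `--prefix` idea applies to most build systems (e.g. CMake’s CMAKE_INSTALL_PREFIX), so nothing system-wide is touched and no root rights are required.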

    Hope that helps clear up some confusion. It’s still confusing overall, because the FHS is a historically grown standard and the Unix filesystem tree isn’t 100% rational or well thought out. Modern Linux distributions and packaging strategies mitigate some of its problems and try to make things more consistent (e.g. by symlinking /bin to /usr/bin and so on), but several issues remain, and 3rd-party applications installed via standalone scripts do what they want anyway. It’s a bit messy, but if you follow some basic conventions and sane advice, it’s only slightly messy: always prefer packages built for your distribution when installing new software, or distro-independent packages like Flatpaks. Only as a last resort should you run “installer scripts”, which do random things without your package manager knowing about anything they install - such scripts are the usual reason why things become messy or even break. And if you build software yourself, try to create a package out of it for your distribution and install that with your package manager, so that it knows about the files and you can easily remove or update them later.


  • OP is somewhat correct, but still “short-sighted”, with a misleading conclusion. All these valid downsides should be mentioned, but as always there are pros and cons to everything that have to be weighed against each other - and in Valve’s case, the pros still outweigh the cons.

    Valve has done a lot in the last ~10 years to push desktop Linux towards mainstream gaming viability, and several other things as well (an open source shader compiler, the Direct3D-to-Vulkan translation stuff, HDR support in KDE Plasma, lots of improvements to the open source AMD GPU drivers, and much more). You can’t simply disregard that. Sure, there are lots of companies involved in improving Linux - but mostly on the server side or in the enterprise desktop segment. Almost no big company invests a meaningful amount of resources into improving the common Linux desktop and challenging Windows’ dominance for home entertainment/gaming, read: the casual home user. Valve did just that, of course mostly for their own reasons, but those reasons still benefit general desktop Linux massively, and they are almost alone in doing so. And I probably don’t have to mention that a rich company investing lots of money really helps development speed. The development pace of the Linux kernel, for example, is only so high because many big corporations devote developers and resources to improving it for their own data center use cases. Almost no one (again, except Valve) pours any significant amount of resources or developers into the desktop Linux ecosystem and drivers so far.

    Look at GOG - in theory a shining example of how to do several things better than Valve (no DRM, etc.), but they still do close to nothing for desktop Linux, probably because they lack the resources or see it as wasted effort overall, like many companies do. It’s the typical chicken-and-egg problem: Linux won’t be better supported by companies until its market share grows, but its market share won’t grow until it is better supported by companies. The GOG Galaxy client probably still has no Linux version. That’s just how things have been for a long time, and I’m glad Valve is really serious about it and demonstrates publicly that this can work, as an example for other companies to look at. Their exact reasons or methods don’t even matter - we need companies pushing desktop Linux, or you can still sit in a corner in 2050 and cry about Windows’ dominance because nothing changes on a fundamental level fast enough. Which is why I see it as important to be favorable to Valve for doing this when no one else is. If you want things to change, then support changes that meaningfully contribute to Windows losing exclusive market share in areas like gaming, and tons of people will migrate away from Windows over time, because they will start seeing Linux as a viable, practical alternative, not just a theoretical one. Sure, always be mindful of any disadvantages. But please don’t act as if there weren’t any major advantages as well.

    Be glad for how things are developing currently. It could always be better, sure. But it could also be massively worse - and it has been, for a long time. It’s high time for change, and desktop Linux needs all the help it can get to become mainstream. It’s on its way there, thankfully, but that way hasn’t always been so clear: desktop Linux market share sat below 1% for many, many years, and only very recently has it made significant strides forward.





  • Not really relevant. The majority of teens isn’t able to make an informed decision about which is better anyway, and in fact neither of the two is recommended - unless you count AOSP-based distributions (based on the open source Android, without Google apps), in which case Android wins, of course. But comparing iOS to proprietary Android is like comparing two different diseases.

    So yeah, while statistics are interesting, it’s important not to read too much into them. Like, “the majority of teens dislikes jazz music” - well, it doesn’t really matter whether they dislike it or not. Popularity doesn’t necessarily represent quality. Sometimes, but certainly not always.

    In Germany the mobile landscape is more “diverse” - I’d say closer to 40%/60% iOS/Android from my own observations. And since we “care” “more” about privacy in schools and public institutions (we still care far too little, but Germany is at least known on average for doing more for data protection than other countries, so maybe that counts for something?), they are probably less iOS-infested, although I do know that some schools and public institutions use iOS devices. But I don’t think all of them do.


    1. False promises early on

    We desktop Linux users are partly to blame for this. Around 1998 there was massive hype and media attention towards Linux being this viable alternative to Windows on the desktop; a lot of magazines and websites claimed that. Well, in 1998 I can safely say Linux could be seen as an alternative, but not a mainstream-compatible one. 25 years later it’s much easier to argue that it is, because it truly is easy to use nowadays, but back then it certainly wasn’t yet. The sad thing is that we Linux users caused a lot of people to think negatively about desktop Linux, just because we tried pushing them towards it too early. A common problem in tech, I think: technology that isn’t quite ready yet gets hyped as ready. Which leads to the second point:

    2. FUD / lack of information / lack of access to good, up-to-date information

    People see low adoption rates, hear about “problems”, think it’s a “toy for nerds”, or still have an outdated view of desktop Linux. These things stick, and probably also cause people to think “oh yeah, I’ve heard about that, it’s probably nothing for me”.

    3. Preinstallations / OEM partnerships

    MS has a huge advantage here: a lot of really casual, ordinary users out there will just use whatever comes preinstalled on their devices, which in almost 100% of cases is Windows.

    4. Schools / education

    They still sometimes, or even often(?), teach MS product usage, to “better prepare the students for their later work life where they will almost certainly use ‘industry standard’ software like MS Office”. This gets them used to the combination of MS Windows and Office at an early age. A massive problem, and a huge failure of the education system to not be neutral in that regard.

    5. Hardware and software devs ALWAYS ensure that their stuff is compatible with Windows due to its market share, but often don’t ensure this for Linux, and whether 3rd-party drivers are feature-complete or even working at all is never certain

    So you still need to be a bit careful about what hardware and software you use on Linux, while on Windows it’s pretty much “turn your brain off, pick anything, it’ll work”. This is just a problem of adoption rate, though: as Linux grew, its compatibility grew as well, so the problem has already decreased by a lot. But until everything also works automatically on Linux, and until most devs port their stuff to Linux as well as Windows and OS X, desktop Linux will need even more market share. Since this is a known chicken-and-egg effect (Linux has low adoption because software isn’t available, but for software to become available, Linux market share needs to grow), we need to do it anyway, just to get out of that dilemma. Just like Valve did when they said one day “ok, f*ck this, we might have problems for our main business model when Microsoft becomes a direct competitor to Steam, so we must push towards neutral technologies, which means Linux”. And then they did, it worked out well for them, and the Linux community as a whole benefited from having more choice in which platforms their stuff can run on. Even if we’re talking about a proprietary application here, it’s still a big milestone when you can suddenly run so many more applications/games on Linux than before, and it drives adoption rates higher as well. So there you have a company that just did it, despite market share dictating that they shouldn’t have. More companies need to follow, because that will also automatically increase desktop Linux market share - it’s all inter-connected: more market share, more devs, more compatibility, more apps available, and so on. Just start doing it, goddamnit. Staying on Windows means supporting the status quo and not helping to make any positive progress.

    6. Either the general public needs to become more familiar with CLI usage (I’d prefer that), or Linux desktop applications need to become feature-complete enough that almost everything a regular user needs can also be done via GUI

    This is not yet the case, but it’s gotten better. Generally speaking: if you’re afraid of the CLI, Linux is probably not for you - but you shouldn’t be afraid of it. You aren’t afraid of chat prompts either, and most commands are easy to understand.

    7. The amount of choice the user is confronted with (multiple distros, desktop environments, and so on) can lead to option paralysis

    So people think they either have to research each option (extra effort required), or are likely to “choose wrong”, and then don’t choose at all. This is just an education issue though. People need to realize that this choice isn’t bad, but actually good, and a consequence of an open environment where multiple projects “compete” for the same spot. Often, there are only a few viable options anyway. So it’s not like you have to check out a lot. But we have to make sure that potential new users know which options are a great starting point for them, and not have them get lost in researching some niche distros/projects which they shouldn’t start out with generally.

    8. “Convenience is a drug”

    Which means a lot of people, even smart ones, will not care about any negatives as long as the stuff they’re using works without any perceived user-relevant issues. Which means: they’ll continue to use Windows even after it comes bundled with spyware, because they value the stuff “working” more than things like user control/agency, privacy, security and other more abstract values. This is problematic, because they place themselves in an absolute dependency which they can no longer escape, and where all sorts of data about their work, private life and behavior is being leaked to external 3rd parties. It also raises a high barrier to convincing them to become more technically independent: why should they make an effort to switch away from something that, in their eyes, works? This is a huge problem. It’s the same with Twitter/X or Reddit - not enough people switch away from those, even though it’s easy nowadays; even after so much negative press lately, most still stick around. It’s hard to get the general population moving to something better once they’ve settled on one thing. But thankfully, at least on Windows, the process of “enshittification” (forced spyware, bloatware, adware, cloud integrations, MS accounts) continues at a fast pace, which means many users won’t need to be convinced to use Linux - at some point they will be annoyed by Windows/Microsoft itself. Linux becoming easier to use and Windows becoming more annoying and user-hostile at the same time will thankfully accelerate the “organic” Linux growth process, but it’ll still take a couple of years.

    9. “Peer pressure” / feeling of being left alone

    As a desktop Linux user, chances are high that you’re an “outsider” among your peers, who probably use Windows. Not everyone can feel comfortable in such a role over a longer period of time. Again just a matter of market share, but it can still pose a psychological hurdle in some cases. Or it can lead to peer pressure: when some Windows game isn’t fully working for the Linux user, for example, there may be pressure to move (back) to Windows just to get that one thing working.

    10. Following the hype of new software releases and thinking that you always need the most features or that you need the “industry standard” when you don’t really need it.

    A lot of users probably prefer something like MS Office, with its massive feature set and “industry standard” label, over the libre/free office suites, because something with fewer features could be interpreted as worse. But here it’s important to educate such users that it only matters whether all the features they NEED are present - and if so, it doesn’t matter which suite they use. MS Office, for example, has a multi-year lead in development (it was already dominating the office suite market world-wide when Linux was still being born, so to speak), so of course it has accumulated more features over that long time, but most users don’t actually need them. Sure, everyone uses a different subset of features, but it’s at least likely that the libre office suites contain everything most users need. So it’s just about getting used to them - which is also hard: making the switch, changing your workflows, and so on. It would therefore be better if MS Office could run on Linux, so people could at least continue using it, even though that’s not recommended (proprietary, spyware, MS cloud integrations). Since I’m all for having more options, it would be better in general for it to be available as well. But until that happens, we need to tell potential new users that they can probably also live with the alternatives just fine.