CrowdStrike effectively bricked Windows, Mac, and Linux today.

Windows machines won’t boot, and Mac and Linux work is abandoned because all their users are on Twitter making memes.

Incredible work.

    • tiredofsametab@kbin.run
      2 months ago

      Good News! Unless something has changed since I worked in healthcare IT, those systems are far too old to be impacted!

      I’m half-joking. I don’t know what that kind of equipment runs, but I would guess something embedded. The nuke-med stuff was mostly Linux, and various lab analyzers were also something embedded, though they interface with all sorts of things (which can very well be Windows). Pharmaceutical dispensers ran various Linux-like OSes (though I couldn’t even tell you the names anymore). Some medical records stuff was also proprietary, but Windows was replacing most of it near the end of my time.

      One place we had still ran their entire keycard system on a Windows 3.1 box. I don’t doubt some modern systems are also running on Windows, which has interesting implications for getting into/out of places.

      That said, a lot of that stuff doesn’t touch the outside internet at all unless someone has done something horribly wrong. Medical records systems often do, though (including for billing and insurance stuff).

    • cheesepotatoes@lemmy.world
      2 months ago

      Good lord, I would hope critical surgical computers like that aren’t networked externally… Somehow I’m guessing I’m wrong.

      • FlowerTree@pawb.social
        2 months ago

        Critical surgery computers may also be running under Windows LTSC, so they might not get the CrowdStrike patch. Maybe…

        Edit: So the issue is apparently caused by CrowdStrike, meaning that unless the surgery computers also use CrowdStrike, they would be fine. Unless, of course, they do run CrowdStrike on surgery computers…

      • Gestrid@lemmy.ca
        2 months ago

        I’d heard some hospitals were affected. They cancelled appointments and non-critical surgeries.

        I’m guessing it was mostly their “behind the desk” computers that got affected, not the computers used to control the important stuff. The computers in patients’ rooms may have been affected as well, but (at least in the US) those are usually just used to record information about medicine given and other details about the patient, nothing critical that can’t be done manually.

    • half coffee@lemy.lol
      2 months ago

      Anecdotal, but my spouse was in surgery during the outage and it went fine, so I imagine they take precautions (like having a test machine to vet updates before installing anything on the real one, maybe).

      • Blank@lemmy.world
        2 months ago

        There were no test rings for this one, and it wasn’t a user-controlled update. It was pushed by CS in a way that couldn’t be intercepted/tested/vetted by the consumer unless your device either doesn’t have CS installed or isn’t on an external network… or I suppose you could block CS connections at the firewall. 🤷‍♂️

      • Zacryon@feddit.org
        2 months ago

        Depending on the machine, I guess it’s likely that those aren’t running Windoofs at all. I would be surprised if there were devices in use during surgery that run on it.

  • Klanky@sopuli.xyz
    2 months ago

    I wish my Windows work machine wouldn’t boot. Everything worked fine for us. :-(

    • Affidavit@lemm.ee
      2 months ago

      Could be worse. I was the only member of my entire team who didn’t get stuck in a boot loop, meaning I had to do their work as well as my own… Can’t even blame being on Linux, as my work computer is Windows 11; I just got ‘lucky’: a couple of BSODs, and the system restarted just fine.

      • Rivalarrival@lemmy.today
        2 months ago

        Funny, mine did a couple of BSODs then restarted just fine, at first. Then a fist-shaped hole appeared in the monitor and it wouldn’t turn on again.

        Weird bug.

  • PrettyFlyForAFatGuy@feddit.uk
    2 months ago

    As a career QA, I just do not understand how this got through. Do they not use their own software? Do they not have a UAT program?

    Heads will roll for this

    • HyperMegaNet@lemm.ee
      2 months ago

      From what I’ve read, it sounds like the update file that was causing the problems was entirely filled with zeros; the patched file was the same size but had data in it.

      My entirely speculative theory is that the update file they intended to deploy was okay (and possibly passed internal testing), but when it was being deployed to customers some error caused the file to be written incorrectly (or somehow a blank dummy file was used), meaning the original update could have been through testing but wasn’t what actually ended up being deployed to customers.
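
      (Purely illustrative, assuming the “all zeros” report is accurate: a pre-publish sanity check along the lines of the Python sketch below would catch a null-filled artifact before it ever left the build pipeline. The file name and the check itself are hypothetical, not anything CrowdStrike actually does.)

          from pathlib import Path

          def is_all_zero(path: Path, chunk_size: int = 1 << 20) -> bool:
              """Return True if the file is empty or contains only null bytes."""
              with path.open("rb") as fh:
                  while chunk := fh.read(chunk_size):
                      if any(chunk):  # any non-zero byte means there is real content
                          return False
              return True

          # Hypothetical gate in a release pipeline: refuse to publish a content
          # update that is nothing but zero padding.
          candidate = Path("channel_update.bin")  # placeholder name, not the real file
          if candidate.exists() and is_all_zero(candidate):
              raise SystemExit(f"refusing to ship {candidate}: file is empty or all zeros")

      Notably, a size check alone wouldn’t have helped here, since the broken file was reportedly the same size as the good one.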

      I also assume that it’s very difficult for them to conduct UAT, given that a core part of their protection comes from being able to fix possible security issues before they are exploited. Extensive UAT prior to deploying updates would both slow down the speed with which they can fix possible issues (and therefore allow more time for malicious actors to exploit them) and give malicious parties time to update their attacks in response to the upcoming changes, which may become public knowledge when they are released for UAT.

      There’s also just an issue of scale; they apparently release several updates like this per day, so I’m not sure how UAT could even be conducted at that pace. Granted, I’ve only ever personally been involved with UAT for applications that had quarterly (major) updates, so there might be ways to get it done several times a day that I’m not aware of.

      None of that is to take away from the fact that this was an enormous cock-up, and that whatever processes they have in place are clearly not sufficient. I completely agree that whatever they do to test these updates has failed in a monumental way. My work was relatively unaffected by this, but I imagine there are lots of angry customers who are rightly demanding answers for how exactly this happened and how they intend to avoid something like it happening again.

    • jabjoe@feddit.uk
      2 months ago

      The joke is that Mac and Linux users, who aren’t actually affected, are incapacitated because they’re busy gloating on social media.