IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”

He isn’t alone. An administrator on Reddit said 40 percent of their servers were affected, while roughly 70 percent of client computers, approximately 1,000 endpoints, were stuck in a boot loop.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

  • Disaster@sh.itjust.works

    80% of our machines were hit. We were still working at 9pm on Friday night, running around entering BitLocker keys and applying the fix. Our organization made it worse by hiding the BitLocker keys from local administrators.

    Also gotta say… the way the boot sequence works, combined with the nonsense with RAID/NVMe drivers on some machines, really made it painful.

  • TheObviousSolution@lemm.ee

    It might be CrowdStrike’s fault, but maybe this will motivate companies to adopt better workflows and actual preproduction deployments to test this sort of update before it goes live on the rest of their systems.
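The preproduction idea above amounts to a promotion gate: push the update to a small ring first, let it soak, and only widen the rollout if nothing breaks. Here is a minimal sketch of such a gate, where deploy_to() and healthy() are hypothetical stand-ins for whatever deployment and monitoring tooling an organization actually uses.

```python
# Sketch of a ring-based promotion gate: an update only reaches the broad
# fleet after each earlier ring has soaked without failures. deploy_to()
# and healthy() are hypothetical stubs to be wired to real tooling.
import time

RINGS = ["preprod", "canary", "broad"]  # hypothetical ring names
SOAK_SECONDS = 4 * 60 * 60              # how long each ring must stay healthy

def deploy_to(ring: str, update_id: str) -> None:
    raise NotImplementedError("wrap your deployment tool here")

def healthy(ring: str) -> bool:
    raise NotImplementedError("wrap your monitoring/telemetry here")

def staged_rollout(update_id: str) -> bool:
    for ring in RINGS:
        deploy_to(ring, update_id)
        deadline = time.time() + SOAK_SECONDS
        while time.time() < deadline:
            if not healthy(ring):
                print(f"{update_id}: {ring} unhealthy, halting rollout")
                return False
            time.sleep(60)  # poll health once a minute during the soak
        print(f"{update_id}: {ring} soaked cleanly, promoting")
    return True
```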

    • EnderMB@lemmy.world

      I know people at big tech companies who work on client engineering, where this downtime has huge implications. Naturally, they’ve called a sev1, but instead of dedicating resources to fixing these issues, the teams are basically bullied into working insane hours to manually patch while clients scream at them. One dude worked 36 hours straight because his manager outright told him “you can sleep when this is fixed”, as if he’s responsible for CrowdStrike…

      Companies won’t learn. It’s always a calculated risk, and much of the fallout of that risk lies with the workers.

  • rozodru@lemmy.ca

    “We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.”

    Back up your backups. I mean, I don’t work the IT side and I’m a developer, but… isn’t it common sense not to rely 100% on one service to store keys that you potentially can’t log into? For me, if I have a key I need to decrypt something, hell, even to log into Discord if my 2FA fails, I store it on a USB drive. If something I’m using says “you’ll need a key for backups just in case”, OK cool, the key goes on the drive.

    Also, Microsoft should be getting just as much flak as CrowdStrike is right now. BitLocker is god awful, and the fact that many devices need decryption keys simply to boot into safe mode is stupid. I remember when I still used Win11 and I fucked something up, and I discovered for the first time that I needed a BitLocker code just to get into safe mode or recovery mode. I had no idea and thought it was so stupid. Just to get into safe mode? Really?

  • scottywh@lemmy.world

    If it only impacts a percentage of your machines then there was a problem in the deployment strategy or the solution wasn’t worthwhile to begin with.

    • phoenixz@lemmy.ca

      … So your point was that it would have been better if everything went down?

      There are plenty of reasons why deployments are done in stages, and I’m guessing that after today strategies will change to apply updates in groups to avoid everything going down at once.

      Also, dear God, stop using Windows as a server, or even as a client for that matter. If you’re paying actual money to get this shit, then the results are on you.
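The “apply updates in groups” point above needs some way to split a fleet into stable slices. One common approach, sketched here and not tied to any vendor’s mechanism, is to hash the hostname into a wave number so assignment is deterministic across runs and wave 0 can serve as the canary group.

```python
# Sketch: deterministically split a fleet into update waves so a bad update
# can only hit one slice at a time. Hashing the hostname keeps assignment
# stable; wave 0 acts as the canary group that gets updates first.
import hashlib

def rollout_wave(hostname: str, waves: int = 4) -> int:
    digest = hashlib.sha256(hostname.strip().lower().encode()).hexdigest()
    return int(digest, 16) % waves

# Example: group an inventory of hosts (names are made up) into waves.
fleet = ["wh-pos-001", "wh-pos-002", "dc-01", "app-frontend-07"]
for host in fleet:
    print(host, "-> wave", rollout_wave(host))
```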

    • CaptPretentious@lemmy.world

      In the corporate world, Windows very much gets used. I know Lemmy likes to circlejerk around Linux, but in the corporate world you find various OSes on both desktops and servers. I had to support several different OSes and developed for only two. They all suck in different ways; there are no clear winners.

      • Dark Arc@social.packetloss.gg

        It’s not just a circle jerk in this case. Windows is dominant for desktop usage but Linux has like 90% of the server market and is used for basically all new server projects.

        Paying for Windows licensing when it doesn’t benefit you is silly, and that’s been recognized for years.

    • EnderMB@lemmy.world

      To preface, I want to see a tech workers union so, so bad.

      With that said, I genuinely don’t believe that most tech workers would unionize. So many of them are brainwashed into thinking that a union would dictate all salaries, force hiring to be domestic-only, or guarantee jobs for life for incompetent people. Anyone who knows what a union does in 2024 knows that none of that has to be true. A tech union only needs to be a flat monthly fee, guaranteed access to a lawyer with experience in your cases/employer, and the opportunity to strike when a company oversteps. It’s only beneficial.

      Even if you could get hundreds of thousands of signatories, the recent layoffs have shown that tech companies at the highest level would gladly fire a sizable number of employees if it meant stamping out a union. As someone who has conducted interviews in big tech, at peak the sheer number of people who had applied for some roles was higher than the number of active employees in the whole company. In theory, Google could terminate everyone and replace them with brand-new workers in a few months. It would be a fucking mess, but it shows (in theory) that if a Google or Apple decided it wanted no part of unions, it could just dig into its fungible talent pool, fire a ton of people, promote those who stayed, and fill roles with foreign or under-trained talent.

      • slacktoid@lemmy.ml

        I feel you with this. They do not see themselves as workers. Thank you for the preface.

  • catloaf@lemm.ee

    “We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.”

    Someone never tested their DR plans, if they even have them. Generally, locking your keys inside the car is not a good idea.

    • Zron@lemmy.world

      I remember a few career changes ago, I was a back room kid working for an MSP.

      One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.

      I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.

      It was our air-gapped encryption key backup.

      I feel like that shitty company was somehow better prepared for this than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.

    • ripcord@lemmy.world

      They also don’t seem to have a process for testing updates like these…?

      This seems to expose some really shitty testing practices at a ton of IT departments.

        • ripcord@lemmy.world

          I’ve heard differently. But if it’s true, that should have been a non-starter for the product for exactly reasons like this. This is basic stuff.

          • Entropywins@lemmy.world

            Companies use CrowdStrike so they don’t need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.

            • ripcord@lemmy.world

              Not bothering to do basic, minimal testing - and other mitigation processes - before rolling out updates is absolutely terrible policy.

            • hangonasecond@lemmy.world

              Automatic updates should still have risk mitigation in place, and the outage didn’t only affect small businesses with no cyber security capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.

              • kent_eh@lemmy.ca

                “Outsourcing does not mean closing your eyes and letting the third party do whatever they want.”

                It shouldn’t, but when the decisions are made by bean counters and not people with security knowledge, things like this can easily (and frequently) happen.

    • JasonDJ@lemmy.zip

      I get storing BitLocker keys in AD, but as a net admin and not a server admin… what do you do with the DCs’ keys? USB storage in a sealed envelope in a safe (or at worst, a locked file cabinet drawer in the IT manager’s office)?

      Or do people forgo running BitLocker on servers, since encrypting data at rest can be compensated for by physical security in the data center?

      Or do DCs run on SEDs (self-encrypting drives)?
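For the “sealed envelope in a safe” approach asked about above, the part that usually goes untested is whether the offline copy is still readable when it is finally needed, which is also the DR-drill gap flagged earlier in the thread. Below is a minimal sketch of a periodic check, with a hypothetical escrow path and a placeholder for the checksum that would have been written on the envelope when the key material was exported.

```python
# Sketch of a periodic DR drill check for the "USB in a sealed envelope"
# approach: confirm the offline escrow file is still readable and matches
# the checksum recorded on the envelope, without needing AD at all.
# The path and checksum below are hypothetical placeholders.
import hashlib
import sys
from pathlib import Path

ESCROW_FILE = Path(r"E:\bitlocker-escrow\dc-01.txt")  # hypothetical location
ENVELOPE_SHA256 = "replace-with-the-checksum-written-on-the-envelope"

def drill() -> int:
    if not ESCROW_FILE.is_file():
        print(f"FAIL: {ESCROW_FILE} missing or unreadable")
        return 1
    actual = hashlib.sha256(ESCROW_FILE.read_bytes()).hexdigest()
    if actual != ENVELOPE_SHA256:
        print("FAIL: escrow file does not match the recorded checksum")
        return 1
    print("OK: offline recovery material present and intact")
    return 0

if __name__ == "__main__":
    sys.exit(drill())
```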