• Delta Air Lines CEO Ed Bastian said the massive IT outage earlier this month that stranded thousands of customers will cost it $500 million.
  • The airline canceled more than 4,000 flights in the wake of the outage, which was caused by a botched CrowdStrike software update and took thousands of Microsoft systems around the world offline.
  • Bastian, speaking from Paris, told CNBC’s “Squawk Box” on Wednesday that the carrier would seek damages from the disruptions, adding, “We have no choice.”
  • Echo Dot · 5 months ago

    Delta could have spent any amount short of $500,000,000 on competent IT staffing and still come out ahead of letting this happen.

    I guarantee someone in their IT department raised the point that updates shouldn’t just be downloaded and applied blindly. I can guarantee they advised testing them first, because any borderline competent IT professional knows this stuff. I can also guarantee they were ignored.

    • ricecake@sh.itjust.works · 5 months ago

      Also, part of the issue is that the update rolled out in a way that bypassed deployments that had auto-updates disabled.

      You did not have the ability to disable this type of update or control how it rolled out.

      https://www.crowdstrike.com/blog/falcon-content-update-preliminary-post-incident-report/

      Their fix for the issue includes “slow rolling their updates”, “monitoring the updates”, “letting customers decide if they want to receive updates”, and “telling customers about the updates”.

      Delta could have done everything by the book regarding staggered updates and testing before deployment, and it wouldn’t have made any difference at all. (They’re an airline, so they probably didn’t, but it wouldn’t have helped if they had.)
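
      For a sense of what those fixes mean in practice, here’s a minimal sketch of a staged (“canary”) rollout in Python. Every name here is made up for illustration; it’s not CrowdStrike’s actual tooling, just the general technique their report describes:

      ```python
      import random

      def staged_rollout(hosts, apply_update, is_healthy,
                         stages=(0.01, 0.10, 0.50, 1.0)):
          """Push an update to successively larger fractions of `hosts`,
          halting if any freshly updated host fails its health check."""
          order = list(hosts)
          random.shuffle(order)  # don't always canary the same machines
          done = 0
          for fraction in stages:
              target = max(done + 1, int(len(order) * fraction))
              batch, done = order[done:target], min(target, len(order))
              for host in batch:
                  apply_update(host)
              if not all(is_healthy(h) for h in batch):
                  return f"halted at {fraction:.0%} with {done} hosts updated"
          return f"rollout complete: {done} hosts updated"

      # Toy usage: the "update" breaks every host it touches, so the 1% canary
      # stage catches it and the other 99% of the fleet never receives it.
      crashed = set()
      print(staged_rollout(
          hosts=[f"host{i}" for i in range(1000)],
          apply_update=crashed.add,
          is_healthy=lambda h: h not in crashed,
      ))
      ```

      As I read their post-incident report, the items above essentially amount to adding that canary stage, plus the customer opt-in, to the content updates that didn’t previously get it.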

      • corsicanguppy@lemmy.ca · 5 months ago

        “Delta could have done everything by the book”

        Except pretty much every paragraph in ISO 27002.

        That book?

        Highlights include:

        • ops procedures and responsibilities
        • change management (ohh, that’s a good one)
        • environmental segregation for safety (i.e. don’t test in prod)
        • controls against malware
        • INSTALLATION OF SOFTWARE ON OPERATIONAL SYSTEMS
        • restrictions on software installation (i.e. don’t have random fuckwits updating stuff)

        …etc. Like, it’s all in there. And I get that it’s super-fetch to do the cool stuff that looks great on a resume, but maybe, just fucking maybe, we should be operating like we don’t want to use that resume every 3 months.

        External people controlling your software rollout, by virtue of locking you into some cloud bullshit for security software, when everyone knows they don’t give a shit about your app’s security or your SLA?

        Glad Skippy’s got a good looking resume.

        • ricecake@sh.itjust.works · 5 months ago

          Yes, that book. Because the software indicated to end users that they had disabled, or otherwise asserted appropriate controls over, the system updating itself and its update process.

          That’s sorta the point of why so many people are so shocked and angry about what went wrong, and why I said “could have done everything by the book”.

          As far as the software communicated to anyone managing it, it should not have been doing updates, and CrowdStrike didn’t advertise that it updated certain definition files outside of the exposed settings, nor did they communicate that those changes were happening.

          Pretend you’ve got a nice little fleet of servers. Let’s pretend they’re running some vaguely responsible Linux distro, like CentOS or Ubuntu.
          Pretend that nothing updates without your permission, so everything is properly by the book. You host local repositories that all your servers pull from so you can verify every package change.
          Now pretend that, unbeknownst to you, Canonical or Red Hat had added a little thing to dnf or apt to let it install really important updates really fast, and it didn’t pay any attention to any of your configuration files, not even the setting that says “do not under any circumstances install anything without my express direction”.
          Now pretend they use this to push out a kernel update that patches your kernel into a bowl of lukewarm oatmeal and reboots your entire fleet into the abyss.
          Is it fair to say that the admin of this fleet is a total fuckup for using a vendor that, up until this moment, was generally well regarded, commonly used, and presented no real reason for doubt? Even though they used software that connected to the Internet, and maybe even paid for it?

          People use tools that other people build. When the tool does something totally insane that they specifically configured it not to do, it’s weird to just keep blaming them for not doing everything in-house. Because what sort of asshole airline doesn’t write their own antivirus?
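
          To make the apt/dnf analogy concrete, here’s the anti-pattern in a few lines of Python. This is entirely hypothetical code, not anyone’s real updater; it just shows what “a second update path that never reads your config” looks like:

          ```python
          # Hypothetical updater, for illustration only.
          config = {"auto_update": False}  # what the admin explicitly configured

          def apply_regular_update(pkg):
              # The documented path: honors the admin's setting.
              if not config["auto_update"]:
                  print(f"skipping {pkg}: auto-updates are disabled")
                  return
              print(f"installing {pkg}")

          def apply_rapid_content_update(pkg):
              # The undocumented path: never consults the config at all, so no
              # staging, testing, or policy on the admin's side can intercept it.
              print(f"installing {pkg} immediately, config ignored")

          apply_regular_update("sensor-7.16")        # respects the setting
          apply_rapid_content_update("definitions")  # bypasses it
          ```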

          • rekorse@lemmy.world · 5 months ago

            General practices aside, should they really not have planned any backup systems, though? CrowdStrike did not cause $500 million in damages to Delta; Delta’s disaster recovery response did.

            Where we draw the line there, though, I’m not sure. If you set my house on fire but the fire department just stands outside and watches it burn for no reason, who should I be upset with?

            • ricecake@sh.itjust.works · 5 months ago

              Well, in your example you should be mad at yourself for not having a backup house. 😛

              There are a lot of assumptions underpinning the statements around their backup systems. Namely, that they didn’t have any.
              Most outage backups focus on datacenter availability, network availability, and server availability.
              If your service needs one server to function, having six servers spread across two data centers, each with at least two ISPs, is cautious but prudent. Particularly if you’re set up to do rolling updates, so only one server should ever be “different” at a time, leaving you with a redundant copy at each location no matter what.
              This goes wrong if someone magically breaks every redundant server at the same time. The underlying assumption around resiliency planning is that random failure is probabilistic in nature, and so by quantifying your failure points and their failure probability you can tune your likelihood of an outage to be arbitrarily low (but never zero).
              If your failure isn’t random, like a vendor bypassing your update and deployment controls, then that model fails.
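
              Rough numbers make the difference obvious; the probabilities below are invented purely for illustration:

              ```python
              # Back-of-envelope comparison of independent vs. correlated failure.
              p_down = 0.01       # made-up chance one server is down in some window
              p_bad_push = 0.001  # made-up chance a vendor ships a fleet-breaking update
              n = 6               # redundant servers across two datacenters

              # Random, independent failures: an outage needs all six down at once.
              print(f"outage from independent failures: {p_down ** n:.1e}")  # 1.0e-12

              # Correlated failure: one bad push hits every server together, so the
              # redundancy doesn't reduce the probability at all.
              print(f"outage from one correlated event: {p_bad_push:.1e}")   # 1.0e-03
              ```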

              A second point: an airline uses computers that aren’t servers, and it needs them for operations. The ticketing agents, the gate crew managing seating and boarding, the ground crew filing routine inspection reports, the baggage handlers putting bags on the right cart to get them to the right plane, and the office workers making sure fuel is paid for and crews are ready when their plane shows up: all the stuff that goes into being an airline that isn’t actually flying planes.
              All these people need computers, and you don’t typically issue someone a redundant laptop or desktop computer. You rely on hardware failures being random, and hire enough IT staff to manage repairs and replacement at that expected cadence, with enough staff and backup hardware to keep things running as things break.
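
              The same idea in numbers for the endpoint fleet; again, the figures are invented, but they show why staffing for random breakage doesn’t cover a correlated event:

              ```python
              # Made-up numbers for sizing repair capacity around random failures.
              fleet_size = 20_000          # laptops, gate terminals, ground-crew devices
              annual_failure_rate = 0.05   # ~5% of endpoints die per year, at random

              per_week = fleet_size * annual_failure_rate / 52
              print(f"expected failures per week: {per_week:.0f}")  # ~19, easily staffed

              # A correlated event like a bad kernel-level update needs hands-on
              # recovery on every affected machine at once.
              print(f"machines needing hands-on fixes after one bad push: {fleet_size}")
              ```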

              Finally, if all you know is “computers are turning off and not coming back online”, your IT staff is swamped, systems are variously down or degraded, and staff in a bunch of different places are reporting that they can’t do their jobs, then your system is in an uncertain and unstable position. This is not where you want a system with strict safety requirements to be, so the only responsible action is to halt operations, even if things start to recover, until you know what’s happening, why, and that it won’t happen again.

              As more details have come out about the issues that Delta is having, it appears that it’s less about system resiliency, although needing to manually fix a bunch of servers was a problem, and more that the scale of flight and crew availability changes overloaded that aforementioned scheduling system, making it difficult to get people and planes in the right place at the right time.
              While the application should be able to more gracefully handle extremely high loads, that’s a much smaller failure of planning than not having a disaster recovery or redundancy plan.

              So it’s more like I built a house with a sprinkler system, and then you blew it up with explosives. As the fire department and I piece it back together, my mailbox fills with mail and tips over into a creek, so I miss paying my taxes and need to pay a penalty.
              I shouldn’t have had a crap mailbox, but it wouldn’t have been a problem if you hadn’t destroyed my house.

              • rekorse@lemmy.world · 4 months ago

                First, thank you for taking the time to type all of that out.

                I think I follow your theory well enough, but (I know this is 2 weeks later, so I won’t look up any new information) I was under the impression Delta was an outlier in their response compared to other airlines.

                And one point about redundancies: why shouldn’t they consider a single operating system a single failure point? If all 6 servers in the multiple locations run Windows, and Windows fails, that’s awful, right? Can they not dual boot or have a second set of servers? I do this in my own home, but maybe that’s not something that scales well.

                I’m interested in whether your opinion has changed now that there’s been a bit of time for more data to come out.

                • ricecake@sh.itjust.works · 4 months ago

                  You are correct that Delta was an outlier, but not with regard to the scale of the outage; it was that their scheduling software was down far longer and that they handled a lot of the customer side of things significantly less well.

                  Generally, your protection against operating system issues is the aforementioned restriction on changes and how they go out.
                  If something is stable, you can expect it to remain stable unless something changes or random chance breaks something.
                  The operational cost of running multiple operating systems in production like you describe would be high. Typically software is only written to work on one platform, and while it can be modified to work on others, it’s usually a cost with no benefit outside of a consumer environment.
                  Different operating systems have different performance characteristics you need to factor in for load scaling, different security models, and different maintenance requirements.
                  Often, but not always, server administrators will focus on one OS, so adding more to the mix can mean people are rusty with whichever is your backup, which can be worse than just focusing on fixing the issue with the primary.
                  OS bugs are rare, and they usually manifest early or randomly. It’s why production deployments tend to keep the same OS as long as it’s supported: change means learning the new issues, and you’ve probably already encountered all the bullshit with what you’re currently using. That’s why the Linux distros tend to have long-term support versions, and Windows Server tends to just get support for a long time with terrible documentation.

                  I’m a Linux guy, so defending Windows feels weird, and I want to include that I don’t think anyone should use it, particularly for a server, but the professional in me acknowledges that it’s a perfectly functional hammer.

                  As we’ve learned more, I’ve become more disparaging of Delta’s choice not to keep the scheduling system modernized in a way that could recover faster, and not to invest enough in making systems homogeneous across different airports. I still think these issues are largely independent of their actual disaster recovery or resiliency plans.
                  Inevitably, the lawsuits will determine that the blame for the damage is split between the two of them. My bet is 70/30 CrowdStrike/Delta, since it’s easy to demonstrate that the issue was fundamentally caused by CrowdStrike and negatively impacted other airlines and businesses in general. Some of it was clearly Delta’s fault for failing to keep a system modernized enough to handle a massive shift like this, and that system would have been similarly disrupted by any outage involving mass flight cancellations.

                  • rekorse@lemmy.world · 4 months ago

                    Would you say that a forced-update OS error like this is so rare that Delta didn’t need to plan for it? If I understand you right, it’s not actually a problem that Delta used Windows for their servers, at least not to the point that it would affect liability.

                    If Delta was the only airline that set up their infrastructure this way, to the point it was markedly different from other companies, could you argue they essentially didn’t protect themselves at all?

                    I’m still having a lot of trouble figuring out how CrowdStrike would even assess a risk like this if the possible payout is based on how well a company recovers and how much income it lost.

                    I actually agree with your 70/30 split, but unless Delta paid more than the other airlines to justify the payout in damages, it’s still confusing to me how the amount CrowdStrike has to pay depends to some degree on Delta’s setup and restoration.

                    I think there’s just not any better way to handle this, and I’m searching for an answer that doesn’t exist.

      • Dran@lemmy.world · 5 months ago

        Yes, the incompetence was a management decision to allow an external vendor to bypass internal canary deployment processes.

      • Echo Dot · 5 months ago

        If you own the network, you can prevent anything you want.