Hope this isn’t a repeated submission. Funny how they’re trying to deflect blame after they tried to change the EULA post-breach.

  • MimicJar@lemmy.world · 11 months ago

    I agree that, by all accounts, 23andMe didn’t do anything wrong. But could they have done more?

    For example, consider the 14,000 compromised accounts:

    • Did they all log in from the same location?
    • Did they all log in around the same time?
    • Did they exhibit strange login behavior, such as an account that always logged in from California suddenly logging in from Europe?
    • Did these accounts, after logging in, perform actions that seemed automated?
    • Did these accounts access more data than the average user?

    In hindsight some of these questions might be easier to answer. It’s possible a company with even better security could have detected and shut down these compromised accounts before they collected the data of millions of accounts. It’s also possible they did everything right.
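    As a rough illustration of how signals like the ones listed above could be combined, here is a purely hypothetical scoring sketch. The field names, weights, and thresholds are all invented for illustration; nothing here reflects 23andMe’s actual systems.

```python
# Hypothetical scoring of the login signals listed above.
# Field names and weights are invented; this is not 23andMe's logic.
from dataclasses import dataclass

@dataclass
class LoginEvent:
    country: str           # geolocated from the login IP
    hour_utc: int          # hour of day the login occurred
    records_accessed: int  # profiles pulled in this session

def anomaly_score(event: LoginEvent, usual_country: str,
                  avg_records: float) -> int:
    """Crude additive score: higher means more suspicious."""
    score = 0
    if event.country != usual_country:   # e.g. always California, suddenly Europe
        score += 2
    if 2 <= event.hour_utc <= 4:         # dead-of-night automation window
        score += 1
    if event.records_accessed > 10 * avg_records:
        score += 3                       # far more data than the average user
    return score

# New country + odd hour + 50x the usual data volume:
print(anomaly_score(LoginEvent("DE", 3, 500), "US", 10.0))  # prints 6
```

    A real system would weigh many more factors, but even a crude score like this separates a bulk scraper from a typical user session.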

    A full investigation makes sense.

    • Zoolander@lemmy.world · 11 months ago

      I already said they could have done more. They could have forced MFA.

      All the other bullet points were already addressed: the attackers used a botnet that, combined with each account’s “last login location”, let them use endpoints from the same country (and possibly even the same city) as that location over the course of several months. So, to put it simply: no, no, no, maybe (but no way to tell), maybe (but no way to tell).
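      To see why that approach defeats a country-level check, here’s a hypothetical sketch. The check logic and the botnet structure are invented for illustration only.

```python
# Hypothetical sketch: a naive country-level check, and why a botnet that
# picks exit nodes matching each account's last login location passes it.

def location_check_passes(login_country: str, last_login_country: str) -> bool:
    """Naive defense: only flag logins from a different country."""
    return login_country == last_login_country

# Attacker's side: pick a bot in the country recorded for the victim.
botnet_by_country = {"US": ["bot-us-1", "bot-us-2"], "FR": ["bot-fr-1"]}

def pick_endpoint(last_login_country: str):
    bots = botnet_by_country.get(last_login_country, [])
    return bots[0] if bots else None

# The stuffed login arrives from a matching country, so the check passes.
endpoint = pick_endpoint("US")
print(endpoint, location_check_passes("US", "US"))  # bot-us-1 True
```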

      A full investigation makes sense, but the OP is about 23andMe’s statement that the crux was users reusing passwords and not enabling MFA, and they’re right about that. They could have done more but, even then, there’s no guarantee that someone logging in with the right username/password combo would have been detected.

      • EssentialCoffee@midwest.social · 11 months ago

        I’m not sure how much MFA would have mattered in this case.

        The 23andMe login is an email address, and most MFA implementations seem to offer email as an option these days. If users are already reusing passwords, the bad actor already has a password that’s likely to work for the email accounts of the affected users too. Would MFA have brought the number down? Sure, but it doesn’t seem like it would’ve been the silver bullet everyone thinks it is.

        • Zoolander@lemmy.world · 11 months ago

          It’s a big enough deterrent to make the attack cumbersome. It’s not easy to automate pulling an MFA code from an email when there are different providers involved and all that. The people who pulled this off did it via a botnet, and I would be very surprised if that botnet was able to recognize an MFA prompt, log into the victim’s email, get the code, enter it, and then proceed. It seems like more effort than it’s worth at that point.

    • Monument@lemmy.sdf.org · 11 months ago

      Those are my questions, too. It boggles my mind that so many accounts didn’t seem to raise a red flag. Did 23andMe have any sort of suspicious-behavior detection?

      And how did those breached accounts access that much data without it being observed as an obvious pattern?

      • douglasg14b@lemmy.world · 11 months ago (edited)

        If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.

        The part that would probably look suspicious would be the increase in traffic from data exfiltration. However, that would probably be a low priority alert for most engineering orgs.

        Even less likely when you have a bot network performing normal-looking logins with limited data exfiltration over the course of multiple months, normalizing any monitoring and analytics baselines and rendering such alerting inert, since the traffic would appear normal.

        Setting up monitoring and analysis of user accounts, where they’re logging in from, and suspicious activity isn’t exactly easy. It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do this for them. And even if they had this set up (which I imagine they did), it was defeated.
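        As a toy example of why low-and-slow exfiltration defeats baseline alerting: suppose an alert fires only when a day’s traffic exceeds three times the trailing average. All numbers here are invented for illustration.

```python
# Toy baseline alert: flag a day only if its volume exceeds 3x the trailing
# 7-day average. All numbers invented for illustration.

def alert_days(daily_volumes, threshold=3.0):
    """Return indices of days whose volume exceeds threshold x trailing mean."""
    alerts = []
    for day in range(7, len(daily_volumes)):
        baseline = sum(daily_volumes[day - 7:day]) / 7
        if daily_volumes[day] > threshold * baseline:
            alerts.append(day)
    return alerts

# A sudden spike trips the alert...
print(alert_days([100] * 14 + [1000]))  # [14]
# ...but a slow ramp (about 10% growth per day) keeps the baseline moving
# with the traffic, so nothing fires even as daily volume grows ~280-fold.
ramp = [int(100 * 1.1 ** i) for i in range(60)]
print(alert_days(ramp))                 # []
```

        Stretching the scraping over months keeps each day within a small multiple of the “normal” it has itself been quietly inflating.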

        • sudneo@lemmy.world · 11 months ago

          If the accounts were logged into from geographically similar locations at normal volumes then it wouldn’t look too out of the ordinary.

          I mean, device fingerprinting is used for exactly this purpose. Then there is the geographic pattern, the IP reputation, etc. Any difference -> ask for MFA.
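          A minimal sketch of that step-up idea, with invented signal names (real systems score many more factors than these three booleans):

```python
# Minimal step-up sketch: challenge for MFA unless every signal matches.
# Signal names are invented; real systems score many more factors.

def require_mfa(fingerprint_matches: bool,
                country_matches: bool,
                ip_reputation_ok: bool) -> bool:
    """'Any difference -> ask MFA': challenge unless everything looks familiar."""
    return not (fingerprint_matches and country_matches and ip_reputation_ok)

print(require_mfa(True, True, True))   # False: known device, no challenge
print(require_mfa(False, True, True))  # True: new fingerprint -> challenge
```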

          It’s so difficult that most companies tend to just defer to large players like Google and Microsoft to do this for them.

          Cloudflare, Imperva, and Akamai all offer these services, I believe. These are some of the players who can help against this type of attack, plus of course in-house tools. If you decide to collect sensitive data, you should also provide appropriate security. If you don’t want to pay for such services, force MFA at every login.