• sosodev@lemmy.world · 56 points · 11 months ago

    It sounds like the model is overfitting the training data. They say it scored 100% on the test set, which almost always means the model has learned to ace the particular data it was built on but will flop in the real world.

    I don’t think we should put much weight on this news article. It’s just more overblown hype for the sake of clicks.
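
    A minimal sketch of what I mean (my own toy example, nothing to do with their actual model): give an over-parameterized model purely random labels and it will still "ace" the data it has effectively seen, while doing no better than chance on anything genuinely held out.

    ```python
    # Toy overfitting demo (hypothetical, unrelated to the article's model):
    # an unconstrained model memorizes random labels, so a near-perfect score
    # on the data it learned from says nothing about real-world performance.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))        # random features
    y = rng.integers(0, 2, size=500)      # random labels: nothing real to learn

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.15, random_state=0
    )

    model = DecisionTreeClassifier()      # unlimited depth -> can memorize
    model.fit(X_train, y_train)

    print("train accuracy:", model.score(X_train, y_train))  # ~1.00 (memorized)
    print("test accuracy: ", model.score(X_test, y_test))    # ~0.50 (chance)
    ```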

    • LostXOR@kbin.social · 8 points · 11 months ago

      The article says they kept 15% of the data for testing, so it’s not overfitting. I’m still skeptical though.
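
      One reason to stay skeptical even with a hold-out set (purely my own illustration, not something the article describes): a 15% test split only rules out overfitting if it’s genuinely independent of the training data. If near-duplicate records land on both sides of the split, a memorizing model can still score ~100% on the "unseen" portion.

      ```python
      # Hypothetical leakage sketch: exact duplicates leak across an 85/15 split,
      # so a memorizing model looks perfect on the hold-out despite random labels.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(1)
      base = rng.normal(size=(200, 10))
      labels = rng.integers(0, 2, size=200)     # random labels again

      # Duplicating rows before splitting is a classic leakage bug.
      X = np.repeat(base, 5, axis=0)
      y = np.repeat(labels, 5)

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.15, random_state=1
      )

      model = DecisionTreeClassifier().fit(X_train, y_train)
      print("hold-out accuracy:", model.score(X_test, y_test))  # ~1.0 despite no signal
      ```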

        • feedmecontent@lemmy.world · 1 point · 11 months ago

          The number of parameters required to overfit increases as the amount of data increases. Overfitting basically turns the model into an encoding of the training data (and I think that has actually been applied in this way?).
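
          Rough sketch of the scaling point (my own toy example): fix the model’s capacity and hand it random labels, so the only way to fit them is to memorize. It hits 100% on a small dataset, but training accuracy falls toward chance once the dataset outgrows what the parameters can encode.

          ```python
          # Hypothetical capacity sketch: a tree capped at 64 leaves (a fixed
          # "parameter budget") memorizes a small random dataset perfectly, but
          # its *training* accuracy drops as the data outgrows that budget.
          import numpy as np
          from sklearn.tree import DecisionTreeClassifier

          rng = np.random.default_rng(2)
          for n in (50, 500, 5000):
              X = rng.normal(size=(n, 10))
              y = rng.integers(0, 2, size=n)    # random labels: fitting = memorizing
              model = DecisionTreeClassifier(max_leaf_nodes=64).fit(X, y)
              print(f"{n:5d} samples -> train accuracy {model.score(X, y):.2f}")
          # ~1.00 at 50 samples, sliding toward 0.5 once capacity runs out
          ```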