I don’t get this. AI bros talk about how “in the near future” no one will “need” to be a writer, a filmmaker or a musician anymore, as you’ll be able to generate your own media with your own parameters and preferences on the fly. This, to me, feels like such an insane opinion. How can someone not value the ingenuity and creativity behind a work of art? Do these people not see or feel the human behind it all? And are these really opinions that you’ve encountered outside of the internet?

  • slowcakes@programming.dev

    Art is subjective, and "AI" is a buzzword; even simple if statements get called AI, especially in the gaming world.

    And in the current state of LLMs, the smartest and brightest in the industry have only managed to produce utter trash while sacrificing the planet and its inhabitants. I like your daughter more: she will create more value and, at the same time, not be a total corporate tool ruining the planet for generations to come. Mad respect.

    (not calling you a tool, just the people who work with LLMs)

    • canadaduane@lemmy.ca

      I do work with LLMs, and I respect your opinion. I suspect if we could meet and chat for an hour, we’d understand each other better.

      But despite the bad, I also see a great deal of good that can come from LLMs, and from AI in general. I appreciated what Sal Khan (Khan Academy) had to say about the big picture:

      There’s folks who take a more pessimistic view of AI, they say this is scary, there’s all these dystopian scenarios, we maybe want to slow down, we want to pause. On the other side, there are the more optimistic folks that say, well, we’ve gone through inflection points before, we’ve gone through the Industrial Revolution. It was scary, but it all kind of worked out.

      And what I’d argue right now is I don’t think this is like a flip of a coin or this is something where we’ll just have to, like, wait and see which way it turns out. I think everyone here and beyond, we are active participants in this decision. I’m pretty convinced that the first line of reasoning is actually almost a self-fulfilling prophecy, that if we act with fear and if we say, “Hey, we’ve just got to stop doing this stuff,” what’s really going to happen is the rule followers might pause, might slow down, but the rule breakers–as Alexander [Wang] mentioned–the totalitarian governments, the criminal organizations, they’re only going to accelerate. And that leads to what I am pretty convinced is the dystopian state, which is the good actors have worse AIs than the bad actors.

      https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education?subtitle=en