(despite david simon being a shitlib)

  • Chronicon [they/them]@hexbear.net
    65 points · edited · 5 months ago

    I’ve heard some people say they like it as an idea generator like this, but I just can’t imagine being a self-respecting artist or professional writer and admitting that you use AI, even in that limited capacity. It’s like admitting you’ve given up and would rather rewrite some bland, statistically average slop in your own words than find any other way to get past a little bit of writer’s block (like, say, taking a break. Who would ever want to take a break and risk reducing my productivity? Heresy to the neolib striver).

    • SoyViking [he/him]@hexbear.net
      29 points · 5 months ago

      And even if you do use it as an idea generator, the ideas you get are mediocre, boring ones. You never get anything unique or interesting.

    • save_vs_death [they/them]@hexbear.net
      23 points · 5 months ago

      I remember trying it for some very basic TTRPG campaign prep and it gave me the most hokey, by-the-numbers, boring, derivative shit I’ve ever seen. Keep in mind that TTRPG writing is a genre that mostly consists of ripping off books and movies you liked anyway, and it couldn’t even clear that very low bar. I’m genuinely concerned that idea scroungers have never heard of hitting the “random” button on TVTropes.

      • KobaCumTribute [she/her]@hexbear.net
        13 points · 5 months ago

        I remember trying it for some very basic TTRPG campaign prep

        When I was GMing, I really liked GPT-2 for churning out nonsense to fill in unimportant details on the fly while riffing on ideas with my players to build sessions. Sometimes I’d have a good idea for a run; other times I’d just ask the players what sort of run they wanted and workshop ideas with them until we had something we liked, then I’d (openly) get some stilted, bizarre blurbs from GPT-2 to give it a little backstory and flavor.

        But that was also relying on how flawed and weird GPT-2 was, and on how well the absurdity of its gibbering meshed with the tone we were setting. I feel like if one were to try to use ChatGPT for the same thing, it would just be dull instead of producing entertainingly absurd nonsense.

      • novibe@lemmy.ml
        4 points · edited · 5 months ago

        It was useful for generating names, and also for going more in depth on geography.

        For the first, I did things like: give me a list of fantasy names vaguely inspired by Ancient Greek/proto-IE/Sanskrit, etc. Also based on characteristics, like “give me a name based on PIE that refers to black beard/holy land”, etc.

        For the second, I did things like vaguely explain the geographical characteristics of an area, then ask whether that is realistic and what the climate and biosphere would look like taking x, y, or z into account.

        It worked… ok-ish.

        For the first, it made up a lot of fake shit. I even went down some rabbit holes asking for sources, sometimes finding them (in, like, French from 50 years ago) and sometimes not finding them at all.

        For the second, I honestly don’t know. It was convincing? I don’t know enough about geography to really tell, tho.

    • NephewAlphaBravo [he/him]@hexbear.net
      19 points · 5 months ago

      I imagine using it as an “idea generator” is more like a programmer’s rubber duck. It’s not actually giving you anything you’d use directly, it’s a way to think out loud and maybe jog your memory, or potentially connect ideas you hadn’t thought to connect.

    • NoisyOwl [he/him]@hexbear.net
      14 points · 5 months ago

      It’s trash as an idea generator.

      The only useful thing I’ve gotten out of a (text) AI is asking it to guess what keyword mechanics in games mean. For example, I was designing personality traits for AI leaders in a strategy game and had a dozen bad candidate names for “overproduces defenses.” So I told ChatGPT to try to guess the meanings of bunkerist, hoxhaist, prepper, turtle, protectionist, survivalist, isolationist, and guardian, which did narrow it down to bunkerist, turtle, and protectionist (note that this is literally wrong in the case of protectionist). Normally I’d try to poll a bunch of random people for this sort of thing, while avoiding anyone who’s trying to be clever, so it did save some work there.
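
      For anyone curious, a minimal sketch of that kind of poll is below, assuming the openai Python client; the model name and prompt wording are just placeholders.

      # A minimal sketch of the "guess the trait name" poll, assuming the openai
      # Python client; the model and trait list are placeholders.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      candidates = [
          "bunkerist", "hoxhaist", "prepper", "turtle",
          "protectionist", "survivalist", "isolationist", "guardian",
      ]

      # Ask cold, with no extra context, what each name implies. The goal is not
      # a correct answer but a proxy for how a naive reader parses the name.
      prompt = (
          "These are personality traits for AI leaders in a strategy game. "
          "For each one, guess in a single sentence what gameplay behaviour it implies:\n"
          + "\n".join(f"- {name}" for name in candidates)
      )

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder; any chat model would do
          messages=[{"role": "user", "content": prompt}],
      )
      print(response.choices[0].message.content)

      The output isn’t authoritative; it’s just a cheap stand-in for polling people who have no context.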

      It won’t come up with anything useful going the other way around though (“list some possible names for traits of AI leaders in a strategy game”). Like I said, it doesn’t work as an idea generator.

      I guess in general it’s probably useful if you’re in a situation where you need to make sure your writing is very very clear. If ChatGPT can correctly summarize what you wrote, it’s probably safe for people who are distracted or bad at reading or whatever.

    • DamarcusArt@lemmygrad.ml
      12 points · 5 months ago

      I’ve looked into using it to save time, the way this interviewer recommends, and I must say “I would rather put a gun in my mouth” is a pretty accurate response to how it makes you feel as a writer. The only thing it is good for is when I’m feeling incompetent, I plug my ideas into an AI and the garbage it generates makes me feel way better about my own writing skills.

      • Chronicon [they/them]@hexbear.net
        4 points · 5 months ago

        The only thing it is good for is when I’m feeling incompetent, I plug my ideas into an AI and the garbage it generates makes me feel way better about my own writing skills.

        data-laughing

      • ChaosMaterialist [he/him]@hexbear.net
        3 points · 5 months ago

        I plug my ideas into an AI and the garbage it generates makes me feel way better about my own writing skills.

        Fuck it, I’m going to give AI the W here. You read it here first, folks! Making people feel better by being terrible is the first legitimately good use of AI I’ve heard of yet.

    • Rai@lemmy.dbzer0.com
      10 points · 5 months ago

      I like this place and I see it on /all/ often with some good stuff (I’ve never listened to the podcast… yet?), but this reads like a foreign language to me. Or maybe I’m having a stroke?!

        • Rai@lemmy.dbzer0.com
          7 points · 5 months ago

          First of all, thank you.

          Second of all, BLADE RUNNER IS BASED ON ELECTRIC SHEEP?! I loved that book and have never seen Blade Runner!

          Added to my list, thanks again!

        • Rai@lemmy.dbzer0.com
          6 points · 5 months ago

          Also lawl, my last comment before this one, a little bit ago on another post, was praising Electric Sheep. Weird coincidence.

          Also again: Electric Sheep was a dope screensaver thingy when I was a stoner-ass 20s bitty

  • Llituro [he/him, they/them]@hexbear.net
    27 points · 5 months ago

    the only use of ai that i think is even remotely useful is programmers using it to help write new code. not people who aren’t experienced at software development, mind you, they don’t get much out of chatgpt, but someone who knows what they’re doing using copilot to copy-paste someone’s completely correct implementation, that seems useful. at least according to the people i’ve talked to.

    • SoyViking [he/him]@hexbear.net
      31 points · 5 months ago

      It is very useful for coding because that is one of the few places where unoriginal repetitive solutions are often desirable. But even with coding you have to know what to tell the LLM to do and you have to be able to read and understand the output to make sure it works as intended.
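
      As a concrete (hypothetical) illustration, the kind of unoriginal, repetitive code an LLM reliably gets right is boilerplate like this made-up data holder:

      # Hypothetical example of the repetitive code an LLM handles well:
      # a plain data holder with JSON (de)serialization helpers.
      from dataclasses import dataclass, asdict
      import json

      @dataclass
      class Subscription:
          customer_id: int
          plan: str
          active: bool = True

          def to_json(self) -> str:
              return json.dumps(asdict(self))

          @classmethod
          def from_json(cls, raw: str) -> "Subscription":
              return cls(**json.loads(raw))

      Nothing in there is project-specific, which is exactly why the model has seen the pattern a thousand times.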

      LLMs are a useful tool for programmers to automate repetitive tasks, but they are nowhere near being able to produce usable applications by themselves. I am not worried that I’ll be replaced by a robot anytime soon.

      Those who should be worried about their jobs are people in places like customer support, or government services directed at people who don’t matter to the ruling class. In those cases the powers that be have little holding them back from replacing human interactions with significantly worse interactions with an LLM. Nobody important gives a shit if some schmuck can’t cancel their cable subscription or gets their employment benefits cut because the computer had a hiccup.

    • macerated_baby_presidents [he/him]@hexbear.net
      22 points · edited · 5 months ago

      IMO no, for two reasons:

      • reading code is harder than writing it. If the AI writes you a standard implementation, you still have to read it to make sure it’s correct, so that’s more work than just doing it yourself.
      • AI will produce code that looks right. Since it can’t understand anything, that’s all it does: next most likely token == most correct-looking solution. But when the obvious solution is not the right one, you now have deceptively incorrect code, specifically and solely designed to look correct (see the sketch after this list).
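
      Here’s a toy Python sketch of that failure mode, with the bug planted deliberately; it’s the shape of thing an autocomplete tends to produce, and it reads as correct at a glance:

      # Plausible-looking binary search; it reads as correct,
      # but the loop condition is subtly wrong.
      def find_index(sorted_items, target):
          lo, hi = 0, len(sorted_items) - 1
          while lo < hi:                      # bug: should be `lo <= hi`
              mid = (lo + hi) // 2
              if sorted_items[mid] == target:
                  return mid
              elif sorted_items[mid] < target:
                  lo = mid + 1
              else:
                  hi = mid - 1
          return -1                           # misses a target sitting exactly at lo == hi

      # find_index([1, 3, 5], 5) returns -1 instead of 2: correct-looking, wrong.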

      I’ve never used Copilot myself but pair programmed with someone who used it, and it seemed like he spent more time messing with the output than it would have taken to write it himself.

    • Cysioland@lemmygrad.ml
      7 points · 5 months ago

      I use JetBrains’ “local LLM” thingy and it’s good at suggesting the very obvious, trivial code that I would write anyway, so it just saves me keystrokes.

    • gaycomputeruser [she/her]@hexbear.net
      5 points · 5 months ago

      It’s clearly become a crutch for some programmers. I remember talking to someone who does AI research and who openly admitted that most of the people in their lab couldn’t code and that the outputs from ChatGPT were sufficient for their work.

  • FnordPrefect [comrade/them, he/him]@hexbear.net
    26 points · 5 months ago

    "You’re right, this is great! It’s never been so easy to make sure I’m not just throwing up stale “art by committee” tropes and drivel. What a time saver! Wait, you meant to actually use them? point-and-laugh-1point-and-laugh-2 "

  • DragonBallZinn [he/him]@hexbear.net
    23 points · 5 months ago

    I always say about AI, “don’t they have anything more important to automate?”

    If we are told that art is silly and that only a lucky few can ever make a career out of it, then why is automating art the top priority?

  • DamarcusArt@lemmygrad.ml
    11 points · 5 months ago

    I love that Shapiro gives an example of one of the things AI is worst at in creative writing. AI is terrible at linking two unrelated scenes together. All it can really do with a script is pad it with samey nonsense; it can’t come up with a clever twist or a good segue.