• RickyRigatoni@lemmy.ml
    link
    fedilink
    arrow-up
    25
    arrow-down
    11
    ·
    1 year ago

    People genuinely think GPT is some sort of god machine pulling true and factual information out of the aether, when it’s literally just fancy phone keyboard text prediction.

    • lightstream@lemmy.ml
      link
      fedilink
      arrow-up
      22
      arrow-down
      10
      ·
      1 year ago

      just fancy phone keyboard text prediction.

      …as if saying that somehow makes what chatGPT does trivial.

      This response, which I wouldn’t expect from anyone with a true understanding of neural nets and machine learning, reminds me of the attempt in the 70s to make a computer control a robot arm to catch a ball. How hard could it be, given that computers at that time were already able to solve staggeringly complex equations? The answer was, of course, “fucking hard”.

      You’re never going to get coherent text from autocomplete, nor can it understand any arbitrary English phrase.

      ChatGPT does both those things. You can pose it any question you like in your own words and it will respond with a meaningful and often accurate response. What it can accomplish is truly remarkable, and I don’t get why anybody but the most boomer luddite feels this need to rubbish it.

      • ylai@lemmy.ml
        link
        fedilink
        arrow-up
        6
        ·
        edit-2
        1 year ago

        …as if saying that somehow makes what chatGPT does trivial.

        That is moving the goalposts. @RickyRigatoni is quite correct that the structure of an autoregressive LLM like (Chat)GPT is, well, autoregressive, i.e. it is trained to predict the next word. It is not a statement about triviality until you shift the goalposts.

        What genuinely got lost in the conversation is that the loss function of an LLM is not truthfulness. The loss function is, for the most part and as you noted below, “coherence”: whether the output could have been a plausible completion of the text. Only with RLHF is there some weak guidance towards truthfulness, and it is far weaker than the training signal for pure plausibility.
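
        A toy sketch of that training objective (standard next-token cross-entropy, not OpenAI’s actual training code) makes the point concrete:

        ```python
        # Minimal sketch of the autoregressive objective: the model is scored on how
        # plausible the next token is given the context, never on whether the text is true.
        import torch
        import torch.nn.functional as F

        vocab_size = 10
        logits = torch.randn(3, vocab_size)     # model's scores for the next token at 3 positions
        next_tokens = torch.tensor([4, 7, 1])   # whatever tokens actually came next in the training text

        loss = F.cross_entropy(logits, next_tokens)  # plausibility of the observed continuation
        print(loss.item())                           # nothing in this objective measures factual correctness
        ```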

        You’re never going to get coherent text from autocomplete, nor can it understand any arbitrary English phrase.

        Because those are small models. GPT-3 was already trained on a text volume that would require > 100 years of reading by a human, which is a good size for building the statistical model, but ridiculous as a route to any sign of “intelligence” or “knowing” what is correct.
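
        A quick back-of-the-envelope check of that figure (assuming roughly 300 billion training tokens, the number reported for GPT-3, and a reading speed of about 250 words per minute):

        ```python
        training_tokens = 300e9                # assumed: ~300B tokens, as reported in the GPT-3 paper
        words_per_minute = 250                 # assumed: typical adult reading speed
        years = training_tokens / (words_per_minute * 60 * 24 * 365)
        print(f"{years:,.0f} years of non-stop reading")  # on the order of a couple of thousand years
        ```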

        Also, “coherence” is not the goal of ordinary input autocomplete, which is scored on producing each next word ranked by frequency, not on playing “the long game” of reaching coherence (e.g. committing to a few rare words to keep the text flowing). Though both are autoregressive, the training losses are absolutely not the same.
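
        For contrast with the cross-entropy sketch above, this is roughly what keyboard-style, frequency-ranked autocomplete amounts to (a toy bigram counter, not any vendor's actual implementation):

        ```python
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1          # count how often each word followed the previous one

        def suggest(word, k=3):
            # Most frequent continuations of a single word, like a phone keyboard's suggestion bar.
            return [w for w, _ in following[word].most_common(k)]

        print(suggest("the"))                  # e.g. ['cat', 'mat', 'fish'] -- no long-range objective at all
        ```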

        And if you had not veered off-topic from text generation with your 1970s reference, you might know that the Turing test was demonstrably passable even without neural networks back then, let alone plausible text generation:

        https://en.wikipedia.org/wiki/PARRY

        • hglman@lemmy.ml
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          1 year ago

          The original comment is dismissive and clearly meant to trivialize the capabilities of LLMs. You’re the one being dishonest in your response.

          Your whole post, and a large class of arguments about the capacity of these systems, rests on the idea that because it is designed to do one thing, it therefore cannot be more than that. That is not a valid conclusion; emergent behavior exists. Is that the case here? Maybe. Does displaying emergent behavior mean LLMs are alive or something? No.

          • ylai@lemmy.ml
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            1 year ago

            The original comment is dismissive and clearly meant to trivialize the capabilities of LLMs.

            The trivializing is clearly your personal interpretation. In my response, I was even careful to separate the argument about autoregressive structure from the one about training for plausibility versus truthfulness.

            You’re the one being dishonest in your response. Your whole post, and a large class of arguments about the capacity of these systems, rests on the idea that because it is designed to do one thing

            My “whole post” is evidently not all about capacity. I had five paragraphs, and only a single one discussed model capacity, versus two, for instance, about the loss functions. So who is being “dishonest” here?

            […] emergent behavior exists. Is that the case here? Maybe.

            So you have zero proof, but you still happily conjecture that “emergent behavior” exists, without caring to explain how you would even demonstrate it. How unsurprising.

            “Emergent behavior” is a worthless claim when the company that trains the model is secretive about even which training samples were used. Moreover, research has shown that OpenAI nowadays basically overtrains directly on books (notably copyrighted ones, which explains the secrecy) to make its LLM sound “smart.”

            https://www.theregister.com/2023/05/03/openai_chatgpt_copyright/

            • hglman@lemmy.ml
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              1
              ·
              1 year ago

              The existence of emergent behavior is beside the point; a judgment based on your views about how it’s made will be flawed. That is not a basis for scientific analysis; only evidence and observation are.

    • bomberesque1@lemm.ee
      link
      fedilink
      arrow-up
      5
      ·
      1 year ago

      I just asked it (3.5) to list countries by what side of the road they drive on and by population

      It got Bangladesh, India and Indonesia wrong and put Pakistan on both lists

      I do think it could be the future of search, but it obviously has a way to go with regard to error checking if it wants to be

      • keepthepace@slrpnk.net
        link
        fedilink
        arrow-up
        4
        ·
        1 year ago

        GPT-4 is usually much, much better.

        Also, you have to keep in mind that asking it this way relies on the information stored inside its neural net, which is really not optimal. For many things, it is better to try to get it to generate a program that does the task rather than to extract information from it.
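
        For instance, for the driving-side question above, here is a sketch of the kind of program one might ask GPT-4 to generate instead of asking it to recall the facts (the Wikipedia page is real; the table index and its columns are assumptions that would need checking against the page):

        ```python
        import pandas as pd

        def driving_side_table():
            url = "https://en.wikipedia.org/wiki/Left-_and_right-hand_traffic"
            tables = pd.read_html(url)   # parse every HTML table on the page
            return tables[2]             # assumed index of the per-country table; verify against the page

        if __name__ == "__main__":
            print(driving_side_table().head())
        ```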

        Unfortunately they have removed its ability to access web pages for now, but back when it could, asking it to check on Wikipedia which side of the road each country drives on would have worked much better.

      • hglman@lemmy.ml
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 year ago

        It’s much more impressive as a tool when you ask it to synthesize new text rather than answer factual questions. It can produce a lot of text, either prose or code, that has a useful shape to edit into a final result. It can shift a task from a generative problem to an editorial one.

  • spleenfiesta@feddit.de
    link
    fedilink
    Deutsch
    arrow-up
    13
    ·
    1 year ago

    A few months ago I was in a conversation with a couple of buddies of mine, and they talked about how ChatGPT was going to end so many jobs, including software engineering, which is my field. It was astonishing hearing people outside of my field tell me so confidently that ChatGPT was going to completely replace software engineers. I played around with it, and it can get some things right, but it can also confidently spit out incorrect answers. I think it would be good as a tool for an engineer, but not as a complete replacement.

    • keepthepace@slrpnk.net
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      1 year ago

      As a software engineer, though, I would not recommend that anyone learn webdev right now.

      I do know a bit of webdev but really don’t enjoy it. I have used GPT-4 to produce in one hour what would take me 2 days to make. It is like having a very motivated and fast-typing intern.

      It won’t end our jobs right now in its current state, but I think it would be irresponsible not to be at least a bit worried about the next 2-3 years.

      Actually personally I would be very surprised if we don’t have more people writing coding prompts than actual code by the end of 2025.

  • keepthepace@slrpnk.net
    link
    fedilink
    arrow-up
    12
    arrow-down
    1
    ·
    edit-2
    1 year ago

    GPT-4 generates real, working code in many cases, including non-trivial ones.

    It still requires an engineer to proofread and just generally to prompt the system correctly.

    Yet this is like having Magneto in the real world, but people thinking it is some kind of trick and still going through demolition firms… who then hire Magneto.

    But as an engineer in the field, I won’t complain.

    • Nalivai@discuss.tchncs.de
      link
      fedilink
      arrow-up
      2
      arrow-down
      2
      ·
      1 year ago

      It hallucinates as much as the previous version, but now it does it even more convincingly. Your Magneto secretly hires five drunken guys with a barrel of dynamite, tells them different addresses, and instructs them to go nuts. They’re drunk af, so most of the time they do some random shit, sometimes something resembling what you want them to do, and then your Magneto turns to you and tells you the job is done, but you trust that he is a real superhero so you don’t double-check. He does the same even if you ask him to move your car or take care of your dog.

  • Hanabie@sh.itjust.works
    link
    fedilink
    arrow-up
    6
    ·
    1 year ago

    I’d agree with this, had AI not already begun to take over jobs. But it has, and developments are in motion in other fields.

    Thankfully, there’s also a movement towards four-day workweeks and talk of UBI in places, and some jobs won’t just disappear but will be transformed.

    • pjhenry1216@kbin.social
      link
      fedilink
      arrow-up
      9
      arrow-down
      5
      ·
      1 year ago

      It’s not replacing jobs well. It’s being exploited because the subpar work is passable. As long as it’s not in a monopoly industry, real humans will always outdo the cheap knockoff services.

      ChatGPT is not AI. The term “AI” has been bastardized and is being used incorrectly. If someone is selling a service and uses the term “AI”, do not trust them.

      AI doesn’t exist.

      LLMs are not the same thing as AI.

      LLMs cannot create anything new.

      They are also confidently incorrect all the time because they hold no concept or context of the situation. It’s just predicting the words that are most likely to follow the prompt you give it, based on all the combinations of words it “knows”. The issue here is that it won’t know whether it’s answering incorrectly or not. It’ll sound confident regardless.
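
      A toy illustration of that last point (hypothetical numbers, just to show the mechanism): whether the model’s probability distribution over the next word is sharply peaked or nearly flat, decoding still emits a single top word, and the resulting sentence reads equally assertive either way.

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - np.max(x))
          return e / e.sum()

      vocab = ["Paris", "Lyon", "Berlin", "Madrid"]
      confident = softmax(np.array([9.0, 2.0, 1.0, 0.5]))   # sharply peaked distribution
      guessing = softmax(np.array([1.1, 1.0, 0.9, 0.8]))    # nearly flat distribution

      for name, p in [("confident", confident), ("guessing", guessing)]:
          pick = vocab[int(np.argmax(p))]
          print(f"{name}: picks {pick!r} with p={p.max():.2f}")
      # Both cases emit one definite word; nothing in the output signals the underlying uncertainty.
      ```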

      AI will be exploited. Just as the cloud was exploited when companies thought it meant they didn’t need IT staff anymore. Admins are still needed.

      • Hanabie@sh.itjust.works
        link
        fedilink
        arrow-up
        7
        ·
        edit-2
        1 year ago

        Let’s skip over all the linguistic quirks here really quick and focus on the heart of the matter.

        AI is good enough at certain things to change whole job sectors. Whether that’s good or not is not something I can even discuss without making assumptions based on insufficient data. What it realistically means, though, is that certain jobs are being transformed, while others are becoming superfluous. How we as a society deal with this is one of the challenges of the coming years.

        AI has been making huge strides in recent years, months, even weeks. Heck, you can’t spend a week in the woods without missing some big news. The challenge is to adapt to the new tools without making unequal wealth distribution even worse than it already is.

        That doesn’t change the fact that certain jobs will change, like translation turning into more of an editing job, or coding into design work, and some older folks who find it difficult to adapt will go under.

        It’s happened before multiple times, for example with the industrial revolution.

      • SirGolan@lemmy.sdf.org
        link
        fedilink
        arrow-up
        3
        arrow-down
        1
        ·
        1 year ago

        I think you’re conflating AGI (artificial general intelligence) with AI here amongst other misconceptions.

        Yes, transformer LLMs are trained to predict the next word, but larger ones (like GPT-3) exhibit emergent abilities that nobody really predicted.

        I’m curious what you think something new might be. I recently had GPT-4 write a whole bunch of code to fit into existing systems I created. I guarantee no systems like that were in its training data, because it’s a system that deals with GPT-4 and LLM functionality that didn’t exist when the training data was collected. One of my first experiments with GPT-3 was an app that could make video game pitches. I can guarantee some of the weird things my team made with that were new ideas.

        Does it really understand anything? Who knows. Does it matter if it can act like it does? See also the Chinese room thought experiment.

        • pjhenry1216@kbin.social
          link
          fedilink
          arrow-up
          4
          arrow-down
          1
          ·
          1 year ago

          Coding is a poor example. Code is a language. It’s simply translating from one language (pseudocode) to another (the programming language you requested). As long as you give it clear instructions, it’s not “solving” anything. It’s like saying Google Translate created something new because you asked it to translate a sentence no one has asked it to translate before.
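
          In the spirit of that argument, a small made-up example of how a clear instruction maps almost mechanically onto code:

          ```python
          # Instruction given in the prompt (pseudocode):
          #   for each order total: if it is over 100, apply a 10% discount; return the new totals
          def apply_discounts(orders):
              return [total * 0.9 if total > 100 else total for total in orders]

          print(apply_discounts([50, 150, 200]))  # [50, 135.0, 180.0]
          ```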

          Honestly, I don’t think there are significant “emergent” capabilities beyond it simply performing better than they expected.

          • SirGolan@lemmy.sdf.org
            link
            fedilink
            arrow-up
            3
            ·
            edit-2
            1 year ago

            I suppose that’s my bad for the article I linked, which doesn’t really go into specifics on what the capabilities are. One of the big ones is tool use. You can give it a task and a list of tools, and it can use the tools to complete the task. This capability alone makes a huge number of automations possible that weren’t possible before LLMs.
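
            Roughly, the pattern looks like this (a self-contained sketch with hypothetical helper names, not any particular vendor’s API): the model returns either a tool call as JSON or a final answer, and a small loop executes the tools and feeds the results back.

            ```python
            import json

            def search_web(query: str) -> str:           # example tool
                return f"(search results for {query!r})"

            def send_email(to: str, body: str) -> str:    # example tool
                return f"email sent to {to}"

            TOOLS = {"search_web": search_web, "send_email": send_email}

            def call_llm(task: str, history: list) -> str:
                # Placeholder for a real LLM call; hard-coded here so the sketch runs standalone.
                if not history:
                    return json.dumps({"tool": "search_web", "args": {"query": task}})
                return json.dumps({"final": "done, see the gathered results"})

            def run_agent(task: str) -> str:
                history = []
                while True:
                    decision = json.loads(call_llm(task, history))
                    if "final" in decision:
                        return decision["final"]
                    result = TOOLS[decision["tool"]](**decision["args"])  # execute the chosen tool
                    history.append((decision, result))                    # feed the result back next turn

            print(run_agent("find recent papers on tool use in LLMs"))
            ```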

            I’m getting the impression your definition of “new” is “something only a human could come up with.” Please correct me if I’m wrong here. People who create completely novel things are few and far between; they’re typically the ones remembered for centuries. Though honestly, even then they’re usually standing on the shoulders of those before them, just like AI does. Look at AlphaFold, an AI that is rapidly accelerating disease research and solving many other hard problems.

            Anyway, if I can prompt the AI to write code for me and even if you don’t count that as something new, it’s a force multiplier on my job, which is a huge benefit. As Hanabie said, there’s going to be a lot of changes in jobs due to AI and those who don’t adapt are going to be left behind. I’m commenting here in hopes of helping people see that and not get left behind.

            • pjhenry1216@kbin.social
              link
              fedilink
              arrow-up
              2
              ·
              1 year ago

              You realize you essentially just argued my point though. That’s basically my analogy with the cloud. It’s not replacing anything. I could have been clearer I suppose, but the crux of it is that it’s not replacement.

              • SirGolan@lemmy.sdf.org
                link
                fedilink
                arrow-up
                2
                ·
                1 year ago

                Oh hmm. Are you just saying that it can’t fully replace people at jobs? Because I generally do agree with that at least with current models and methods of using them. It’s getting close though, and I think within a year or two we will be there for at least a bunch of professions. But on the other hand, if it makes workers in some jobs 2x more productive then the company only needs to keep half of those workers to maintain the same output. I think this is where it’s going to start / has already started.

  • Fredselfish @lemmy.ml
    link
    fedilink
    arrow-up
    5
    arrow-down
    3
    ·
    1 year ago

    Yep, the longer you talk to it, the dumber it becomes. So it’s definitely not replacing any jobs any time soon.

    • Kuvwert@lemm.ee
      link
      fedilink
      arrow-up
      6
      ·
      1 year ago

      It’s a technology that has been publicly available for less than a year. It’s in the iPhone 1 stage. It’ll get better, and it’ll replace most low-skill jobs.

      • OhNoMoreLemmy@lemmy.ml
        link
        fedilink
        arrow-up
        11
        ·
        edit-2
        1 year ago

        Ironically, most low-skilled jobs are things that aren’t going to be replaced for a long time.

        Jobs like shelf stacker, bag checker, or sweeping up on a building site are super fiddly and involve a mix of interacting with people and dealing with an environment that’s being constantly changed.

        On the other hand, anything that involves writing and doesn’t need to be accurate or compelling is already at risk. BuzzFeed should be very afraid.

        • pjhenry1216@kbin.social
          link
          fedilink
          arrow-up
          7
          ·
          1 year ago

          BuzzFeed writers should be afraid. BuzzFeed owners are fine.

          The interesting part is that many executives are more easily replaced by AI than many lower-level jobs. CEOs should be more afraid that shareholders will want one of those instead. Right now, liability questions are probably the only thing protecting them. Shareholders never want to blame themselves.

          • OhNoMoreLemmy@lemmy.ml
            link
            fedilink
            arrow-up
            4
            ·
            edit-2
            1 year ago

            We’ll see. ChatGPT is too much of a magic 8-ball to be making decisions.

            There’s nothing to say it’s going to be accurate, and if you don’t like the answer you can just repeat the question with a new phrasing until you get an answer you like.

            Sometimes it seems like CEOs aren’t much better, but in principle they could be.

          • SirGolan@lemmy.sdf.org
            link
            fedilink
            arrow-up
            1
            ·
            1 year ago

            Gotta say I agree on this one. A company run by an AI CEO might do really well in some industries. There’s actually a proposed replacement for the Turing test where the AI is given a $100k investment, and if it can start a company and turn that into $1,000,000, it passes the test. I don’t think that’s a great Turing test, but I do think we are pretty close to being able to do that for some types of companies (like dropshipping on Amazon).