I’m a dev. I’ve been one for a while. My boss does a lot of technology watch. He brings in a lot of cool ideas and information. He’s down to earth. Cool guy. I like him, but he’s now convinced that AI LLMs are about to swallow the world, and the pressure to inject this stuff everywhere in our org is driving me nuts.

I enjoy every part of making software, from talking with clients and future users to coding to deployment. I am NOT excited at the prospect of transitioning from designing an architecture and coding it to ChatGPT prompting. This sort of black box magic irks me to no end. Nobody understands it! I don’t want to read yet another article about how an AI enthusiast is baffled at how good an LLM is at coding. Why are they baffled? They have “AI” twelve times in their bio! If they don’t understand it, who does?!

I’ve based twenty years of my career on being attentive, inquisitive, creative and thorough. By now, an in-depth understanding of my tools and, more importantly, of my work is basically a compulsion.

Maybe I’m just feeling threatened, or turning into “old man yells at cloud”. If you ask me, I’m mostly worried about my field becoming uninteresting. Anyways, that was the rant. TGIF, tomorrow I touch grass.

  • MagicShel@programming.dev · 18 points · 1 year ago

    Having an AI help you code is like having a junior developer who is blazing fast, enthusiastic and listens well. However, it doesn’t think about what it writes. It does no testing and it doesn’t understand the big picture at all. For very simple tasks it gets the job done very fast, but for complex tasks, no matter how many times you explain them, it is never going to get it. I don’t think there’s any worry about AI replacing developers any time in the foreseeable future.

    • shadowolf@lemmy.ca · 3 points · 1 year ago

      In fairness… this is more a limitation of the current technology. You’re looking at GPT-4 and concluding it’s not an expert. But what about GPT-5 or 6… or some of the newer ideas, like Microsoft’s plan for a 1-million-token model using a dilated attention mechanism? The point being, we are still on the ground floor, and these models have emergent functionality.

    • hallettj@beehaw.org · 2 points · 1 year ago

      Lol this is what I was thinking too. The junior dev is also a black box. AI automation seems more like delegating than programming to me.

    • mkhoury@lemmy.ca · 2 points · 1 year ago

      But you can work with it to write all the tests/acceptance criteria and then have the AI run the code against the tests. We spent a lot of time developing processes for humans writing code, we need to continue integrating the machines into these processes. It might not do 100% of the work you’re currently doing, but it could do maybe 50% reliably. That’s still pretty disruptive!
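
      As a minimal sketch of that loop (the slugify function and its acceptance tests below are hypothetical, just to show the shape): the human owns the tests, the AI-generated implementation sits behind them, and the tests decide whether it ships.

      // Human-authored acceptance criteria: these stay fixed no matter who (or what) writes the implementation.
      import * as assert from "node:assert";

      // Hypothetical AI-generated implementation under review.
      function slugify(title: string): string {
          return title
              .toLowerCase()
              .trim()
              .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumeric characters into a single dash
              .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
      }

      // The acceptance tests are the contract; regenerate the implementation until they pass.
      assert.strictEqual(slugify("Hello, World!"), "hello-world");
      assert.strictEqual(slugify("  Multiple   Spaces  "), "multiple-spaces");
      assert.strictEqual(slugify("Already-slugged"), "already-slugged");
      console.log("All acceptance tests passed");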

    • Naate@beehaw.org · 1 point · 1 year ago

      This is a pretty apt analogy, I think.

      We’ve been using Copilot at work, and it’s really surprised me with some slick suggestions that “mostly work”. But I don’t think it could have written anything beyond the boilerplate parts of what my team has done.

      (I also spend way too much time watching Copilot and Intellisense fight, and it pisses me off to no end.)

  • argv_minus_one@beehaw.org · 18 points · 1 year ago

    This sort of black box magic irks me to no end. Nobody understands it!

    And that’s why it’s not going to swallow the world. It’s a toy, not a tool.

    Tools behave consistently and predictably. You know what you call a tool that doesn’t behave consistently and predictably? “Broken”, that’s what.

    Maybe I’m just feeling threatened, or turning into “old man yells at cloud”.

    I was “old man yells at cloud” about cryptocurrency for years, and now it’s dead, exactly as I predicted.

    Sometimes, the old man is right.

  • winterstillness@beehaw.org · 10 points · 1 year ago

    This is based on someone else’s reply I read once. Developers have been trying to put themselves out of their own jobs since the beginning. Automating/scripting things, creating tools, IDEs, etc.

    Development is so much more than generating/writing boilerplate code. Code plays such a small role as opposed to figuring out how to solve a problem or even figuring out what the problem is in the first place.

    I spent several days figuring out why an HTTP POST in prod wasn’t working when an identical one was working locally. Turns out there was an application server change that decreased the max request param size, and the Dockerfile was configured so that the server’s patch version (semver) was updated automatically, which is how the change slipped in. This was a super interesting challenge (felt like Sherlock Holmes with this one).

    Try having ChatGPT/etc. figure that one out.

    All of this hubbub might produce some kind of toolset that could augment what we already do (i.e. IDE). But replacing people entirely? I don’t think so.

    • irongamer@beehaw.org · 2 points · 1 year ago (edited)

      Developers have been trying to put themselves out of their own jobs since the beginning. Automating/scripting things, creating tools, IDEs, etc.

      As a developer I always thought this was sort of the point. If the system I build doesn’t require less maintenance, make life easier for the user(s), or need fewer humans to run it, I’m doing something wrong. It always feels a little like undermining your own position, but when things do break you’re also the person most likely to know the fix and apply it quickly.

  • Manticore@beehaw.org · 8 points · 1 year ago

    AI can code assist; it’s quite helpful for that. Predictive text, learning a less familiar language, converting pseudocode, etc.

    But it couldn’t possibly replace senior developers long-term. It just looks new and exciting, especially to people who don’t truly understand how it works. We still need to have human developers capable of writing their own new code.

    1. AI is entirely derivative; it’s just copying the human devs of yester-year. If AI does the majority of coding, then it becomes incapable of learning, thus necessitating human coders anyway. It is also only going to generate solutions to broad-strokes problems that it already has in its dataset, or convert pseudocode into functional code (which still requires a dev who knows enough to write the pseudocode).

    2. It also currently has no way of validating what it writes. It’s trying to replicate what our writing looks like contextually; it doesn’t comprehend it. If it ever starts training on itself as it ages, it will stagnate and require human review, which means needing humans who understand code. And that’s not counting the poor practices it will already have picked up, because so many devs are inconsistent about things like writing comments, documentation, or unit tests. AI doesn’t have its own bias, but it inevitably learns to imitate ours.

    3. And what about bug-testing? When the AI writes something that breaks, who do you ask for help? The AI doesn’t comprehend the context of the code it’s reading if you paste it back; it doesn’t remember writing it. You need people who understand how the code works to be able to recognise why it might be breaking.

    AI devs are the fast food of coding. It will never be as good quality as something from an experienced professional. But if you’re an awful cook, it still makes it fast and easy to get a sad, flat cheeseburger.

    I’ve worked with devs who are the equivalent of line cooks and are also producing sad, flat cheeseburgers: code of poor quality that still sees production because the client doesn’t know any better. IMO, those are the only devs that need to be concerned, because those are the ones that are easy to replace.

    If AI coding causes any problems within the job market for devs, it will be that it replaces graduate/junior developers so well that fewer devs get the mentoring or experience to become seniors, and the demand for seniors will ramp up significantly. It seems more likely that developers will split into two separate specialisations than that our single track will be replaced.

  • Admiral Patrick@dubvee.org · 7 points · 1 year ago

    Nope, I fully agree with you.

    These “AI” tools have no more understanding of what they crap out than a toddler who has learned to swear (someone else made that comparison, I’m just borrowing it).

    Have you ever done a code review with someone, asked about a specific part, and had them say “I dunno; I copied it from GPT and it just seems to work”? I have, and it’s absolutely infuriating. Sure, they could have copied the same thing from Stack Overflow, and I’d treat it the same way. But somehow they expect copy/paste from an “AI” to get a pass?

    Even without dipping into the “replacement theory” of it, these kinds of tools just allow people who don’t know what they’re doing to pretend like they do while creating a mountain of technical debt. Even experienced devs who use it are starting down a slippery slope, IMO.

    • luciole (he/him)@beehaw.org (OP) · 4 points · 1 year ago

      The replacement theory brings up this weird conundrum where the LLMs need to consume human work to train themselves, the very work they seek to replace. So once the plan succeeds, how does it progress further?

    • gus@beehaw.org · 1 point · 1 year ago

      Yeah, that’s one of the major issues I have with it. It gives people a way to take their responsibilities, delegate them to an AI, and wash their hands of the inevitable subpar result. And not just in programming: I think over time we’re going to see more and more metrics replaced with AI scores, and businesses escaping liability by blaming it on those AI decisions.

      Back in the realm of programming, I’m seeing people “save time” more and more often by using GPT to do the first 90%, and then just not doing the last 90% that GPT couldn’t do.

      • argv_minus_one@beehaw.org · 3 points · 1 year ago (edited)

        Oh God, I can see it now. Someone makes an AI for filtering job applications, it’s great, all the employers use it. Before a human ever sees a resume, the AI decides whether to silently discard it. For reasons known to literally no one, the AI doesn’t like your name and always discards your resume, no matter how many times you change it. Everybody uses the same AI, so you never get a job again. You end up on the street, penniless, through no fault of your own.

          • argv_minus_one@beehaw.org · 2 points · 1 year ago

            Yes, and it’ll eventually be worked out to the point that it’s mostly accurate, but there will always be edge cases like the one I described above; they’ll just be rare enough that nobody cares or even believes that it’s happening.

            Now, humans reviewing job applications are also subject to biases and will unfairly reject applicants, but that only shuts you out of one company. AIs, on the other hand, are exact copies of each other, so an AI that’s biased against you will shut you out of all companies.

            And, again, no one will care that this system has randomly decided to ruin your life.

  • ImportedReality@beehaw.org · 6 points · 1 year ago

    I see LLM AIs acting more as assistants than as the primary contributors in software projects.

    For example, I’m starting a very ambitious personal project and wanted to practice writing a proper project plan and requirements document.

    I had no clue where to start, so I pulled up ChatGPT, and after some prompting I now have workable rough drafts that just need some fine details filled in, so I can focus on the actual programming.

    • fuck_u_spez@lemmy.fmhy.ml · 2 points · 1 year ago

      Yeah, I agree for most not-so-interesting code (which is surprisingly much of it, if I think about it: average frontend/backend apps, client-side boilerplate-heavy code like React UIs…).

      But coding a nice, smart architecture, something novel/innovative (which is where the art of software engineering really lies, IMHO)… well, I’m not even thinking about using AI for that anymore (for now at least). It just confuses me, writes dumb code, and going back and forth with it to get better code is cumbersome, so I just code it myself (being a fast typist really helps, I think…). I do use it often as a kind of StackOverflow replacement, but letting the AI code…? Nah.

      I think it’ll still take a few years before I really can (or have to) seriously think about using AI productively in these cases (where it may even teach me a few things about language features I didn’t know yet)…

  • vampatori · 5 points · 1 year ago

    The issues with LLMs for coding are numerous - they don’t produce good results in my experience, and there are plenty of articles on their flaws.

    But… they do highlight something very important that I think we as developers have been guilty of for decades… a large chunk of what we do is busy work: the model definitions, the API to wrap the model, the endpoint to expose the model, the client to connect to the endpoint, the UI that links to the client, the server-side validation, the client-side validation, etc. On and on… so much of it is just busy work. No wonder LLMs can offer up solutions to these things so easily - we’ve all been re-inventing the wheel over and over and over again.

    Busy work is the worst and it played a big part in why I took a decade-long break from professional software development. But now I’m back running my own business and I’m spending significant time reducing busy work - for profit but also for my own personal enjoyment of doing the work.

    I have two primary high-level goals:

    1. Maximise reuse - As much as possible should be re-usable both within and between projects.
    2. Minimise definition - I should only use the minimum definition possible to provide the desired solution.

    When you look at projects with these in mind, you realise that so many “fundamentals” of software development are terrible and inherently lead to busy work.

    I’ll give a simple example… let’s say I have the following definition for a model of a simple blog:

    User:
      id: int generate primary-key
      name: string
    
    Post:
      id: int generate primary-key
      user_id: int foreign-key(User.id)
      title: string
      body: string
    

    Seems fairly straightforward, we’ve all done this before - it could be in SQL, Prisma, etc. But there are some fundamental flaws right here:

    1. We’ve tightly coupled Post to User through the user_id field. That means Post is instantly far less reusable.
    2. We’ve forced an id scheme that might not be appropriate for different solutions - for example a blogging site with millions of bloggers with a distributed database backend may prefer bigint or even some form of UUID.
    3. This isn’t true for everything, but is for things like SQL, Prisma, etc. - we’ve defined the model in a data-definition language that doesn’t support many reusability features like importing, extending, mixins, overriding, etc.
    4. We’re going to have to define this model again in multiple places… our API that wraps the database, any clients that consume that API, any endpoints that serve that API up, in the UI, the validation, and so on.

    Now this is just a really simple, almost superficial example - but even then it highlights these problems.

    So I’m working on a “pattern” to help solve these kinds of problems, but with a reference implementation in TypeScript. Let’s look at the same example above in my reference implementation:

    export const user = new Entity({
        name: "User",
        fields: [
            new NameField(),
        ],
    });
    
    export const post = new Entity({
        name: "Post",
        fields: [
            new NameField("title", { maxLength: 100 }),
            new TextField("body"),
        ],
    });
    
    export const userPosts = new ContentCreator({
        name: "UserPosts",
        author: user,
        content: post,
    });
    
    export const blogSchema = new Schema({
        relationships: [
            userPosts,
        ],
    });
    

    So there are several things to note:

    1. Entities are defined in isolation without coupling to each other.
    2. We have sane defaults, no need to specify an id field for each entity (though you can).
    3. You can’t see it here because of the above, but there are abstract id field definitions: IDField and AutoIDField (a simplified sketch of these classes follows this list). It’s in the specific implementation of this schema that you specify the type of ID you want to use, e.g. IntField, BigIntField, UUIDField, etc.
    4. Relationships are defined separately and used to link together entities.
    5. Relationships can bestow meaning - the ContentCreator relationship just extends OneToMany, but adds meta-data from which we can infer things in our UI, authorization, etc.
    6. Fields can be extended to provide meaning and to abstract implementations - for example the NameField extends TextField, but adds meta-data so we know it’s the name of this entity, and that it’s unique, so we can therefore have UI that uses that for links to this entity, or use it for a slug, etc.
    7. Everything is a separately exported variable which can be imported into any project, extended, overridden, mixed in, etc.
    8. When defining the relationship, sane defaults are used so we don’t need to explicitly define the entity fields we’re using to make the link, though we can if we want.
    9. We don’t need to explicitly add both our entities and relationships to our schema (though we can) as we can infer the entities from the relationships.
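
    To give a rough idea of the shape behind notes 3, 5 and 6, here is a deliberately simplified sketch of what the field/entity classes could look like (illustrative only - the actual reference implementation carries far more meta-data and behaviour):

    // Simplified, illustrative sketch only - not the actual reference implementation.
    export abstract class Field {
        constructor(public name: string, public options: Record<string, unknown> = {}) {}
    }

    export class TextField extends Field {}

    // NameField extends TextField but adds meta-data marking it as the entity's unique,
    // human-readable name (usable for links, slugs, etc.).
    export class NameField extends TextField {
        constructor(name = "name", options: Record<string, unknown> = {}) {
            super(name, { ...options, isName: true, unique: true });
        }
    }

    // Abstract ID definitions; the concrete type (IntField, BigIntField, UUIDField, ...)
    // is chosen by the specific implementation of the schema, not baked into the model.
    export abstract class IDField extends Field {}

    export class Entity {
        constructor(public config: { name: string; fields: Field[] }) {}
    }

    // Relationships are defined separately and link entities together.
    export class OneToMany {
        constructor(public config: { name: string; from: Entity; to: Entity }) {}
    }

    // ContentCreator just extends OneToMany, mapping author/content onto the generic
    // relationship and bestowing extra meaning we can use in UI, authorization, etc.
    export class ContentCreator extends OneToMany {
        constructor(config: { name: string; author: Entity; content: Entity }) {
            super({ name: config.name, from: config.author, to: config.content });
        }
    }

    export class Schema {
        constructor(public config: { relationships: OneToMany[] }) {}
    }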

    There is another layer beyond this, which is where you define an Application, which then lets you specify code generation components that do all the busy work for you, settings like the ID scheme you want to use, etc.

    It’s early days, I’m still refining things, and there is a ton of work yet to do - but I am now using it in anger on commercial projects and it’s saving me time: generating types/interfaces/classes, database definitions, APIs, endpoints, UI components, etc.

    But it’s less about this specific implementation and more about the core idea - can we maximise reuse and minimise what we need to define for a given solution?

    There’s so many things that come off the back of it - so much config that isn’t reusable (e.g. docker compose files), so many things that can be automatically determined based on data (e.g. database optimisations), so many things that can be abstracted (e.g. deployment/scaling strategies).

    So much busy work needs to be eliminated, allowing us to give LLMs a run for their money!

    • SebKra@feddit.de · 4 points · 1 year ago

      Building abstractions and tools to reduce busy-work has been the goal of computer science since the moment we created assembly. The difficulty lies in finding methods that provide enough value for enough use-cases to outweigh the cost of learning, documenting, and maintaining them. Finding a solution that works for your narrow use-case is easy - every overly eager junior has done it. However, building solutions that truly advance CS takes time, effort, and many, many failures. I don’t mean to discourage you, but always be aware of the cost of your abstraction. Sometimes, the busy work is actually better.

      • vampatori · 1 point · 1 year ago

        I agree! It’s something I’ve been toying with in various guises for over 20 years now, and it’s evolved a lot over that time - but now that I run my own company, I’ve decided to really put effort into implementing something usable that will save us time now and in the future.

        It’s not about advancing CS, far from it - it’s about actually applying the core tenets of CS in practice. Things like re-usability, low coupling/high cohesion, maintainability, well-tested/robust, domain-oriented design, etc. are all really well understood and regarded as “Good”. Yet significant amounts of our development processes, technologies, and languages we use don’t meet those criteria.

        We’ll see! I’m pleased with how it’s going, it’s already saving me time, so hopefully one day it’ll help others do the same.

    • dartos@reddthat.com · 1 point · 1 year ago

      LLMs, for me, have been pretty good for simple things.

      “Write a bash script that runs this command in all directories with foo in the name” “Translate this function to JavaScript” “Translate this shader to glsl”

      You could do any of those things yourself, but it’s not fun and it takes time. I save hours with little prompts like these.

  • chinpokomon@beehaw.org · 4 points · 1 year ago

    An LLM is the best rubber duck. Like a rubber duck, it’s probably not going to “solve” anything for you directly, but it can be a great tool to unlock your potential.

      • chinpokomon@beehaw.org · 1 point · 1 year ago

        If a human can access your public repo and read comments posted on public forums, are they stealing your code? LLMs are just aggregators of a great many resources and they aren’t doing anything more than a biological human can already do. The LLM can do so more efficiently than a biological human, while perhaps being more prone to error as it doesn’t completely understand why something is written the way it is. As such any current AI model is prone to signpost errors, but in my experience it has been very good at organizing the broader solution.

        I can give you two examples. I started out trying to find out how a .NET API call was made. I was trying to implement retry logic for a call, and I got the answer I asked for. I then realized that the AI could do more for me: I asked it to write the routine for me, and it suggested using a library which is well suited for that purpose. I asked it to rewrite the routine without using an external library, and it spit it out. I could have written this completely from scratch - in fact I had already come up with something similar, but I was missing the API call I was initially looking for. That said, the result actually included some parts I would otherwise have had to go back and add, so it saved me a lot of time doing something I already knew how to do.
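
        For illustration, the general shape of such a hand-rolled retry routine looks something like this (a TypeScript sketch of the pattern; the actual code I got back was .NET and isn’t reproduced here):

        // Generic retry helper with exponential backoff - an illustrative sketch only.
        async function withRetry<T>(
            operation: () => Promise<T>,
            maxAttempts = 3,
            baseDelayMs = 200,
        ): Promise<T> {
            let lastError: unknown;
            for (let attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return await operation();
                } catch (err) {
                    lastError = err;
                    if (attempt < maxAttempts) {
                        // Exponential backoff: 200ms, 400ms, 800ms, ...
                        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
                    }
                }
            }
            throw lastError;
        }

        // Usage: retry a flaky HTTP call up to three times.
        // const response = await withRetry(() => fetch("https://example.com/api"), 3);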

        In a second case, I asked it to solve a problem which at its heart was a binary search. To validate that the answer was correct it would need to go one extra step, but to answer the question it wasn’t necessary to actually perform that last validation step. I was looking for the answer 10, but the AI gave me answers in the range of 9-11. It understands the basic concepts, but it still needs a biological human to validate what it generates.
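
        The original problem isn’t worth reproducing, but the pattern it boiled down to - a binary search whose candidate answer still needs one explicit validation step at the end - looks roughly like this in TypeScript:

        // Find the smallest n in [lo, hi] for which check(n) is true, assuming check is monotonic
        // (false ... false, true ... true). Illustrative sketch of the search-plus-validation pattern.
        function findSmallest(lo: number, hi: number, check: (n: number) => boolean): number | null {
            while (lo < hi) {
                const mid = Math.floor((lo + hi) / 2);
                if (check(mid)) {
                    hi = mid;     // mid works, so the answer is mid or something smaller
                } else {
                    lo = mid + 1; // mid fails, so the answer must be larger
                }
            }
            // The extra step that tends to get skipped: validate the candidate instead of trusting the loop.
            return check(lo) ? lo : null;
        }

        // Example: the smallest n with n * n >= 100 is exactly 10 - not 9, not 11.
        console.log(findSmallest(1, 100, (n) => n * n >= 100)); // 10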

        • argv_minus_one@beehaw.org · 1 point · 1 year ago

          I’m talking about asking the AI “where’s the bug in this code” and pasting a snippet of code from my non-public repository.

  • u_tamtam@programming.dev · 3 points · 1 year ago

    To cool down your boss, you can always tell him that he’s putting your company at great legal risk: there’s no reason to think that LLMs are not violating copyright laws and software licenses, and moreover the question might be settled differently in different countries (which matters if you export your code).

  • known_unpleasures@feddit.de · 3 points · 1 year ago

    I know what you mean! I just started in development recently, and the number of my colleagues (the ones I am supposed to learn from) who use ChatGPT for a lot more than they should is super annoying to me. I have a background in natural language processing but decided to go into software development because the programming was always more fun to me.

    Recently some of my colleagues were looking for a package for a specific framework to do a specific task. And they just asked ChatGPT, which is just NOT A SEARCH ENGINE. It came up with something that wasn’t even close to what we needed, but somehow no one looked into it. I did a quick search, read some threads on Reddit and found something a lot better in 5 minutes. Luckily the project manager listened to me, but it was honestly so weird, because I felt like I was somehow the weird one for suggesting we look at what other devs recommend instead of just going with what a language model (that doesn’t even use recent data) suggests.

    • luciole (he/him)@beehaw.org (OP) · 1 point · 1 year ago

      Exactly, the Internet is still a thing! “Oh but it’s cool for code snippets.” Have you heard of our Lord and Savior StackOverflow?

  • vcmj@programming.dev · 3 points · 1 year ago (edited)

    I think the part that annoys me the most is the hype around it, just like blockchain. People who don’t know any better claiming magic.

    We’ve had a few sequence-specific architectures over the years: LSTM, GRU and now Transformers. Each was better than the last at the task of sequence-specific transformations, and at least for the last one the specific task was language translation. We eventually figured out these things have a bit of clairvoyance too: they can make accurate predictions based on past data, or at least accurate enough to bet on, and you can bet traders of various stripes have already made billions off that fact. I’ve even seen a transformer-based weather model. It did OK, but transformers are better at language.

    And that’s all it is! ChatGPT is a Transformer in the predictive stance. It looks at a transcript of a conversation and predicts what a human is most likely to say next. It’s a very complex transformation of historical data. If you give it the exact same transcript, it gives the exact same answer. It is, in a literal, mathematically rigorous sense, entirely incapable of an original thought. Any perceived sentience is a shadow of OpenAI’s army of annotators or the corpus it was trained on, and I have a hard time assigning sentience to tomorrow’s forecast, which may well have used similar technology. It’s just an ultra-fancy search engine index.
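
    To make the determinism point concrete, here’s a toy sketch: if decoding just takes the highest-scoring continuation at every step (greedy decoding, with no sampling temperature stirred in), the same transcript maps to the same output every single time. The bigram table below is obviously a stand-in for billions of learned weights.

    // Toy illustration of "same input, same output": a deterministic next-token picker.
    // The bigram count table stands in for the model's learned weights.
    const bigramCounts: Record<string, Record<string, number>> = {
        the: { cat: 3, dog: 2 },
        cat: { sat: 4, ran: 1 },
        sat: { down: 5 },
    };

    function nextToken(transcript: string[]): string | undefined {
        const last = transcript[transcript.length - 1];
        const candidates = bigramCounts[last] ?? {};
        // Greedy decoding: always pick the highest-scoring continuation.
        // With no randomness injected, the same transcript yields the same token every time.
        return Object.entries(candidates).sort((a, b) => b[1] - a[1])[0]?.[0];
    }

    console.log(nextToken(["the", "cat"])); // "sat" - and it will be "sat" on every run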

    Anyways, that’s my rant done, I guess. Call it a cynical engineer’s opinion. To be clear, I think it’s a fantastic and useful technology, and it WILL change how we interact with machines. It can do fancy things with the combination of “shell” code driving its UI, like multi-step “agents” or running code, and I actually hope OpenAI extends it far into the future, but I sincerely think any form of AGI will be something entirely different to LLMs, or at least they’ll only form a small part of it as an encoder/decoder for its thoughts.

    EDIT: Added some paragraph spacing. Sorry, went into a broader AI rant rather than staying on topic about coding specifically lol

  • Kronusdark@beehaw.org · 2 points · 1 year ago

    I think everyone in the software industry is feeling this right now. It was the same way with NFTs, blockchain, and AR. Even back in the late 2000s everyone wanted social media (crazies). This too shall pass.

    Maybe in 6 months, maybe in 2 years, we will hit a point where the limitations and use cases are understood well enough that it will just be another tool in our belts and not the be-all and end-all.

    My advice: just smile at your boss and say “sure, let’s try it”. Learn something new and look forward to the next one.

  • LuckyCharmsNSoyMilk@beehaw.org · 2 points · 1 year ago

    I’ve recently started using CodeWhisperer, and I’ve found I use the creation aspect much less than auto-complete. That may change in the future, but right now it’s pretty nice to start typing something and have VSCode complete what I was going to type.

  • samn@lemmy.ml · 2 points · 1 year ago

    I agree with you - I think my main issue is that using LLMs to write code is a crutch, and over time the quality of software will decrease because it will be made by a program perfected to generate the most likely next word, rather than understanding what it’s doing. If anything, having a basic understanding of LLMs makes me trust them less.