cross-posted from: https://programming.dev/post/8121843
~n (@nblr@chaos.social) writes:
This is fine…
“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”
[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
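To make the kind of finding concrete (this is my own illustration, not an example taken from the paper): one classic way an assistant-suggested snippet goes wrong is building a SQL query by gluing user input into the string, instead of using a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def add_user_insecure(name: str, email: str) -> None:
    # Insecure: user input is spliced directly into the SQL text,
    # so a crafted value can inject arbitrary SQL.
    conn.execute(f"INSERT INTO users VALUES ('{name}', '{email}')")

def add_user_secure(name: str, email: str) -> None:
    # Safer: placeholders let the driver handle quoting/escaping.
    conn.execute("INSERT INTO users VALUES (?, ?)", (name, email))
```

Both versions "work" on friendly input, which is exactly why the insecure one is easy to accept and rate as fine.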
There are lots of services that facilitate this kind of AI-assisted coding. Copilot is one of them.
Is it really helpful / does it save a lot of time? I’m the world’s #1 LLM hater (I don’t trust it and think it’s lazy), but if it’s a very good tool I might have to come around.
I haven’t been using it much, so I don’t know if I’m a good judge. But I see it as an oversized autosuggestion tool that sometimes feels like an annoying interruption, and sometimes feels like it helped me move faster without breaking my train of thought (there’s a small sketch of what I mean at the end of this comment).
By “it”, I mean I’ve tried several different ways to have an LLM assistant integrated into my dev environment, none of which I was initially satisfied with in terms of workflow. But that’s kinda true for every change I’ve made to my dev environment and workflows. It takes me a while to settle on anything new.
I recommend none in particular, but I do recommend taking the time to at least check them out. They have potential.
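To give a flavor of what I mean by oversized autosuggestion (purely an illustration, not output captured from any particular tool): you type a signature and a docstring, and the assistant offers a body that you accept, tweak, or dismiss.

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with hyphens."""
    # The kind of body an assistant typically proposes from just the
    # signature and the docstring above.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

print(slugify("Do Users Write More Insecure Code?"))  # -> do-users-write-more-insecure-code
```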