I’ve tried using an LLM for coding - specifically Copilot for VS Code. Only about 4 out of 10 times does it accurately generate code - which means I spend more time troubleshooting, correcting, and validating what it generates than actually writing code.
I feel like it’s not that bad if you use it for small things - single lines instead of blocks of code - like a glorified autocomplete.
Sometimes it’s nice to not use it though because it can feel distracting.
truly who could have predicted that a glorified autocomplete program is best at performing autocompletion
seriously the world needs to stop calling it “AI”, it IS just autocomplete!
I find it most useful as a means of getting answers for stuff that has poor documentation. A couple weeks ago ChatGPT gave me an answer whose keyword had no matches on Google at all. No idea where it took that from (probably some private codebase), but it worked.
I’m glad you had some independent way to verify that it was correct. Because I’ve asked it stuff Google doesn’t know, and it just invents plausible but wrong answers.
Apparently Claude 3.7 Sonnet is the best one for coding
I use it to construct regexes which, for my use cases, can get quite complicated. It’s pretty good at doing that.
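As a hypothetical illustration (not a pattern from the comment above), this is the sort of regex that's fiddly to write by hand but easy to check once generated - here matching ISO-8601-style dates with basic month/day range checks:

```shell
# Hypothetical example of a "quite complicated" regex:
# matches YYYY-MM-DD with months 01-12 and days 01-31.
pattern='^[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])$'

echo "2024-02-29" | grep -E "$pattern"   # prints the line: it matches
echo "2024-13-01" | grep -E "$pattern"   # prints nothing: month 13 is rejected
```

The nice part is that a pattern like this is easy to validate against a handful of known-good and known-bad inputs, which sidesteps the hallucination problem mentioned upthread.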
I like using GPT to generate PowerShell scripts - surprisingly, it’s pretty good at that. It’s a small task, so it’s unlikely to go off into the deep end.
Like all tools, it is good for some things and not others.
“Make me an OS to replace Windows” is going to fail. “Tell me the terminal command to rename a file” will succeed.
It’s up to the user to apply the tool in a way that is useful. A person simply saying ‘My hammer is terrible at making screw holes’ doesn’t mean the hammer is a bad tool - it tells you the user is an idiot.
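For the record, the small-task case above really is this simple on a Unix-like system (file names here are made up for illustration):

```shell
# Rename a file with mv: the kind of well-scoped question an LLM answers reliably.
touch old_name.txt             # create a throwaway file so the rename has something to act on
mv old_name.txt new_name.txt   # the rename itself
ls new_name.txt                # prints the new name if the rename worked
```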