- cross-posted to:
- programmer_humor@programming.dev
idk if it's serious or not, but it's what I saw in the Indeed newsletter today.
programming was never about how fast you could type. the person who wrote this knows nothing about the job.
And yet somehow the tech blogs and such always scream about developer productivity. Go faster. Go faster.
From what I’ve seen over the years, only mids care about finishing fast.
The guy who wrote this is an idiot, but he became so in a world where “LoC” is a metric – one that Goodhart would love, but alas.
This is honestly the road to hell and the "good intentions", all in one.
the description is gold, everyone can find something wrong with it.
natural language is the new programming language
lol. Lmao.
Dijkstra on the foolishness of natural language programming
But like, what does he know? He wasn’t an AI-native vibe orchestrator.
All he made was some dinky algorithm. Google Bard could do that in three minutes flat smh.
Thx for sharing this. Really hope people read it.
See, Dijkstra was talking about people trying to create programs in natural language. He didn’t say not to use your natural language to hire someone else to make a formal program. This is people using natural language to hire an LLM to make a formal program, and asking LLMs is like asking people, so it’s Dijkstra-approved.

“English is the new programming language” would be more punchy
Amazed they didn’t ask for 5-10 years of experience in AI coding.
wait for it! PhD in vibe coding or relevant experience
Dude, if they want someone who is still using Sonnet 3.5 … that’s like punching your vibe code in on paper tape, these days.
“Senior” is implying exactly that, I thought…
eventually… lol
Spot security vulnerabilities instantly from a candidate that can’t actually write code.
Just ask the AI to make no failures. Just ask the AI to eliminate all failures. Easy $10,000 per year.
The real trick about vibe coding is that it’s like any other management skill - when your minions completely screw the pooch, you need to be able to step in and do it for them.
My managers are supposed to be skilled?
Supposed to be ≠ is.
I need to hire someone to take this functional 15-line program and, like, make it 200 lines of unusable madness.
But fast! Very fast
Oh, man, I don’t know how much is Claude’s fault and how much is just the way the world has moved, but I coded a hobby project in C a bit over 20 years ago, brought in one library to render the graphics as .jpg files and the whole thing was like 300 lines of code.
Claude “modernized” it for me, and yeah, it shows in a browser as a PWA and it’s working correctly (this time, via Opus 4.6 - the first time I tried with Sonnet 4.0 it couldn’t even make it work correctly) - but daaaaammn, there are like 454 files in deps and 1.4GB in the rust target folder - maybe it’s just a rust thing?
Rust & cargo do more than just compile. For example, cargo basically has built-in ccache.
It is also easier to split large libraries into multiple crates, though an average project still uses more libraries than an equivalent C project. I wouldn’t be surprised if the “AI” also pulled in more libraries than needed, or has unnecessary library features enabled. I’m pretty sure that a cargo plugin for pruning unused libraries was featured on the rust blog, as a featured third-party plugin for a cargo release.
Fucking idiots. I’m surrounded by idiots
We did it to ourselves. Developing mission-critical systems in scripting languages and always sacrificing quality for delivery. Fast and sloppy paid the bills, but we were digging our own graves. Once the industry became used to sloppy software, a relatively mild shift to even crappier, but far cheaper and more immediate software was a no-brainer. Customers have gotten used to shitty, buggy software. It doesn’t matter to them who’s writing it.
The only way for us to not “do this to ourselves” is to form unions. Otherwise we aren’t driving the decisions on what is used and what’s prioritized at all.
Amen.
Safety critical (aerospace, medical, precious few other) industries have regulated quality, with moderate success. It’s far from perfect, farther from ideal, but it is providing some additional resource and schedule allocation to do the things that need doing to ensure the systems don’t screw up too badly, too often.
Am in automotive and there’s definitely some of that. Much more so than in other industries I’ve worked. With that said, it’s a losing battle against the value proposition of AI. We’re getting AI use mandated on us.
I’m in one of those others I mentioned (and I try not to reference my company online because of… reasons), and we’re getting strongly encouraged to “integrate AI in our daily workflows, where it makes sense” - not just coding, but coding is an obvious target. As a business we tend to change slowly, so this will be… interesting.
Sounds almost like we work for the same company. 😂 Perhaps they all lifted this statement from the same consultancy contractor.
I wrote an app for my wife, and it was really sad watching her just fumble past bugs instead of pointing them out while I was literally watching over her shoulder to get feedback on what needed fixing. I had to tell her several times, “No, don’t just keep reloading. What’s wrong?” Like, we’ve all been trained so hard to accept shitty software that even when I could fix stuff easily, I know people are just passively accepting the bugs.
One of my junior devs had been having trouble with a bug in an internally developed tool, apparently for weeks, before I saw her struggling with it over her shoulder. It was a 5-minute fix. I hope I made it clear to her: speak up when something’s wrong - this 5-minute fix had already cost you many hours because you never told me you were having a problem.
Developing mission-critical systems in scripting languages
This is a wild take. If you’d come up in the 80s you’d be complaining about using C instead of hand-writing assembly.
In the 80s the hand written assembly was more reliable and performant than the C, at least on many of the compilers.
Even in 1990, I tried to launch a serious project in C++ on the IBM-PC, and the best available compiler was too buggy to use. It did fine for little demo apps, but by the time you wrote code for 2 weeks, you started hitting bugs - not in your code but in the compiler output… we had to fall back to C for the project. Even later, around 1994, we had two C compilers for 6811 work and one of them was garbage, I could hand write the assembly better and faster without even trying hard. The other one was pretty good, and by the late 1990s I stopped looking at C/C++ compilers’ assembly output because it was consistently better than I would write by hand.
There were already plenty of reliable compilers at least for the main architectures in use. Replace C with Fortran though if you prefer - complaining about python in mission critical software is a brain-dead take that belongs in the bin of history.
I’m curious if they have a live “vibe coding” session during the hiring process.
They should…
SOTA
Me: I want SoaD!
Mom: we have SoaD at home
At home: SotA, featuring such hits as
Sorta poisonous
lo mein
Let someone else bring the bombs
This is probably serious. Sounds like what my manager does already.
vibe manager?
is there any other kind?
i hope so. it’s clearly mentioned in the job description
SOTA vibe coding
but…
you have to use Replit and Cursor
Middle manager ass setup
But they use Cursor and “cloud” (probably meaning Claude, as it’s used in Cursor Pro).
Isn’t claude code considered SOTA vibe coding right now?
And I understood it as: you can choose whatever fancy tool you use. The vibe manager who generated this probably just told their LLM to mention SOTA AI coding tools in the prompt for this job description.
The SOTA changes every couple weeks, but Claude’s been very dominant for a while, yeah. There’s currently a lot of hype around GPT-5.4, but even then there’s a caveat that Claude is still better at UI.
I just personally find Cursor to be pretty buggy. But I think the Replit mention is more of a tell that someone vibe codes but doesn’t actually code. It’s been advertised to people as a way to build end to end apps without any coding experience. And to be fair, they’ve done a good job of building on the past decade of work in the Typescript community to make an entire app end to end type safe and therefore checkable by the compiler. Convex has done something similar in a way that I prefer and in my experience LLMs are very good at working in Convex projects as well.
Really at the end of the day I was just being pithy. Kind of poking fun at how much of a moving target SOTA is.
WTF?! 😳
It’s serious, and this is going to become more and more normal.
My entire workflow has become more and more Agile Sprint TDD (but with agents) as I improve.
Literally setting up agents to yell at each other genuinely improves their output. I have created and harnessed the power of a very toxic robot work environment. My “manager” agent swears and yells at my dev agent. My code review agent swears at the dev agent and calls its code garbage and shit.
And the crazy thing is it’s working; the optimal way to genuinely prompt engineer these stupid robots is by swearing at them.
It’s weird, but it overrides the “maybe the human is wrong/mistaken” stuff they’ll fall back to if they run into an issue, and instead they’ll go “no, I’m probably being fucking stupid” and keep trying.
I create “sprint” markdown files that the “tech lead” agent converts into technical requirements, then I review that, then the manager+dev+tester agents execute on it.
You do, truly, end up focusing more on higher level abstract orchestration now.
Opus 4.6 is genuinely pretty decent at programming now if you give it a good backbone to build off of.
- LSP MCPs so it gets code feedback
- debugger MCPs so it can set debug breakpoints and inspect call stacks
- explicit whitelisting of CLI stuff it can do to prevent it from chasing rabbits down holes with the CLI and getting lost
- Test driven development to keep it on the rails
- Leveraging a “manager” orchestrating overhead agent to avoid context pollution
- designated reviewer agent that has a shit list of known common problems the agents make
- benchmark project to get heat traces of problem areas on the code (if you care about performance)
This sort of stuff can carry you really far in terms of improving the agent’s efficacy.
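A couple of those guard rails - the explicit CLI whitelist and the TDD gate - can be sketched in a few lines of plain Python. The whitelist contents and function names here are made up for illustration, not any real MCP or agent API:

```python
import shlex

# Hypothetical whitelist: the only executables the agent may invoke.
ALLOWED_COMMANDS = {"pytest", "cargo", "git"}

def command_allowed(cmdline: str) -> bool:
    """Gate CLI use: reject any command whose executable isn't whitelisted."""
    argv = shlex.split(cmdline)
    return bool(argv) and argv[0] in ALLOWED_COMMANDS

def accept_change(run_tests) -> bool:
    """TDD gate: a proposed change is only accepted when the suite is green.
    `run_tests` returns an exit code (0 = all tests passed)."""
    return run_tests() == 0
```

The point isn't these ten lines themselves; it's that every agent action flows through a gate you wrote, not one the agent improvises.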
I am genuinely trying to keep up with things, but what I see is completely different from what you’ve been describing:
- My recent experience launching a swarm (3-4 Claude Opus agents) ended in a fiasco: a simple task ate $15-20 of Claude credits in less than ten minutes. It does look like science fiction, but it doesn’t produce anything.
- In my current role as a team lead, I had to review a lot of code, and I did something I’d never done before: reject entire PRs, because they contained a lot of architectural changes that added complexity to the system just to achieve the goal.
- I write much less code with Claude Code these days, mostly because I don’t trust it and have to recheck every single scenario. I trust the junior engineers on our team more than I trust this instrument.
I vastly prefer Copilot over Claude, using Sonnet 4.5-4.6 for most tasks and then pulling out Opus as “the big guns” for tougher stuff Sonnet can’t handle easily.
Copilot only costs me ~$28 a month, which gets me 1500 premium requests per month.
If you set up your flows well, 1 premium request is an entire session, so I’m only paying like 2.4 cents for 20 minutes of work.
ate $15-20 of Claude credits in less than ten minutes.
Lay off of MAX mode.
Also, if you’re paying API rates, look into the subscription options - I can’t burn the $200 subscription plan down much below 50% without pushing prompts into Claude every waking hour (unless I turn on MAX mode). At API rates? I can burn $50 in a few hours.
did something I’d never done before: reject entire PRs, because they contained a lot of architectural changes that added complexity to the system just to achieve the goal.
If you’re accepting the first thing the agent gives you, you’re almost certainly “doing it wrong” - gate it before it goes down a bad rabbit hole and redirect it, in writing, in architecture documents (which it can draft for you and correct based on your guidance). When it ignores those architecture documents, which it will do when things get big and complex, break them down into smaller chunks that apply to the various tasks at hand - yes, it can do this breakdown for you too, and that’s another opportunity for you to guide the process. I try to frame the output I get from AI in my mind as usually about 80% correct / useful, and it’s my job to identify the other 20% (which, in reality, is getting a lot smaller lately) and beef up the specifications and descriptions of the job until it can get everything to an acceptable state.
I don’t trust it and have to recheck every single scenario. I trust the junior engineers on our team more than I trust this instrument.
That would depend entirely on which junior engineer you are talking about, for me. I don’t trust Claude either. But for the most part I have Claude check itself, at an appropriately granular level. If you’ve got more than 2000 lines of Claude’s code without good visibility into what it’s doing, why it’s doing it, and what the outputs should look like… you’re trusting it too much. But it can write that documentation and testing for you; you just have to review it - at an appropriate level. If you’re trying to do that line by line for a big project, maybe you should still be writing it yourself instead.
Nah, such narratives are mostly pushed by AI companies (obviously they need to sell it as a business tool, not a personal buddy). Of course some managers/companies are buying into this narrative, which is understandable, because the idea sounds like a panacea, especially if you sell it on to investors :) and so we see the whole circle of snake oil sales.
It’s not a “narrative”; it’s their experience. I don’t have the same experience, but do have experience of myself and colleagues using LLM agents effectively and doing more work reviewing their output than writing lines of code. Some colleagues are pretty much AI boosters, but most are very aware of its limitations.
Nah, such narratives are mostly pushed by AI companies
Someone’s personal experience is an AI company narrative now?
It always was. Look at people trying to automate everything with the help of AI bots. Before AI companies started pushing this, none of these folks spoke about it or tried to reach the same goal with IFTTT or other tools that have been around for decades.
Some people do stuff the AI is good for: simple tasks that have been done a lot online already.
I hate AI for coding; it cannot work for me. I would never trust it to do anything.
I don’t think you understand the words you’re using…
Someone said “this is how I managed to make this work,” provided detailed explanations of it, and you’re dismissing it as propaganda rather than testing it for yourself. That is an unbelievably stupid stance.
You are escalating this too fast, taking it to a personal level. I feel you are close to bringing moms into this. So relax, let your AI buddy play with your parts. This chat is over.
Sorry you can’t handle someone telling you that what you’re saying doesn’t make sense. Hopefully someday you’ll grow up enough to have your words challenged.
Edit: Oh, lemmy.ml. That explains everything.
I don’t think a lot of people have a feel for the velocity of change… this time last year I evaluated the tools and they still felt like a waste of time for me. I looked again in August 2025 and things were… different. Not great, but you could see the potential, and the velocity of change. When Claude 4.6 dropped - whoa… not just code, it has been helping me draft plans for a new building (personal use) - I need to submit some paperwork to the county, they just hit me with a requirement for architectural elevation drawings, Claude is chewing on that problem for me right now, working from basic information about the roofline and a 2D floorplan. Oop - and it’s done, first pass took maybe 20 minutes, aaand… it’s not too bad, side elevations are quite good, I just need to remind it about the 6" roof overhangs. Front and rear are a little more funky looking, I’m guessing these will be ready after another couple of rounds of prompts, maybe 1 hour in total, as opposed to hiring an architect for the permit application… (now, will the county push back because I didn’t hire an architect? I sincerely hope not, they said photos or drawings - how am I supposed to get photos of a building that hasn’t been built yet?)
Update: 4 hours of refinement later, I have 4 elevation drawings ready to submit… it would have taken 4 hours to select and engage an architect and meet with them to describe the project and collect their work - and that would have been spread over a week or more, instead of done in one evening.
Opus 4.6 is genuinely pretty decent at programming now if you give it a good backbone to build off of.
Soup from a Stone.
Opus 4.6 is genuinely pretty decent at programming now if you give it a good backbone to build off of.
Soup from a Stone.
To an extent, yes. The more “broth base” I feed Claude, the better it does. If I just vaguely describe a program, I get a vague implementation of my description. If I have a big, feature-rich example (or better, examples) of what I want the program to do, Claude can iterate until the output of the program it makes actually matches the examples.
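That iterate-against-examples loop is simple to sketch; here `generate` is a stand-in for the model call, and all the names are hypothetical:

```python
def iterate_until_matches(generate, examples, max_rounds=5):
    """Regenerate a candidate function until it reproduces every worked example.
    `generate` stands in for a model call returning a callable; `examples`
    is a list of (input, expected_output) pairs."""
    for _ in range(max_rounds):
        candidate = generate()
        if all(candidate(x) == y for x, y in examples):
            return candidate
    return None  # gave up: no candidate matched all the examples
```

The richer the `examples` list, the tighter the loop's notion of "done" - which is exactly the "broth base" effect described above.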
Dude this boils down to “moving a hundred people is simple, I am a trained pilot and I used this 747 to move them”
Like great, you have the thousands of hours of training time required to understand a machine of that complexity and produce results.
Joe Dirt has 8000 hours in his puddle jumper, and that’s the majority of the people these 747s are being foisted upon. They know how to fly, and they provide that service reliably.
Telling them to move 5 people with a machine whose volume and range they don’t need is irresponsible.
I’m not sure if I’m reading your intent correctly or not, but the AI agents actually excel at “puddle jumper” tasks. Stupid stuff that you could write a one-off script for, but damn, that’s a lot of hassle. This afternoon a colleague and I were putting together a PowerPoint slide deck based on a folder full of disorganized garbage. Claude digested the garbage and wrote a Python script that generated the 7-slide .ppt file, and I’d swear it took almost as long to open Office 365 PowerPoint as it did for Claude to write the script.
I get that it’s convenient
I’m saying it’s unsafe to use if you don’t already completely understand the output
What I have found: all that stuff that has been evolving over the last 30 years - roadmap definition, sprint planning, unit tests, regular independent code reviews, etc. etc. etc. - that those of us who “knew what we were doing” mostly looked down on as the waste of time that it was (for us)… well, now you’ve got these tools that spew out 6 man-months of code in a few hours, and all those time-wasting code quality improvement / development management techniques apply, in spades. If you do all that stuff, and iterate at each quality gate until you’ve got what you’re supposed to have before proceeding, those tools actually can produce quality code - and starting around Opus 4.6 I’m not feeling the sort of complexity ceiling that I was feeling with its predecessors.
Transparency is key. Your code should provide insight into how it is running: insights the agent can understand (log files) and insights you can understand (graphs and images, where applicable). If it’s just a mystery box, it’s unlikely to ever do anything complex successfully; but if it’s a collection of highly visible white boxes in a nice, logical, hierarchical structure - Opus 4.6 can do that.
Unit tests seem to be well worth the extra time invested - though they do slow down progress significantly, they’re faster than recovering from off-rails adventures.
Independent reviewer agents (a clear context window, at a minimum) are a must.
If your agent can exercise the code on the target system, and read all the system log files as well as the log files it generates, that helps tremendously.
My latest “vibe tool” is the roadmap. It used to be “the plan” - but now the roadmap lays out where a series of plans will be deployed, and as the agent works through a plan, each stage of the plan gets a to-do list. Six months ago, it was just to-do lists, and agents like Sonnet 3.5 would sometimes get lost in those. Including documentation - both developer-facing (architecture, and specifications for the tests) and user-facing - and updating that documentation along with removing technical debt at the end of each roadmap stage also slows things down, but it keeps development on track much better than just “going for delivery.” So, instead of 6 months of output in a day, maybe we’re making 2 months of progress in a day, and generating about 10x the tests and documentation we would have in those 2 months traditionally. A 40:1 speedup, buried under a 500:1 volume of documents created.
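The roadmap → plan stage → to-do hierarchy described above can be sketched as nested data with a per-stage quality gate; the stage names and gate checks here are invented for illustration:

```python
# Hypothetical roadmap: each plan stage carries its own to-do list and a
# quality gate (tests green, docs updated, debt removed) that must pass
# before the agent is allowed to start the next stage.
roadmap = [
    {"stage": "data layer",
     "todos": ["write failing tests", "implement", "update docs", "pay down debt"],
     "gate": {"tests_pass": True, "docs_updated": True, "debt_removed": True}},
    {"stage": "report generator",
     "todos": ["write failing tests", "implement", "update docs", "pay down debt"],
     "gate": {"tests_pass": True, "docs_updated": False, "debt_removed": False}},
]

def next_stage(roadmap):
    """Return the first stage whose quality gate hasn't fully passed yet."""
    for item in roadmap:
        if not all(item["gate"].values()):
            return item["stage"]
    return None  # every stage gated through: roadmap complete
```

The gate is what keeps the agent from "going for delivery" past a stage whose documentation and debt cleanup never happened.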
roadmap definition, sprint planning, unit tests, regular independent code reviews, etc. etc. etc. that those of us who “knew what we were doing” mostly looked down on as the waste of time that it was
You sound insane.
Not really. For humans a lot of this stuff feels like busywork that sort of helps at certain scales of work, but oftentimes managers went WAY too hard on it, and you end up with a 2-dev team that spends like 60% of their time in meetings instead of… developing.
But this changes a lot with AI agents, because these tools that help rein in developers REALLY help rein in agents. It feels like a surprisingly good fit.
And I think the big reason why is that you want to treat AI agents as junior devs: capable, fast, but very prone to errors and getting sidetracked.
So you put these sorts of steering and guard rails in, and it REALLY goes far towards channelling their… enthusiasm in a meaningful direction.
Do you write your tests in meetings? Do you do code reviews in meetings?
Do you think testing and reviewing code was a waste of time before “AI”?
What the fuck are you talking about? That’s not what the poster said; you’ve done some weird contorting of what they said to arrive at the question you are asking now.
While some tests make sense, I would say about 99% of the tests I see developers write are indeed a waste of time. A shit tonne of devs are effectively writing code that boils down to
Assert.That(2, Is.EqualTo(1+1));
because they mock the shit out of everything and have reduced their tests to meaningless piles of fakes and mocks, and aren’t actually testing what matters.
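The same anti-pattern translated to Python: the first test mocks away the very thing it claims to verify, while the second exercises real behavior (`apply_discount` is invented for illustration):

```python
from unittest import mock

def apply_discount(price: float, pct: float) -> float:
    """Hypothetical production code: apply a percentage discount."""
    return round(price * (1 - pct / 100), 2)

def test_discount_mocked_into_meaninglessness():
    # Anti-pattern: this only proves the mock returns what we told it to.
    fake = mock.Mock(return_value=90.0)
    assert fake(100.0, 10) == 90.0

def test_discount_real_behavior():
    # Tests what matters: the actual arithmetic, including an edge case.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
```

The first test passes no matter what the production code does, which is exactly the "2 == 1+1" complaint above.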
Do you do code reviews in meetings?
Honestly often… yes lol
Do you think testing and reviewing code was a waste of time before “AI”?
I would say a lot of it is, tbh - not all of it, but a huge amount of time is wasted on this process by humans, for humans.
What the poster was getting at is that a lot of these processes that USED to be INEFFICIENT now make MORE sense in the context of agents… you have vastly taken their point out of context.
Insane, yet reliably employed in the field for 30+ years - first and current job for more than a decade.