freeone3000 19 hours ago

The author lives in a world where nothing has stakes. Where deploying without code review is a process optimization, not something that will break your certification on an audit, and potentially the law (Claude Code doesn't have a PEng cert). Where deploying a failure means that some users are mildly annoyed, rather than equipment loss and endangered lives.

It’s exactly the same high-churn no-regard-for-the-user that is modern “tech”. I would not use this approach on anything more serious than a SaaS, and I hope nobody else does either.

  • jdalton 19 hours ago

    I've lived the high stakes life. We can be this crazy passionate about being the HITL but that will eventually change.

    • freeone3000 4 hours ago

      Without an approved test harness? Doubtful. The exact viewpoint espoused in the article is that we should be deploying, agentically, into production, without testing, with no gating, and rolling back any failures. This is simply not a reasonable process. Current agentic development works much better with firm requirements and a solid test suite. In the future, we will still need the test suite, validated deterministically. Deploying bad software is not harmless.

  • throwaway314155 9 hours ago

    I think their example is a toy one, but a good analogy for what is likely to happen as AI is integrated more and more.

cadamsdotcom 21 hours ago

Maybe what is needed is selective gating - some PRs are the type you REALLY have to make sure are reviewed; others can go through a barrage of AI reviews (security, code-quality etc) and the author can merge.

Either human or AI (or both?) needs to be tagging PRs. Perhaps a traffic light system is appropriate? Red - needs close human review; green - AI review only; yellow - unclear / somewhere in the middle.

Using PRs to merge features gives auditability and traceability to which human merged which thing.

As always, it is situational. AI is exposing new shades of grey.
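To make the traffic-light idea concrete: a classifier like this could start as a simple path-and-size rule before any AI gets involved. Everything below — the path patterns, the 300-line threshold — is a made-up sketch for illustration, not an existing tool:

```python
from fnmatch import fnmatch

# Illustrative "red" path patterns; a real team would tune these.
RED_PATTERNS = ["*auth*", "*payment*", "migrations/*", "*.tf"]

def label_pr(changed_files, lines_changed):
    """Return 'red', 'yellow', or 'green' for a PR (hypothetical rules)."""
    if any(fnmatch(f, pat) for f in changed_files for pat in RED_PATTERNS):
        return "red"       # needs close human review
    if lines_changed > 300:
        return "yellow"    # unclear / somewhere in the middle
    return "green"         # AI review only; author can merge

print(label_pr(["src/billing/payment_service.py"], 12))  # red
print(label_pr(["docs/README.md"], 5))                   # green
```

A rule this dumb would obviously be gamed, which is why the human-or-AI tagging question above matters: something has to assign the labels, and that assignment is itself the audit trail.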

zerotolerance a day ago

The coding time is irrelevant and always has been. Time writing code has never been the high cost or challenge especially so in blue sky / green field development cases like those described in all these articles. And as far as ops processes go, we're already lightyears faster than we were even a decade ago. I could rant about ITIL and these integrated flow fantasies, but even that is a distraction.

The bulk of engineering time is spent engineering (not writing code), researching the right thing to build, reviewing plans with product ownership, considering operating context and constraints, adjusting designs and redeveloping based on learnings. Writing code is the easy part when everyone leaves you alone and you just cook. Those meetings aren't going anywhere because at the end of the day it takes a lot of back and forth to even come up with a relatively stable spec.

I agree that ops automation is important, but it's hard to take this article seriously.

  • jdalton a day ago

    I see those types of meetings just being labeled context gathering in the future.

thwarted 18 hours ago

> Not just a quick hack...proper production-ready code with error handling, logging, environment checks, documentation, the entire implementation. Working, tested, committed to git. … This wasn't copy-paste from Stack Overflow. This was bespoke, production-ready code tailored to my specific Laravel app, following my existing patterns, with proper security considerations.

Why does any of this matter if you're barely bothering to look at/review it and you'll probably throw it out if there's a problem with it? If it only takes 34 seconds to generate, the whole thing is disposable and none of these things that are important for maintenance matter. It doesn't matter if it follows your existing patterns because you're not going to maintain it. Documentation doesn't matter, because you're not bothering to configure it or even understand how it works. And you're not going to figure out why it doesn't work, you're just going to ask the LLM to fix the bug or rewrite it for you.

You're trying to sell someone on using the bespoke, shrink-wrapped software generator by pointing out things that don't matter to people who want to use shrink-wrapped software, and do matter to people who are not interested in shrink-wrapped software.

> If you're a developer: Learn to work with AI tools. Not just as a fancy autocomplete, but as a collaborative partner. The developers who figure this out first will have an insurmountable advantage.

I guess if you can get it to generate code that looks like code you wrote, you can gloss over the fact that you didn't actually write it but still put your name next to it because you typed in the prompt. This is ordering food in a restaurant, and calling yourself a chef.

There's definitely utility, for some people in some situations, to be able to order food and have it delivered to them ready to eat. And there's utility to having a personal chef who will provide anything you ask for. But you don't call that cooking.

  • Cheer2171 17 hours ago

    > But you don't call that cooking.

    Software work today is closer to fast food work than it has ever been, even before gen AI. College students graduating from solid CS programs are flipping burgers right now because the job market has collapsed.

    I fucking hate it, but what is there to do about it?

  • jdalton 18 hours ago

    I provide the AI samples of my work, with context, and it follows them pretty well.

    I have these patterns as part of the command so they're not buried deep inside a context window. Claude is great at looking at the entire codebase and following the styles and approaches it comes across.

    • thwarted 18 hours ago

      That's great. But my point is that the style and approach doesn't matter when you can spend 30 seconds producing something that you'll never have a reason to look at the inside of and you can throw away and regenerate. "Look, it can write code just like yours!" is said as if that's a selling point to use it. Consistency, in style and approach, has been talked about for decades because it's important for the humans who are involved with the code. But using an LLM to generate the code also means no human will ever need to be maintaining it, or at least that's what's really being sold with LLM code generation, so none of the things that are important to humans matter at all.

hooverd a day ago

I'll give this credit for being good FOMO content marketing.

  • OutOfHere a day ago

    The article preys on the gullible who are naive enough to think that AI actually writes flawless code. It doesn't.

    • jdalton a day ago

      I don't think it said it wrote code flawlessly.

      • quectophoton a day ago

        They might not have said it explicitly, but heavily imply it with:

        > 34 seconds later, it was done. Not just a quick hack...proper production-ready code with error handling, logging, environment checks, documentation, the entire implementation. Working, tested, committed to git.

        > This wasn't copy-paste from Stack Overflow. This was bespoke, production-ready code tailored to my specific Laravel app, following my existing patterns, with proper security considerations.

        And with their proposed "AI-Optimized Process" conspicuously lacking code review, QA, manual approval. Just going straight from prompt to production, with no human supervision after the prompt is written. Otherwise those steps would have been mentioned in the list, the same way "Traditional API Integration Process" mentions them.

        At that point might as well add a button to Jira that says "Implement this".

        • toomuchtodo a day ago

          > At that point might as well add a button to Jira that says "Implement this".

          And just like that, person-centuries of remediation/refactoring consulting work were created.

        • jdalton a day ago

          I wouldn't be surprised if Jira offered that in the next 6 months. Wait until AI gets deeper into PM work... scrum could die and I wouldn't shed a tear.

          • hooverd 4 hours ago

            With AI agents PMs will be able to run multiple meetings at the same time.

            A crucial mistake people make with AI is assuming the other side won't have the same tools for time wasting.

jdalton a day ago

I asked Claude to integrate Google Indexing API. 34 seconds later, it was done. I hadn't even gotten the API keys by then. Crazy time to be alive.

  • PaulHoule a day ago

    Google is extra bad. 10+ years ago I did a shoot-out of several machine learning APIs. All of them were less than 20 minutes to start running queries through, except for Google, which took upwards of an hour and trashed every Python runtime on my machine.

  • Ronsenshi a day ago

    Sounds like we need a Google API which would allow us to create Google API keys, so that we can ask an LLM to do it.

    But then again, you would need a key for that first API...

    • jdalton a day ago

      A centralized API key registry would be cool, but risky as hell.

Leynos a day ago

This will be the next benchmark suite: how long does it take your model to interactively retrieve an API key.

  • viraptor a day ago

    This is going a bit in the wrong direction. You don't need it to happen interactively. Models are fine writing Terraform code that could handle all of that. But your company / processes have to be ready for that.
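    For what it's worth, key provisioning really is scriptable today: Google exposes an API Keys API (apikeys.googleapis.com, v2), which Terraform's google_apikeys_key resource wraps. Here's a rough Python sketch of the request such code would make — the project id and display name are placeholders, and the actual call needs real OAuth credentials, so only the payload construction runs here:

```python
import json
import urllib.request

# Hypothetical project id; endpoint shape follows Google's API Keys API v2.
PROJECT = "my-project"
ENDPOINT = (
    f"https://apikeys.googleapis.com/v2/projects/{PROJECT}/locations/global/keys"
)

def build_key_request(display_name, service):
    """Build the JSON body for creating an API key restricted to one service."""
    return {
        "displayName": display_name,
        "restrictions": {"apiTargets": [{"service": service}]},
    }

body = build_key_request("indexing-key", "indexing.googleapis.com")
print(json.dumps(body))

def create_key(token):
    # Needs a real OAuth access token and permissions; not executed here.
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```

    The point stands either way: the bottleneck is organizational (who is allowed to run this, and where the resulting key is stored), not whether a model can write it.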

  • jdalton a day ago

    It could possibly become that.

andrewstuart a day ago

Big clouds - AWS, Google, Azure - are so complex that even just getting an API key is a painful, expert-level project that you might give up on.

I prefer smaller companies where you go to account settings and “download API key”.

This pain level is a genuine factor for me in deciding whether to use a big cloud service.

  • jdalton a day ago

    Oh gosh, don't get me started on AWS!

7373737373 13 hours ago

It's the same complete bullshit with getting OAuth credentials, every time, everywhere

I haven't found a single service provider that made that step trivial

If using systems securely isn't trivial, then people will use other ways, or not use the system at all.

OutOfHere a day ago

The article is complete nonsense because AI generated code is often buggy, and always needs to be reviewed in detail. Also, the code can only be good if the prompt is good and detailed. All of this takes up a significant amount of time. It would seem that the author is technically incapable of reviewing code, which is why it's not even an afterthought.

  • jdalton a day ago

    It would seem you're far off.

bsder a day ago

So ... why couldn't the AI get the API keys?

Now THAT is a task that I would like AI to deal with for me.