padolsey 18 hours ago

> if you want to sculpt the kind of software that gets embedded in pacemakers and missile guidance systems and M1 tanks—you better throw that bot out the airlock and learn.

But the bulk of us aren't doing that... We're making CRUD apps for endless incoming streams of near-identical user needs, just with slightly different integrations, schemas, and lipstick.

Let's be honest. For most software there is nothing new under the sun. It's been seen before thousands of times, and so why not recall and use those old nuggets? For me coding agents are just code-reuse on steroids.

P.S. Ironically, the article feels AI-generated.

  • queenkjuul 15 hours ago

    Fundamentally, mission-critical low-level code isn't the kind of software I want to write anyway. I don't find AI tools super useful for most of the same reasons as the author, but I do kind of get tired of the idea that if you're not writing systems in C, you're not really programming.

    I like writing front-end code. I'm probably never going to have a job where I need or would even want to write a low-level graphics library from scratch. Fine, I'm not red-eyed 3am hacker-brained, but I'm passionate and good at what I do. I don't think a world where every person working in software has the author's mentality is realistic or desirable.

    • sgarland 6 hours ago

      > I like writing front-end code. I'm probably never going to have a job where I need or would even want to write a low-level graphics library from scratch. Fine, I'm not red-eyed 3am hacker-brained, but I'm passionate and good at what I do.

      Keep that spirit! All I want from coworkers is genuine interest and curiosity. Not everyone is going to find investigating Linux’s networking stack interesting, just as not everyone is going to find making beautiful pure-CSS animations interesting. I think one of the greatest mistakes the tech industry made was creating “full stack,” as though someone would have interest and skill in frontend, backend, and infra. Bring back specialists; we’re all better for it.

  • agos 4 hours ago

    from TFA:

    > Maybe you’ll never write the code that keeps a plane in the sky. Maybe you’ll never push bits that hold a human life in the balance. Fine. Most don’t. But even if you're just slapping together another CRUD app for some bloated enterprise, you still owe your users respect. You owe them dignity.

    • Ferret7446 3 hours ago

      > you still owe your users respect. You owe them dignity.

      This is moral grandstanding. You owe your customers a good product at a low cost. If you don't use a tool that can lower costs, you are wronging your users and will go out of business.

      Handcrafted CRUD will go the same way as handcrafted anything: an expensive niche hobby.

    • jonaustin 3 hours ago

      > you still owe your users respect. You owe them dignity.

      What does that even mean?

      Users don't care if code is written by a human or AI; they care that the code gives them what they need, hopefully in a fairly pleasant manner.

  • throwaway314155 18 hours ago

    I don't mind your rebuttal of the article, but to suggest that this particular article is AI-generated is foolish. The style the author presents is vivid, uses powerful imagery and metaphor, and at times is genuinely funny. More qualitatively, the author incorporates a unique identity that persists throughout the entirety of a long-form essay.

    All of that is still difficult to get an LLM to do. This isn't AI generated. It's just good writing. Whether you buy the premise or not.

    • chrismorgan 14 hours ago

      Yeah, it feels a very different style of unhinged from LLMs. I can’t yet imagine an LLM producing such a beautiful and contextually appropriate sentence as “They’ll be out there trying to duct-tape horses to an engine block, wondering why it doesn’t fly.”

      • nick3443 13 hours ago

        This brought tears to my eyes:

        But you—at your most frazzled, sleep-deprived, raccoon-eyed best—you can try. You can squint at the layers of abstraction and see through them. Peel back the nice ergonomic type-safe, pure, lazy, immutable syntactic sugar and imagine the mess of assembly the compiler pukes up.

        Amazing

    • a0123 30 minutes ago

      > The style the author presents is vivid, uses powerful imagery and metaphor, and at times is genuinely funny. More qualitatively, the author incorporates a unique identity that persists throughout the entirety of a long-form essay.

      It's incredible you would say that, because you'll never guess what it reads like.

    • queenkjuul 15 hours ago

      I got slight LLM vibes for the first few paragraphs, ngl. It became clear very fast that it wasn't, though.

    • boxed 18 hours ago

      Maybe he meant it was long. Some people seem to think that long walls of text are how you spot AI slop.

      • nozzlegear 16 hours ago

        And god forbid you use an em dash these days.

        • xigoi 7 hours ago

          I’ve recently been accused of using ChatGPT because I wrote a message with formal language and bullet points.

  • anon7000 17 hours ago

    Yeah, and the article talks about those ways in which AI is useful. Overall, the author doesn’t have a problem with experts using AI to help them. The main argument is that we’re calling AI a copilot, and many newbies may be trusting it or leaning on it too much, when in reality, it’s still a shitty coworker half the time. Real copilots are actually your peers and experts at what they do.

    > Now? We’re building a world where that curiosity gets lobotomized at the door. Some poor bastard—born to be great—is going to get told to "review this AI-generated patchset" for eight hours a day, until all that wonder calcifies into apathy. The terminal will become a spreadsheet. The debugger a coffin.

    On the other hand, one could argue that AI is just another abstraction. After all, some folks may complain that over-reliance on garbage collectors means that newbies never learn how to properly manage memory. While memory management is useful knowledge for most programmers, it rarely comes up in practice in many modern professional tasks. That said, at least knowing about it means you have a deeper level of understanding and mastery of programming. Over time, all those small, rare details add up, and you may become an expert.

    I think AI is in a different class because it’s an extremely leaky abstraction.

    We use many abstractions every day. A web developer really doesn’t need to know how deeper levels of the stack work — the abstractions are very strong. Sure, you’ll want to know about networking and how browsers work to operate at a very high level, but you can absolutely write very nice, scalable websites and products with more limited knowledge. The key thing is that you know what you’re building on, and you know where to go learn about things if you need to. (Kind of like how a web developer should know the fundamental basics of HTML/CSS/JS before really using a web framework. And that doesn’t take much effort.)

    AI is different — you can potentially get away with not knowing the fundamental basics of programming… to a point. You can get away with not knowing where to look for answers and how to learn. After all, AIs would be fucking great at completing basic programming assignments at the college level.

    But at some point, the abstraction gets very leaky. Your code will break in unexpected ways. And the core worry for many is that fewer and fewer new developers will be learning the debugging, thinking, and self-learning skills which are honestly CRITICAL to becoming an expert in this field.

    You get skills like that by doing things yourself and banging your head against the wall and trying again until it works, and by being exposed to a wide variety of projects and challenges. Honestly, that’s just how learning works — repetition and practice!

    But if we’re abstracting away the very act of learning, it is fair to wonder how much that will hurt the long-term skills of many developers.

    Of course, I’m not saying AI causes everyone to become clueless. There are still smart, driven people who will pick up core skills along the way. But it seems pretty plausible that the percentage of people who do will decrease. You don’t get those skills unless you’re challenged, and with AI, those beginner-level “learn how to program” challenges become trivial. Which means people will have to challenge themselves.

    And ultimately, the abstraction is just leaky. To a novice, AI might look like it solves your problems for you, but once you see through the mirage, you realize that you cannot abstract away your core programming & debugging skills. You actually have to rely on those skills to fix the issues AI creates for you — so you better be learning them along the way!!

    Btw, I say this as someone who does use AI coding assistants. I don’t think it’s all bad or all good. But we can’t just wave away the downsides just because it’s useful.

    • jgraettinger1 14 hours ago

      > On the other hand, one could argue that AI is just another abstraction

      I, as a user of a library abstraction, get a well-defined boundary and interface contract — plus assurance it’s been put through its paces by others. I can be pretty confident it will honor that contract, freeing me up to not have to know the details myself or second-guess the author.

    • seanmcdirmid 17 hours ago

      > Btw, I say this as someone who does use AI coding assistants. I don’t think it’s all bad or all good. But we can’t just wave away the downsides just because it’s useful.

      Isn't this just a rehash of the arguments against interactive terminals in the 60s/70s (no more need to think very carefully about what you enter into your punch cards!), debuggers (no more spending time looking carefully at code to find bugs!), IntelliSense/code completion from the late 90s (no need to remember APIs!), or Stack Overflow from the 00s (no need to sift for answers to questions others have had before!)? I feel like we've been here before and moved on (hardly anyone complains about these anymore; no one is suggesting we go back to programming by rewiring the computer), and I wonder if this time will be any different. Kids will just learn new ways of doing things on top of the new abstractions, just like they've done for the last 70 years of programming history.

      • sgarland 17 hours ago

        Interactive terminals didn’t write code for you, and also unlocked entirely new paradigms of programs. Debuggers, if anything, enabled deeper understanding. Intellisense is in fact a plague and should not exist. Stack Overflow, when abused, is nearly as bad as AI.

        • seanmcdirmid 13 hours ago

          I think we should just agree to disagree. All of those opened up new paradigms for programming, and so will AI even if we aren’t quite sure what that new paradigm is yet. There will always be people claiming the old-fashioned way is better, like Dijkstra’s famous complaint about kids not using punch cards anymore and how that meant they weren’t learning how to be good programmers.

          • pona-a 11 hours ago

            We're actually quite certain what this new paradigm is, because some poor souls are already practicing it: slop coding. You prompt-whisper poorly defined changes, and if the machine chokes on its own vomit along the way, you delete everything and try again.

            It feels reasonable and consistent to see this as another "old man yells at the sky" scenario, but I do think it's unprecedented for a machine to automate thought itself, on an unbounded domain and with such unreliability. We know calculators made people worse at mental math, but at least calculators don't give you off-by-one errors 40–60% of the time with no method of verification.

            The reason we haven't lost literacy to Speakwrites and screen readers is that they required more time and effort than doing it yourself. With AI, the supposed time savings are obvious: you don't put hours into reading the sources to write an essay, you just ask ChatGPT; you don't learn programming fundamentals, you just ask for a script that does X, Y, and Z; etc. It feels like a good choice, but you're permanently crippling your education, both in a structured course and in the wild, and the supposed oracle is a slot machine, costing you $avg_tokens*$model_rate a pull. The bad news is slot machines sell.

            • seanmcdirmid 4 hours ago

              I don’t think we’ve really figured out how to use AI in coding yet; vibe coding doesn’t really feel like it’s it. Vibe coding treats AI as a way to just generate code, like how some people claim IntelliSense is just to save on typing, when it’s actually a great in-situ way to browse which members can be selected on a value of a certain type.

              There is definitely a way to abuse AI in programming, but it doesn’t seem to be very compelling, and I don’t think it will get people who do that very far (e.g., relying on IntelliSense to save on typing rather than just learning how to type).

              ChatGPT is a great writing tool if you already know how to write. You can curate and modify on top of it, allowing you to write your paper faster with the same quality. But again, people just using it to write essays or papers without knowing how to write themselves aren’t going to get good results.

              • pona-a 3 hours ago

                I understand what you mean, but let's be honest: this is a rare kind of tool that's more useful to feign competence, deceive yourself and others, and produce industrial volumes of slop than it is to do better work. IntelliSense is just dynamic documentation, which has existed since at least Emacs — it doesn't do the thinking for you.

                Professional tools, from music notation and art to typesetting and programming, are about translating an image inside your mind into something physical. When you know what you're doing, the lack of an interpretable mapping between prompt and generation means you spend more time trying to describe what you want to write instead of just writing it. I'd be much happier with code generation if it could take a formal specification and either return an error or something that provably implements it. Maybe interpretability research will one day change that, but as they are now, they're simply not tunable or reliable enough to be used as tools. And yes, prompting doesn't count when they increasingly disregard your instructions.

                There are many valid uses: I have a tiny WolframAlpha-like script that lets me type some basic computations and the LLM translates that to Python. I sometimes use LLM completions to get some inspiration when writing prose — while I usually discard them, they still help me think. They can often act as better grammar checkers than LanguageTool, and they make a nice companion to smaller translation models, both having their own quirks.

                But most of this doesn't need these larger and larger models; I haven't tried yet, but I think fine-tuning some mid-size open-weights LLMs will yield similar or better results. The industry sold the public AGI, not better auto-complete, a fuzzy parser, or a smarter translator, and now they're burning growing piles of money on a saturated research direction to maintain the delusion that the singularity is 5 months away.

  • ivape 18 hours ago

    It’s funny because I made a few funny clips (to my taste) on Google Whisk and figured, hey, why not, let’s make a TikTok. Did you know that TikTok is full of millions of AI-generated clips and people just copying each other’s stuff? I really thought there was something to this “original creation” stuff.

    We are all so simply reproducible. No one’s making anything special, anywhere, for the most part. If we all uploaded a TikTok video of daily coding, it would be the same fucking app over and over, just like the rest of TikTok.

    Elon may have been right all along; there’s literally nothing left to do but go to Mars. Just two years ago, some of us were telling many of you that the LLMs don’t hallucinate as much as you think, and I think the late-to-the-party crowd needs to hear us again - we humans are not really necessary anymore.

    !RemindMe in 2 years

    • whattheheckheck 16 hours ago

      There's literally genocide and war going on. Solve that.

      • queenkjuul 15 hours ago

        Don't encourage them. They'll just build an AI to do the genocide faster.

      • ivape 11 hours ago

        The genocide is televised; it appears no one cares.

felubra 17 hours ago

> The machine is real. The silicon is real. The DRAM, the L1, the false sharing, the branch predictor flipping a coin—it’s all real. And if you care, you can work with it.

This is one of the most beautiful pieces of writing I’ve come across in a while.

  • bwfan123 17 hours ago

    Same. The author writes like Dave Barry. I burst out laughing more than once. He was able to articulate, with a lot of humor, exactly what I think of Copilot.

lunarcave 17 hours ago

The thing that's most often missed in these discussions is that "writing code" is just the end artefact. It doesn't account for the endless tradeoffs made in producing said artefact - the journey to get there.

Just try implementing a feature with a junior in a mildly complex codebase and you'll catch all the unconscious tradeoffs that you make as an experienced developer. AI has some concept of what these tradeoffs are, but mostly by observation.

AI _does_ help with writing code. Keyword there being - "help".

But thinking is the human's job. LLMs can't/don't "think". Figuring out how to get the AI to produce the output you want is also your job. You'll think less and less as models get better.

CGamesPlay 18 hours ago

This resonates with me, for sure; both the benefits and the drawbacks of Copilot. But while I think kids and hackers were artisans, engineers were always just engineers. The amazing technical challenges they had to solve to create some of the foundational technologies we have today exist because they had to solve those challenges. Looking only at these and saying "that's how things used to be" is survivorship bias.

  • gleenn 17 hours ago

    I feel privileged to be able to say that, as a software engineer who's been doing it the hard way for 20+ years, I relish the hard problems. The CRUD-app updates are unbearable without the random in-between challenges that bend my mind. The rare recursive algorithm, the application of some esoteric knowledge I actually learned in college, actually having to do big-O estimates: these are the gems of my career that keep me sane. I hope the next flock of AI-driven SWEs appreciates these things even more, given that AI can spout off answers which are sometimes right and sometimes horribly wrong. Challenges like these will always need someone who actually knows what to do when the AIs start hallucinating or the context of the situation is beyond the context window.

jboggan 16 hours ago

This is the crux of the piece to me:

"We'll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software—and the idea of squeezing every last drop of performance out of a system, or building something lean and wild and precise, will sound like folklore."

This somewhat lines up with my concerns about pre-2023 libraries and patterns getting frozen in stone once we pass the event horizon where most new code to train on is generated by LLMs. We aren't innovating; we are going to forever reinforce the screwed-up dependency stack and terrible kludges of the last 30 years of development. JavaScript is going to live forever.

Jcampuzano2 18 hours ago

As a preface, I think lots of people will not like this take.

A lot of people are going to have to come to a realization that has been mentioned before, but that many find hard to grasp.

Your boss, stakeholders, and especially non-technical people literally give 0 fucks about "quality code" as long as it does what they want it to do. They do not care about tests beyond "if it works, it works." Many have no clue about, nor do they care about, whether something refetches the world in certain scenarios. And AI - whether we like it or not, whether it repeats the same shit and isn't DRY, doesn't follow patterns, reinvents the wheel, etc. - is already fairly good at that.

This is exactly why all your stakeholders and executives are pushing you to use it. They've been fed that it just gets shit done and pumps out code like nothing else.

I really think a lot of the reason some people say it doesn't give them as much productivity as they would like is a desire to write "clean" code based on years and years of our own training, and the need to pass code review done by your peers. If these obstacles were entirely removed and we went full band-aid-off, I do think AI even in its current state is fairly capable of replacing plenty of roles. But it does require a competent person steering to not end up in a complete mess.

If you throw away the guardrails a little bit and stop obsessing about how nice the code looks, it absolutely will move things along faster than you could before.

  • allthenopes25 18 hours ago

    A sane person writes clean code because they are going to have to maintain it themselves one day a few years into the future when the baby kept them up all night for three nights in a row and coffee isn't working for them anymore and they can't remember anything about it and it's falling over so it's really urgent and f**** those guys they swore they'd get someone to take this on with a proper handover and surely there was a goddamn sent email somewhere about it but nothing is coming up when you search and goddamnit it used to compile did people ignore your comment about how it won't build with the new version yet so don't update the build tools and and and

    You write good code because you own it.

    If you get ChatGPT or Copilot or Claude or whateverthe****else to write it, you're going to have a whole lot less fun when it's on fire.

    The level of irresponsibility that "vibe coding" is introducing to the world is actually worse than the one that had people pouring their savings into a shitcoin. But it's the same arseholes talking it up.

    • the_snooze 5 hours ago

      >The level of irresponsibility that "vibe coding" is introducing to the world is actually worse than the one that had people pouring their savings into a shitcoin. But it's the same arseholes talking it up.

      It's a broader ethos of irresponsibility and disposability that's infected so much of modern tech. Why polish up a game when you can ship it now and fix it later? Why delay an iPhone release that's "Built for Apple Intelligence (TM)" when you can sell the hype today and maybe deliver next quarter?

      Everyone wants the reward but none of the work and accountability. You're not even doing the minimum level of work; you're just putting up the appearance of effort. Who cares if what you built is just a brittle husk? Vibe coding is just the natural progression of that cynical philosophy.

  • bravetraveler 18 hours ago

    I suspect experienced folks are well aware of the subjectivity of quality. How? Change control.

    Quality control exists until The Business deems otherwise. The reasons vary: vulnerability, promotion, whatever. Usually not my place to say.

    Personally, my 'product' isn't code. Even the 'code' isn't code. For every 8 hours of meetings I do, I crank out maybe 20 lines of YAML (Ansible). Then, another 4 hours of meetings handing that out/explaining the same basics for Political Points.

    The problem(s) relating to speed or job security have remarkably little to do with code; generated or not. The people I work with are functionally co-dependent because they don't use LLMs or... manuals.

    All this talk about "left behind"... to survive a bear, one doesn't have to be the fastest. Just not the slowest.

  • lukan 18 hours ago

    I lost all my snobbery about that years ago, and I just follow the paradigm "does it work". But for some reasons, even with AI, I'm not on another level.

    And those reasons are: it all collapses very quickly once the complexity reaches a medium amount.

    And if I want to rely on things and debug them, I cannot just have a pile of generated garbage that works only as long as the sun is shining. For isolated tasks it works for me. For anything complex, I am faster on my own.

    • Jcampuzano2 18 hours ago

      Most replies here claim that all AI-generated code is "garbage". And I can't help but think most of the people who say that do not actually use it day to day with the most recent models, or actually give it good instructions/requirements.

      No, it is not always perfect. Yes, you will have to manually edit some of the code it generates. But yes, it can and will generate good code if you know how to use it and use sophisticated tools with good guidance. And there are times when it will even write better, more performant code than you could, given the time constraints.

      • mplanchard 17 hours ago

        It writes code well enough in simple contexts, sometimes. But that code is also easy to write, indeed often easier to write than to review. It struggles in more complex contexts and with more complex constraints. Unfortunately, it’s the latter case where I most often have any desire to reach for an aid, and it has failed so consistently and so often there that I have largely stopped trying.

        It’s nice when you need to do something simple in an unfamiliar but simple context, though.

        It seems though that a lot of the narrative here from its proponents is that we’re just not trying hard enough to get it to solve our problems. It’s like vimmers who won’t shut up about how it’s worth the weeks of cratered productivity in order to reach editing nirvana (I say this as one of them).

        Like with any tool, the learning curve has to be justified by the results, but the calculation is further complicated by the fact that the AI tooling landscape changes completely every 3-6 months. Do I want to spend all that time getting good at it now? No. I’ll probably spend more time learning to use it when it’s either easier to get results that actually feel useful or when it stops changing so often.

        Until then I’ll keep firing it up every once in a while to have it write some bash or try to get it to write a unit test.

      • lyu07282 17 hours ago

        I think the root of the argument is that the AI critics are worried because they have the assumption that 1) you aren't experienced enough to know what "good code" looks like, and/or 2) you only care if it "works" and don't understand all the implications bad code will have downstream, again because of inexperience.

        • lukan 11 hours ago

          If the code is an isolated module or function, all is well.

          Otherwise, often not. But I am not worried. Give me an AI tool that can work reliably with my whole codebase and I'll gladly use it.

  • edoceo 18 hours ago

    This is true. But the cost afterward is high. Creating code, by hand or with AI, is easier than maintaining or modifying the system.

    And this is where a problem (still) appears - except now the AI-assisted authors have even less comprehension of the system.

  • lamename 18 hours ago

    They will demand use of AI tools for "productivity" and then complain when there are bugs in prod without realizing the root cause.

    • Jcampuzano2 18 hours ago

      They won't blame the AI as the root cause. They will blame you.

      This is why I mention you need to be competent enough to understand what is being generated, or they will find someone else who is. There are no two ways around it. AI is here to stay.

      • lamename 17 hours ago

        Yes, I agree. They will and should blame the human. That's a problem when the human isn't given enough time to complete projects because "AI is SO productive".

      • yencabulator 17 hours ago

        The idea that you will actually review all that vibecoded slop code comes across as very naive.

        The real question is how many companies have to accidentally expose their databases, suffer business-ruining data losses, and have downtime they are utterly unable to recover from quickly before CxOs start adjusting their opinions?

        Last time I saw a "Show HN" of someone showing off their vibecoded project, it leaked their OpenAI API key to all users. If that's how you want to run your business, go right ahead.

      • drekipus 17 hours ago

        > you need to be competent enough to understand what is being generated

        We're all competent enough to understand what is generated. That's why everyone is doomer about it.

        What insight do you have that the rest of us lack when the LLM generates

            true="false"
            while i < 10 {
              i++
            }
        
        What's the deep philosophical understanding that you have about this that makes us all sheeple for not understanding how this is actually the business's golden-egg-laying goose, and not the engineers'?

        Frankly, businesses that use this will drop all their actual engineers and then fall over when the slightest breeze comes.

        I am actually in favour, in an accelerationist sense.

  • joks 17 hours ago

    And then you have zero domain experts, because nobody has built the mental model of the code and what it's doing - the model you inherently build when you're actually doing the problem-solving and code-writing yourself.

  • charles_f 15 hours ago

    As long as I am going to be the one called at night when there's a crash, I'll be the one to dictate what's "good enough". Throw away guardrails if you want to; I like sleeping.

  • nkrisc 17 hours ago

    Then who fixes it when it stops working? Or do you just pat each other on the back and fold the company?

  • boxed 18 hours ago

    Clean code is nice.

    Working code is a requirement.

    You missed the point. AI slop doesn't just fail on point 1. It fails on point 2.

    • Jcampuzano2 18 hours ago

      I work at an enterprise company where we just recently got access to use Cursor; before that, Copilot.

      We have literally stood up entire services, built practically entirely with AI, that are deployed right now and that consumers are using.

      AI does work with competent people behind the wheel. People can't keep hiding behind saying that it always churns out code that doesn't work. We are way past those days. If you don't use it, you will end up losing your job. There's no way around it. The problem is we may end up losing our jobs either way.

      • MeetingsBrowser 18 hours ago

        > entire services built practically entirely with AI

        What kind of services and how complex are they?

        I've been using Cursor for a year and struggle to get the agent to make competent changes in a medium sized code base.

        Even something isolated, like writing a test for a specific function, usually takes multiple rounds of iteration and manual cleanup at the end.

        • Jcampuzano2 17 hours ago

          Most services written today (not just talking about my company/experience; I'm talking broadly) are not complex. Many are basically a copy-paste of the same types of things that have been built thousands of times before. With that knowledge, and knowing how LLMs work, it should come as no surprise that AI can spin up new ones quite competently and quickly.

          Regarding tests: that is also something I and many of my peers find that LLMs excel at. Given X inputs and Y outputs, an LLM will spit out a whole suite of tests for every case of your functions without issue, except in complicated scenarios. End-to-end tests it may not do as well at, since they usually require a lot of externalities/setup, but it can help generate some of that setup and, given examples, build from there. Of course, this depends on how much you value those tests, since some don't even think tests are that useful nowadays.
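
          To make that concrete, the shape it handles well is table-driven: one row per input/output pair, asserted in a loop. A minimal sketch in C (the clamp function and the cases are invented for illustration, not any model's actual output):

              #include <assert.h>

              /* Hypothetical function under test. */
              static int clamp(int x, int lo, int hi) {
                  return x < lo ? lo : (x > hi ? hi : x);
              }

              int main(void) {
                  /* One row per case: inputs -> expected output. */
                  struct { int x, lo, hi, want; } cases[] = {
                      { 5, 0, 10,  5},  /* in range           */
                      {-3, 0, 10,  0},  /* clamped to floor   */
                      {42, 0, 10, 10},  /* clamped to ceiling */
                      { 0, 0,  0,  0},  /* degenerate range   */
                  };
                  for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++)
                      assert(clamp(cases[i].x, cases[i].lo, cases[i].hi)
                             == cases[i].want);
                  return 0;
              }

          Enumerating rows like these is exactly the mechanical, low-context work it's good at; deciding which cases actually matter is still on you.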

      • boxed 17 hours ago

        > AI does work with competent people behind the wheel

        So do extremely junior devs who are really bad, provided you code review EVERYTHING.

        (Except junior programmers can learn; AI models can't really - they can only be retrained from scratch by big corporations.)

        • Jcampuzano2 17 hours ago

          The AI will do it faster and cheaper than the junior does, and that's what the company cares about.

          Not to mention you should still be code reviewing it anyway. In fact with AI you should be reviewing even more than you were before.

          • Supermancho 17 hours ago

            > The AI will do it faster and cheaper than the junior does, and that's what the company cares about.

            Short term, not long term. The AI will never become a staff developer. Shifting review onto the senior developers is shifting responsibility and workload, which will have the expected outcome: slower development cycles, as you have to consider every footgun. Especially when the AI can't explain the reasoning behind esoteric changes. If I ask a junior, it's likely they have a test (codified or manual) that led them to the decision.

            • icedchai 15 hours ago

              I’ve had AI generate code in a few hours that would take some juniors I’ve worked with weeks. Yes, the code was not the best (very long methods) and it took some iteration, but everything has tradeoffs.

      • sanderjd 17 hours ago

        Yep, agreed. I can no longer relate to people who don't recognize how powerful a tool like Cursor can be.

        But I also can't relate to people who think they can, today, build fully working software using just AIs, without people who know how software works and are able to understand and debug what is being generated.

        Maybe it's true that this will no longer be the case a year from now. I honestly don't know. But at the moment, I think being a skilled practitioner who is also able to effectively use these powerful new tools is actually a pretty sweet spot, despite all the doom and gloom.

        • sgarland 17 hours ago

          Cursor has been repeatedly, confidently incorrect when discussing databases at my job. It doesn’t understand how MySQL works, and wants to make terrible indexing decisions because of it. You wouldn’t know that it’s wrong unless you already knew the correct answer, because what it recommended will work; it’ll just be bloated and sub-optimal. And therein lies the problem: computers are so fast that people will happily assume it worked, and then later scale the size up when the half-baked solution shows its cracks.

          • sanderjd 16 hours ago

            Yeah. But you aren't disagreeing with what I wrote.

            I think it's breaking a lot of brains that we have these tools now that are useful but not deterministically useful.

            • sgarland 6 hours ago

              Fair point. I don’t hate AI, and use it sometimes, but I’m always painfully aware that it can and will make mistakes, some subtle, that must be caught by someone who already knows most of the answer.

      • icedchai 15 hours ago

        I had AI do some tedious work, like migrating to new APIs, upgrading deprecated calls, fixing warnings in old C code. It worked great. Faster than I could do it myself.

      • pron 17 hours ago

        When I've seen examples of such cases, my impression was that they could be improved by taking the AI out of the picture and using some "low-code" solution.

      • davidcbc 17 hours ago

        Let us know how those services are doing in 2 years

  • rester324 18 hours ago

    Well, that might be true, but I think the reason they blatantly act like that is that stakeholders and decision-makers are detached from their decisions time-wise.

    That's why I bring topics like maintenance and stability into the discussion very early on and ask those stakeholders how much system downtime they can tolerate, so that they feel the weight of their decision-making; that also gives me an opportunity to explain why quality matters.

    Then it's up to them to decide how much crap they tolerate.

    • BlueTemplar 7 hours ago

      Anthropogenic climate change and resource depletion come to mind...

  • nalekberov 18 hours ago

    > I really think a lot of the reason some people say it doesn't give them as much productivity as they would like is a desire to write "clean" code based on years and years of our own training, and the need to pass code review done by your peers.

    A machine doesn't get mad when an app takes forever to start or keeps constantly crashing, but we humans do. Writing "clean" code matters least when it comes to machine-generated code.

    • Jcampuzano2 18 hours ago

      People keep repeating this nonsense that AI-generated code is always bad or slow.

      This is so far from the truth that I really think anybody who still says this has not actually used it for anything real in at least a couple years.

      To be clear, I'm not saying it will always generate the best code; sometimes it may even be bad.

      What I am saying is it CAN generate code that is reasonably performant - sometimes even more performant than you would have written it given time constraints - and that fulfills requirements (even if it sometimes requires a little manual effort) much faster than we ever could before.

      • sgarland 6 hours ago

        It generates reasonably performant code because most of the industry isn’t writing code that’s computationally bound. If you have to wait for network I/O anyway, it doesn’t really matter if your code is optimal, because that wait will dominate everything else.
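
        To put rough numbers on it (these are the classic order-of-magnitude latency figures, stated here as assumptions rather than measurements of any particular system):

            #include <stdio.h>

            int main(void) {
                /* Approximate latencies in nanoseconds; orders of
                   magnitude only. */
                double l1_ns   = 1.0;    /* L1 cache hit          */
                double dram_ns = 100.0;  /* main-memory reference */
                double dc_ns   = 500e3;  /* same-datacenter RTT   */
                double wan_ns  = 100e6;  /* cross-region RTT      */

                /* How much in-memory work one network hop costs: */
                printf("same-DC RTT ~ %.0f DRAM refs\n", dc_ns / dram_ns);
                printf("WAN RTT     ~ %.0f DRAM refs, %.0e L1 hits\n",
                       wan_ns / dram_ns, wan_ns / l1_ns);
                return 0;
            }

        A single round trip hides thousands (same-DC) to around a million (WAN) memory references' worth of suboptimal code, which is why nobody notices until it has to scale.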

      • drekipus 17 hours ago

        I can only assume that you have significant experience being a junior engineer

        • Jcampuzano2 6 hours ago

          First off, I would hope that everyone has had experience being a junior engineer at some point.

          But if your assertion is that using AI for code generation and being successful with it makes you a junior engineer, then good luck keeping your job in the future. Just take a look at social media: there is a plethora of examples of prominent engineers using it with success.

OnionBlender 13 hours ago

The author is clearly a C++ programmer. I've been noticing that these AI tools are worse at C++ than at other languages, especially scripting languages. Whenever I try to learn from people who are using these tools successfully, they always seem to be using a scripting language and working on some CRUD app.

  • sgarland 6 hours ago

    They seem to be a game dev judging by their other posts. I imagine there’s a lot less content online about that for LLMs to scrape than yet another CRUD app.

malfist 18 hours ago

I feel this in my bones. Every day I'm getting challenged by leadership that we're not using AI enough, told that I should halve my estimates because "we'll use AI", and being told that there's a new AI tool that I have to adopt because someone is tracking KPIs related to adoption and if our team doesn't adopt enough AI tools we're going to be fired to give more headcount to those that do.

It's like the world has lost its goddamn mind.

AI is always being touted as the tool to replace the other guy's job. But in reality it only appears to do a good job because you don't understand the other guy's job.

Management has an AI shaped hammer and they're hitting everything to see if it's a nail.

  • bluefirebrand 18 hours ago

    > Management has an AI shaped hammer and they're hitting everything to see if it's a nail.

    I really think we need to figure out how to cut back on management so we can get back to the business of actually doing work

    • ghaff 18 hours ago

      Coordinating teams, talking to stakeholders/customers (including spending a lot of time with them), having someone manage individual contributors at some level, etc. is work that can't just be ignored at a company of any size. The only way to avoid (a lot of) it is to be very small and that has its own set of issues.

      • bluefirebrand 4 hours ago

        Sure, but do we really need four layers of people to do all of that?

        It's really common to see layers and layers of management at companies that get big enough.

      • lyu07282 18 hours ago

        I mean don't bite the boot that you can lick amirite

    • cjbgkagh 18 hours ago

      Well how hard would it be to replace management with AI? Perhaps a developer could use AI to recreate the other tasks of the company without all of the overhead of actual people.

      • queenkjuul 14 hours ago

        Yeah, I can't wait to discuss product with a sycophantic chatbot instead of with the people who actually have a stake in the product.

        Management can and usually does suck, but I can reason with a person, for now. And sadly, only the product people actually know what they want - usually right when you've built it the way they used to want it, lol.

        • cjbgkagh 3 hours ago

          Sounds almost as hard as writing code.

    • yencabulator 17 hours ago

      Clearly the answer is to replace them with AI.

  • sanderjd 17 hours ago

    Yeah, I mean, this is just the current phase of the hype cycle. It'll settle down. Some of the tools and techniques will have staying power, most won't. If you can figure out which is which and influence others, you'll be in good shape.

  • nyarlathotep_ 18 hours ago

    > I feel this in my bones. Every day I'm getting challenged by leadership that we're not using AI enough, told that I should halve my estimates because "we'll use AI", and being told that there's a new AI tool that I have to adopt because someone is tracking KPIs related to adoption and if our team doesn't adopt enough AI tools we're going to be fired to give more headcount to those that do.

    This--all of this--seems exactly antithetical to computing/development/design/"engineering"/architecture/whatever-the-hell people call this profession as I understood it.

    Typically, I labored under the delusion that competent technical decision-makers would integrate tooling or choose to use a language, "service", platform, whatever, if they saw benefits and could make a "case" for why it was the correct approach, i.e., how it met some product's needs, addressed some shortcomings, or made things more efficient.

    Like "here's my design doc, I chose $THING for caching for $REASON and $DATASTORE as it offers blah blah"

    "Please provide feedback and questions"

    This is totally alien to that approach.

    Ideally, "hey we're going to use CoPilot/other LLM thingy, let us know if it aids your workflow, give us some report in a month and we'll go from there to determine if we want to keep paying for it"

  • lamename 17 hours ago

    > AI is always being touted as the tool to replace the other guy's job. But in reality it only appears to do a good job because you don't understand the other guy's job.

    This is a well-considered point that not enough of us admit. Yes, many jobs are rote or repetitive, but many more jobs, of all flavors, done well, have subtleties that will be lost when things are automated. And no, I do not think "80% done by AI" is good enough, because errors propagate through a system (even if that system is a company or society), AND the people evaluating that "good enough" are not necessarily going to be those experienced in the same domain.

  • ivape 18 hours ago

    But management is the one to go soon. The other shoe is going to drop, dear brother; this I promise you. Stay strong.

  • abletonlive 18 hours ago

    Well when you have a hammer big enough everything is indeed a nail.

    Have you considered that instead of resisting, you should do more to figure out why you're not getting the results that a lot of us are talking about? If nothing has changed in your productivity over the past 2 years, the problem is most likely you. Don't you think it's your responsibility as an engineer to figure out what you're doing wrong when there are a lot of people telling you that it's a life-changing tool? Or did you just assume that everybody was lying and you were doing everything correctly?

    Sorry to say it. It's an unpopular opinion but I think it's pretty much a harsh truth.

    • dpistole 18 hours ago

      > why you're not getting the results that a lot of us are talking about?

      IMO the problem occurs when "the results" are hyped-up LinkedIn posts not based in reality. AI is a boon, but it hasn't lived up to the "IDEs are a thing of the past, you're all prompt engineers now" expectations that we hear from executives.

    • MeetingsBrowser 18 hours ago

      Kind of brutal, but if LLMs drastically improved your productivity I think it speaks more to your baseline productivity than the power of LLMs.

      • abletonlive 17 hours ago

        What's more likely:

        A) All of this money being funneled into tech to build out trillions of dollars' worth of infrastructure, a month-over-month growing user base buying subscriptions to these LLM services, every company buying LLM seats because of the value they provide - these people are wrong

        B) Yappers on Hacker News who claim they derive no productivity boost from LLMs, while showing absolutely nothing of their workflow or method, when the interface is basically a chat box with no guardrails - these people are wrong

        Sorry, I'm going to bet it's B and you just suck at it.

        • MeetingsBrowser 17 hours ago

          Just like the trillions poured into blockchain have revolutionized the internet.

          All the jaw-dropping ICOs, million-dollar NFTs, and cryptocurrency price surges. Surely that proves its value in our daily lives.

          • abletonlive 15 hours ago

            If you think blockchain is a comparable analogy, on the same timeline, you are orders of magnitude off.

            Actually, by the numbers, AI is already bigger than Bitcoin in both adoption and market value, so I'm not sure you're making the point that you think you're making.

        • davidcbc 17 hours ago

          Yeah, definitely A, same reason why all banking is done exclusively on the blockchain now and NFTs are the only way to get music

        • mplanchard 17 hours ago

          Regarding A: first time on the hype train? VC and Silicon Valley funding is often completely divorced from any real value, and is one of the last places I’d look for reliable signal on quality.

          Regardless, I’m sure it’s a little of A and a little of B, plus some of C) yappers on Hacker News who think that the majority of the work of software engineering is writing code, and who generally write code in sufficiently simple contexts for the LLMs to produce something equivalent to their normal output.

        • queenkjuul 11 hours ago

          Honestly, millions of nobodies buying a product from a tech company is basically proof it's nonsense, to my limited mind. Which one have they bought in droves that actually had a massive impact on you as a developer?

      • sanderjd 17 hours ago

        People keep saying this kind of thing, but sorry, it's nonsense.

        Many of my colleagues that I most admire are benefiting greatly and increasingly from LLM tooling.

        • MeetingsBrowser 16 hours ago

          LLM tooling is useful. I have been using it on a daily basis for at least a year.

          I am maybe 10-20% more productive at certain tasks in the long run (which is pretty good!). Nowhere close to the 10x or even 2x boost people are claiming.

          If LLMs were really making software developers 10x more productive over the last year, we would be seeing massive shifts in the industry - in theory, either 90% layoffs or 10x product velocity.

          • sanderjd 2 hours ago

            Agreed. The noisiest people are those saying "it makes everyone 100x more productive!" and those saying "it's useless, it makes everyone less productive!". But the boring truth is somewhere in between those extremes.

    • oh_my_goodness 18 hours ago

      "Well when you have a hammer big enough everything is indeed a nail."

      I think this pretty much speaks for itself.

    • malfist 18 hours ago

      Where did you see that I didn't use AI and that _nothing_ has changed for me?

heddycrow 16 hours ago

I couldn't help but read parts of this in Bertram Gilfoyle's voice.

Someone tell me I'm not alone.

swyx 18 hours ago

I think the key is always having the ability to telescope - coding agents enable you to stay high-level, but you always need the ability to go down and fix/understand the code when needed.

dlnovell 13 hours ago

Beautiful and witty prose to say "vibe coding sucks". He's not at all wrong about the state of AI coding in May of 2025. The 3 hours I just burned trying to get it to fix output bugs in a marimo notebook (which I started learning this week) are demonstrable evidence.

But it completely ignores the fact that AI-generated code is getting better on a ~weekly basis. The author acknowledges that it is useful in some contexts for some uses, but doesn't acknowledge that the utility is constantly growing. We certainly could plateau sometime soon, leaving us in the reckless-intern zone, but I wouldn't bet on it.

  • queenkjuul 11 hours ago

    As far as I'm concerned, it's getting better every week the way Tesla self-driving got better every week. Is it closer to its goal? Yes. Does it give value to its users? Arguably yes, but not really.

    Is it, in 2025, actually better than a real human at its designated task? Pretty universally no.

    So I won't be surprised when the "last 10%" of software AI takes 30 years to close the gap that 20 years of "imminent self-driving" has still yet to close.

    We should all understand, I would think, that the last 10% is the hard part.

webprofusion 16 hours ago

It's right: AI does require giving up control and letting things be done differently, depending on how much you really use it.

It's the same when you get a junior dev to work on things: it's just not how you would do it yourself, and it's frequently wrong or naive. Sometimes it's brilliant and better than you would have done yourself.

That doesn't mean you shouldn't have junior devs, but having one doesn't mean you won't have to make corrections and refinements to their work.

Most of us aren't changing the world with our code; we're contributing an incredibly small niche part of how it works. People (normal people, lol) only care what your system does for them, not how it works or how great the code is.

airstrike 17 hours ago

I think all arguments for and against AI assistants for coding should include a preface that describes the programming language, the domain of the app, the model being used, and the chosen interface for interacting with the assistant.

Otherwise everyone's just talking past each other.

  • joshstrange 8 hours ago

    That’s probably asking for too much but I agree.

    Here are some terms/aspects of LLMs that people _regularly_ use, yet 10 people have 10 definitions of what they mean (to them):

    - Vibe Coding

    - Boilerplate

    - Copilot

    - Cursor/Aider/Claude Code/Codex/OpenHands/etc

    - LLM Autocomplete and/or inline code suggestion

    - LLM Agent

    I’m happy to explain or expand on any of those if it’s not clear what I mean.

sgarland 17 hours ago

> The real horror isn’t that AI will take our jobs—it’s that it will let people in who never wanted the job to begin with.

I fully agree. This already happened with the explosion of DevOps bullshit, where people with no understanding of Linux got jobs by memorizing abstractions. “Stop gatekeeping,” they say. “Stop blowing up prod, and read the docs,” I fire back.

  • alexjplant 16 hours ago

    > explosion of DevOps bullshit

    The fact that somebody can "be DevOps" or work as a "DevOps Engineer" is exemplary of the fact that DevOps as conceived and DevOps as practiced are two very different things. The former would be engineers taking ownership of deployment, collaborating horizontally, and practicing tight feedback loops. DevOps as practiced is the time-honored tradition of a dev team and a cloud team playing tennis with a grenade that is a questionably-stable SaaS that people volley back and forth with rackets like "let's roll the pods" or "it worked on my machine".

    > people with no understanding of Linux got jobs

    This happens in every industry with every job title. I've worked with Senior+ developers that mutated React props, didn't know how to use Git, couldn't read Java stack traces, etc. I myself have been paid money to do a myriad of things that I have no business doing (like singing or playing guitar or mixing cocktails). It's the way of the world.

    • sgarland 6 hours ago

      The platonic ideal of DevOps is fine, yes. The problem I’ve consistently run into is there is a vanishingly small percentage of devs who want to do Ops, or even consider that a server with real constraints is running their code. Performance takes a backseat to DX, which AFAICT is code for “I don’t want to do anything but write code and push.”

      I should note that I think this is fine, if and only if you have specialist teams who respect each others’ abilities and recommendations. A dev team shouldn’t have to worry about standing up infrastructure, but similarly, when the infra team tells them that their app is consuming far more compute than it should be, the dev team should profile and improve their code instead of asking for more compute.

      • alexjplant 2 hours ago

        > there is a vanishingly small percentage of devs who want to do Ops, or even consider that a server with real constraints is running their code. Performance takes a backseat to DX, which AFAICT is code for “I don’t want to do anything but write code and push.”

        This is because many tech orgs judge performance by "business impact" and dev and infra teams have different value propositions.

        A dev writing and shipping features conspicuously demonstrates value because they write things that users pay for. Product features are novel and specific to a business; a developer can write "Built an automatic cardinal grammeter synchronizer that generates $100M in ARR" on their resume. There is essentially no upper bound on the dollar value of their impact.

        Infra teams, on the other hand, have no direct way of quantifying their work. Even though their job is very necessary it's difficult to move the needle as far as cashflow is concerned. People tend to consider infrastructure a commodity so it ends up being one of those things that's only noticed if it breaks. A good day on an ops team means no trouble, not more money. The cherry on top of the sundae is that more users is more money which means more prestige for developers whereas more users means more load which means more problems for the infra and ops teams to contend with.

        This incentive structure is why many devs would rather spend all of their time slinging code than worrying about k8s cluster nodes or IAM roles.

        As a jack-of-all-trades, I don't think this is a healthy dynamic. I've led both infra teams and dev teams and know what it looks like on each side of the fence - a little consideration goes a long way, and it's one of many reasons I think it's healthy for everybody to do a bit of everything in a tech org. People are much less likely to "play defense" if they understand a team's motivations and don't anticipate a negative interaction.

  • bangertho 16 hours ago

    This already happened with Linux where people with no understanding of electrical engineering got jobs memorizing abstractions.

    I look forward to a data-driven future where a few functions transform the machine's electromagnetic geometry to solve a task, based upon the most efficient energy model for that task, as we continue to compress out of the model all the non-essential syntactic sugar of modern software.

    https://arxiv.org/abs/2309.10668

    • sgarland 16 hours ago

      Fun fact, I was a Nuclear Electronics Technician, and have a decent understanding – though admittedly not to the level of an EE – of how computers work at a very low level. I also worked for a chip fab for a couple of years, and literally made silicon wafers. I also rewrote the US Navy’s microprocessor training course for Nuke ETs, updating it from the Motorola 68000 to the Intel 386.

      I make no claims that I fully understand anything, but I do have a decent understanding of how a CPU works from the level of doped silicon and up. Crucially, I read every doc I could find at every one of those jobs. You can learn enough to do the job, or you can learn more. That is a choice that everyone makes.

      More generally, I’ve been playing with Linux and computers in general for over 20 years, and when I finally got a job in tech about five years ago, I was stunned at how little people knew about how computers work. I don’t expect (nor do I think it’s helpful) anyone to know how a bus arbitration cycle works, but I assumed that things like IOPS and throughput would be generally understood.

      • nyarlathotep_ 15 hours ago

        > More generally, I’ve been playing with Linux and computers in general for over 20 years, and when I finally got a job in tech about five years ago, I was stunned at how little people knew about how computers work. I don’t expect (nor do I think it’s helpful) anyone to know how a bus arbitration cycle works, but I assumed that things like IOPS and throughput would be generally understood.

        My expertise is only in sleeping until 11am on weekends, but I too started in "tech" after being a lifelong hobbyist and have been continually shocked at how concepts like "pass by reference" are alien to a seemingly large portion of the people that I've worked with.

        People often fail to know things that are basically "table stakes" in the domains they ostensibly work in, to say nothing of even being aware of something like L1 cache or how code they write could interact with it.
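
        For reference, the whole distinction fits in a few lines of C (a toy example; C is pass-by-value, and pointers are how you get reference semantics):

            #include <stdio.h>

            void by_value(int x)    { x = 42; }  /* mutates a private copy */
            void by_pointer(int *x) { *x = 42; } /* mutates the caller's int */

            int main(void) {
                int n = 0;
                by_value(n);
                printf("%d\n", n);  /* prints 0: the copy changed, n didn't */
                by_pointer(&n);
                printf("%d\n", n);  /* prints 42 */
                return 0;
            }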

        • sgarland 15 hours ago

          AWS: “We’ve separated compute and storage by a large physical distance, which makes relational databases better.”

          People who knew better: wild laughter.

      • binary132 15 hours ago

        It’s actually kinda terrifying the more you find out how incompetent most of the people writing most of the software most of the world runs on are

        • sgarland 15 hours ago

          Much like how the more you know about how computers work, the more mind-boggling it is that they work. The Internet? Magic. CDMA? Wizardry. EUV? Sorcery.

  • mountainriver 17 hours ago

    This is already happening and it’s a colossal mess

Aziell 16 hours ago

I used to work with someone like this. At first, he really wanted to do things properly. Over time, he gave up. Not because he was lazy, but because he felt like effort didn’t really matter.

Copilot’s fine for boilerplate. But lean on it too much, and you stop thinking. Stop thinking long enough, and you stop growing. That’s the real cost.

wwarner 18 hours ago

> AI has no concept of memory locality. No intuition for cache misses.

Not true at all, but you have to ask it.
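
Prompt for it and you'll typically get back something like the classic loop-order example; a minimal C sketch of the idea:

    #include <stdio.h>
    #include <stddef.h>

    #define N 1024
    static double a[N][N];  /* 8 MB: much bigger than L1/L2 cache */

    /* Row-major traversal: stride-1 accesses, cache-friendly. */
    static double sum_fast(void) {
        double s = 0.0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Column-major traversal: stride-N accesses, a cache miss on
       nearly every load; same arithmetic, several times slower. */
    static double sum_slow(void) {
        double s = 0.0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        printf("%f %f\n", sum_fast(), sum_slow());
        return 0;
    }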

tptacek 18 hours ago

> The real horror isn’t that AI will take our jobs—it’s that it will let people in who never wanted the job to begin with.

Gross. Also: you could have said this about the spreadsheet.

  • yencabulator 17 hours ago

    Meanwhile,

    > 88% of the Excel spreadsheets have errors

    https://www.cassotis.com/insights/88-of-the-excel-spreadshee...

    How many companies mismanaged their finances because they had an enthusiastic spreadsheet user in charge? From that article, we know a country did.

    • saulpw 2 hours ago

      I agree that all of this is madness, but to be fair, 100% of computer programs have bugs, so I don't think this is any more damning than systematizing all of humanity's processes with computers.

      • yencabulator 2 hours ago

        It's about the likelihood and probability distribution of the kinds of bugs.

        An Excel sheet made by NASA as part of their official space operations, built to that rigor, will probably never have a problem. But compared to typical software, just consider this: Excel sheets do not have a culture of any kind of testing. If the output looks plausible, move on. Garbage in, garbage processing, garbage out.

        AI slop has a similar but not exactly the same problem: It's highly, highly likely a human "reviewing" the generated code will just nod their head and not think critically/independently, just accepting all things that are superficially plausible. That may be fine for your webdev CRUD but I sure hope those people are never given credentials that can access anything critical.

  • EdwardKrayer 17 hours ago

    I feel this has been the case for 20 years, when a whole generation was shoveled into the Comp Sci dream.

  • robocat 17 hours ago

    > let people in who never wanted the job to begin with

    I knew plenty of software developers who hate the job: it's just paid work for many people, and AI doesn't change that.

wewewedxfgdf 16 hours ago

I want everyone who wants to join the programming club to join.

  • sgarland 6 hours ago

    Wants is the operative word. If you show up to a racetrack with zero experience and a Ferrari, you’re not going to be accepted if you try to pretend that you know how to drive. You’ll probably also hit the wall, which hopefully would provide an opportunity for introspection.

    If you have an AI generate all your code for you, and then say that you made an app, you should expect people to push back.

    I want to be around people who like learning and knowing things for the sake of learning. AI is a tool, but if you rely on it too heavily (or really at all when starting out), you’re stifling your ability to truly learn.

dmitrygr 18 hours ago

Ouch! Right in the feels

-__---____-ZXyw 18 hours ago

I wonder if we'll look back on this period in a couple of years and feel a nostalgic fondness as we think of the fateful moment when people working in software were forced to pull the wool from their eyes and look at the fact that businesses really, really, really dislike losing huge amounts of money paying people to make the software their businesses completely depend on.

I mean, I'm guessing that's true. It'd make a lot of sense if they vehemently disliked that. It's hard to make sense of it all otherwise, really.

  • yencabulator 16 hours ago

    Businesses really, really, really would like to just have profit without any expenses. Ideally all of the money in the world as profit, please. And no taxes either. Just skip straight to splitting all the money in the world between the shareholders.

    If you think running the output of an LLM as a serverless function in some cloud is a good way to differentiate your business, build a moat, and make a profit, good luck!

  • monero-xmr 18 hours ago

    Non-technical business owners have always had deep anxiety about software development. They don’t understand it, it’s very expensive, timelines can explode, and a hack or leak can materially damage their business.

    A reasonably smart CEO can pretty much understand, in depth, every aspect of their business. But when it comes to tech, which is often the most essential part, they are left grasping, and must rely on the expertise of other people, and thus their destiny is not really in their control, other than by hiring the best they can and throwing money at R&D.

    The AI and the hype around it plays into their anxieties, and also makes them feel like they have control over the situation.

    In biotech, the Chief Scientific Officer (CSO) is often given much more authority in startups than the CTO in tech startups, I have noticed.

    • sanderjd 18 hours ago

      > A reasonably smart CEO can pretty much understand, in depth, every aspect of their business. But when it comes to tech, which is often the most essential part, they are left grasping, and must rely on the expertise of other people

      I honestly really don't understand why this would be the case. Software isn't more complicated than any of the other aspects of the business. I think a "reasonably smart" CEO could just ... learn how it works? if it's really so critical to their business.

      It's been a long time since I worked for a CEO who didn't understand software.

      • monero-xmr 16 hours ago

        If you run a trucking company, or a retail business, or a food company, etc. I believe you can understand to a fairly detailed level the logistics and “secret sauce” involved that makes the business tick, even if you are not the core employees operating with the skills and expertise.

        But if you are a non-technical CEO and your core business is, say, enterprise SaaS software, you don’t fundamentally understand what the heck is going on, and if you have a key deadline and blow it, don’t really understand why. So if a new VP says they can cut your costs dramatically by offshoring everything to India, etc., or replace half these expensive engineers with AI, it seems as plausible as anything else. Especially given the fawning press and hype, and salesmen pitching you all day.

        • sgarland 5 hours ago

          The part of this argument that doesn’t make sense to me is that you’d think any CEO would have a reasonably decent bullshit detector, but maybe since they have to shovel it out so much, they forget how to detect it in others.

    • ghaff 18 hours ago

      CTO can be a funny position at companies. It sometimes does mean head of engineering and responsible for technical direction at a pretty granular level. But it often also can mean being sort of the public face for the company's technology vision. I've definitely seen companies where the two are largely one and the same. I've also seen companies where the CTO was more the outward-facing vision person.

    • EdwardKrayer 18 hours ago

      If senior management feels their destiny is not in their control, they're doing it wrong. The best management don't always have the most expertise - but the good ones have an uncanny ability to know when they should defer, when/who to consult with, who to trust, and what to delegate.

lyu07282 18 hours ago

These tools have no understanding of clean architecture; they are like geeksforgeeks or w3schools, decades-old shitty "tutorials" written by amateurs for amateurs, condensed into a chatbot. If you work on cleanly architected code they can still be useful; they can see the patterns and usually perform better in my experience. But not in a million years will you get to that clean architecture by starting them off from scratch. My advice: keep it turned off until you have a solid foundation.

quantadev 16 hours ago

Despite this post having some valid points about AI, the truth of the matter is that a good, experienced developer _can_ extract the power of a Coding Agent to get six months' worth of work done in three weeks, even in a language he's never coded in before. AI just isn't AGI yet, so comparing it to a human developer doesn't make sense to me. On the other hand, I'd also say AI Coding Agents are "Superhuman" in their vast knowledge and code-writing abilities. That sounds like a contradiction, but the world is nuanced enough for both things to be true at once.

bobxmax 18 hours ago

[flagged]

  • beepbooptheory 18 hours ago

    Is the "relevance" of programmers really at stake? Isn't it more like "what programmers actually do"?

nsonha 17 hours ago

[flagged]

groby_b 17 hours ago

Not sure if he talked about photography, Desktop Publishing, spreadsheets, or some other labor-saving invention.

But what I heard over the din of whining was "It was hard for me, it should be hard for you". And... that's not how this or anything works. You get labor-saving stuff, you choose if you want to continue to solve hard problems, or if you want the same problems (which suddenly turned easy).

Yes, it's not perfect. Yes, you need to know how you use it, and misusing it causes horrible disfiguring incidents. Guess what, the same was true about C++. And C before it. And that new-fangled assembly stuff, instead of using blinkenlights like a real programmer. And computers instead of slide rules.

Up the complexity ladder we keep going.

  • sgarland 17 hours ago

    No, you missed the point entirely. The author was correctly pointing out that if you don’t struggle, you will not understand how things work. Claim otherwise all you want; centuries of pedagogy have proven this time and time again. You have to understand the fundamentals to grok the abstractions, and you have to fail to know why the successes worked.

elliotbnvl 18 hours ago

Sounds like it was written by ChatGPT, to be honest.

  • nalekberov 18 hours ago

    your comment is more likely to be written by ChatGPT than the blogpost.

    • vibeCoder007 17 hours ago

      AI writing emits less carbon than human writing.

vibeCoder007 18 hours ago

Clean code is only good for non-profit organizations. You're paid to solve problems as fast as possible, not to code.

  • joks 17 hours ago

    For profit organizations usually care a hell of a lot about minimizing production issues, actually. What are you talking about?

  • boxed 18 hours ago

    You missed the point of the article. It's not about clean code. It's about working code.

focusgroup0 18 hours ago

Adapt or die. Keep up on industry trends and learn how to (responsibly) use the tools to be a better programmer, or be unnaturally selected out.

  • meesles 18 hours ago

    100% agree, but I bet there will be a quality blowback in software much like what we've seen at Boeing over the last few decades. I think there will be a premium on higher-quality software, but most companies don't need it

  • ShroudedNight 18 hours ago

    What does 'better' mean in this context? Commercial viability is important if one wants to eat, but doesn't seem to justify the smugness projected...

  • yencabulator 17 hours ago

    More like embrace the crap quality (and become a worse programmer) or move to a field that cares about quality.

  • douglasisshiny 17 hours ago

    Have something else write code for you to be a better programmer? Yeah.... no, that's not how it works

  • lyu07282 18 hours ago

    We will see, but I can also imagine the pile of shit you vibecoded around you collapsing on top of you.

    • tokioyoyo 18 hours ago

      Does it matter? The selling point of AI generated code is, eventually, you won't care what's happening in your code base and things just "work". It's like 95% of people who code (99%?) don't know what compilers do. Throw away what is collapsing, bring up new stuff, rinse and repeat.

      • joks 17 hours ago

        The selling point to you is that you can't fix problems, you just have to reimplement everything every time anything fails?

      • lyu07282 17 hours ago

        There were always people who only cared if it "just works", that part isn't exactly new. I think what we are saying is that good code matters beyond what you can understand or appreciate, until it blows your face off. This isn't something we could ever convince you of, and...

        > Throw away what is collapsing, bring up new stuff, rinse and repeat.

        hearing you trying to explain yourself makes us even more worried than before.

    • nalekberov 18 hours ago

      The response will be "we adapted to the trends, don't blame us, blame the machines"

noobermin 18 hours ago

Is the writing here intentionally bad? Maybe it's not my cup of tea, but it's been a while since I've read something so cringey that it's difficult to finish.

  • fracus 18 hours ago

    Message aside, I thought it was well written comedy.

  • willseth 17 hours ago

    The main source of the cringe is that the writing is so in your face about how hard it’s trying while the author is clearly unaware.

  • ivape 18 hours ago

    I don’t know, this made me kinda laugh:

    > Because if that programmer—if that thing, that CREATURE—walked into your stand-up in human form, typing half-correct garbage into your codebase while ignoring your architecture and disappearing during cleanup, you’d fire them before they could say "no blockers".

    It's the description of the first engineer on a greenfield project. They're gone before you know it, or at the very least at some point offer a "well, I had no choice you see, they really wanted to release something".

  • loloquwowndueo 18 hours ago

    I liked it. Spotted one grammar mistake, though. I even like that - an AI doesn't make that kind of mistake.

  • CSMastermind 18 hours ago

    Really? I enjoyed the writing style quite a bit.

holtkam2 17 hours ago

I loved this article. But for some reason my gut is telling me it will age like milk. Can you imagine how effective these coding agents will be in, say, 2036? The concept of coding things by hand for the sake of higher quality will seem so outdated.

EdwardKrayer 18 hours ago

I agree with this article. But I do think it's important to understand that these tools do have value - especially when learning. I also think a lot of the issues raised will improve as context lengths increase. Googling a problem and getting obsolete answers from 2007 also slows down progress, but we don't say Google is worthless for serving those results.

These tools will get better, and they will eventually allow the best to extend their abilities instead of both slowing them down and potentially encouraging bad practices. But it will take time, and increased context length. The world is full of people who don't care about best practice, and if that's all the task requires of them - keep on keeping on.