Is anyone actually getting anything out of AI assisted coding? I find AI autocomplete especially horrible, though the chat interface has been helpful in preventing me from actually learning obscure bash parameters.
In all seriousness though, the 2024 DORA report shows that for every 25% increase in AI adoption within an organization, delivery throughput drops by 1.5% and delivery stability by a whopping 7.2%.
(you can sign up to get the report here[1], sorry I don't have a direct link to provide)
[1]: https://cloud.google.com/devops/state-of-devops
Coding is the one thing all these LLM companies have optimized like crazy for, and yet it's only marginally useful.
Dario Amodei claimed yesterday that 90% of coding will be done by AI in the next 3-6 months. These ppl need to be held accountable for this scam that they are brazenly and openly perpetrating. Seriously, why is no one saying anything, what's going on? I feel disgusted by these ppl.
Dario Amodei deserves the Elizabeth Holmes treatment for this fraud.
> These ppl need to be held accountable for this scam
Tech used to deliver incredible things, so people got used to bold claims becoming realities.
That has not been the case for a long time now. Everything is evolving slowly, step by step. But tech companies are still valued based on the old infinite-growth potential, so CEOs lie to achieve that kind of valuation.
I agree that they seem to be committing fraud and should be held accountable. But I am afraid too much rich investors' money is riding on this pyramid scheme. It is hard to take action when nobody in power wants to point out that the emperor is naked.
It’s because they don’t use their products for actual work. I’ve been having this experience with all the AI in Microsoft products too, and in general software has a problem where the people making it don’t use their own products.
Given the valuation of these companies, the upside of "just lying" is unfortunately high.
For example, taking a step back, it's crazy that people accept the idea of "AGI" (which partially drives the valuation) at face value without any evidence.
I would be shocked if there was any accountability though.
there is this weird addictive thing with AI coding. found myself just sending prompts over and over and hoping the next one would finally get it right. sending a prompt and going and doing something else, but mostly end up with a huge mess
it's the slot machine effect, plain and simple.
it's even worse if you're paying for tokens because there's a sunk cost! it feels like if you just tweak the prompt a little and put in another quarter and pull the lever again, this time it'll put out a totally correct result
I think Cursor etc. are great for rapid prototyping and cut the dev cycle from months to days without the need for PMs; the scaling is where the expertise would come in. Whether this methodology would work on a complex codebase is debatable, but for startups looking to go from 0 to 1, this is a boon.
AI isn't "astonishingly good" at 70% of things. I have to manually check everything it produces, and 90% of the time it gets something wrong.
I feel like i am living in an alternate reality than these ppl. wtf am i missing here? so frustrating reading these sorts of articles.
Yep, it's pure hyperbole. Maybe for generating starter templates I can get 70% out of AI, but once I'm building on an established project, it's more like 30%. Large projects simply take too much context for an LLM to clearly understand feature development.
Yep, exactly. If what this person is claiming were true, we would've seen a major acceleration in long-standing bugs/issues being resolved in an open-source project like pytorch. I don't see any such thing.
Is this Addy Osmani person some sort of AI scamster, making these absurd claims? I cannot think of any other reason that's motivating him to write this.
I agree with you but still, I find that I'm getting more and more value with LLMs for coding. I've been pleasantly surprised with how easy it was to refactor my code for instance. And I think I'm not using it at the full potential yet.
But we need to check everything, so it doesn't save you from needing the expertise. Unless you're willing to ship code you don't understand and won't be able to fix if it breaks.
You're missing that the pressure to move to AI-assisted coding, and to destroy the wages of programmers, is coming from the top down. The focus on AI for programming is both to reduce the demand for software engineers and to make those remaining uncertain about their jobs, because the suits are sick of paying current market rate.
> destroy the wages of programmers is coming from the top down
True, we're only a cost to them. They can't wait to get rid of us, and we're building the tools for them to do so.
I think these sorts of articles are CTO fodder. I have friends whose companies have mandated that "all coding should now use AI".