Do you feel that you will become so well-versed in those tools (React, Tailwind) that you will be able to debug weird edge cases in the future?
Will you be able to reason about performance? Develop a deep intuition for why pattern X doesn't work in React but pattern Y does, etc.?
I personally found that this learning is not happening. My knowledge of the tools I used LLMs for stayed pretty superficial. I became dependent on the machine.
I think what’s clear is many people feel much more productive coding with LLMs, but perceived and actual productivity don’t necessarily correlate. I’m sure results vary quite a bit.
My hunch is that the long-term value might be quite low: a few years into vibe coding huge projects, developers might hit a wall with a mountain of slop code they can no longer manage or understand. There was an article here recently titled “vibe code is legacy code” which made a similar argument. Again, results surely vary wildly.
It feels like it's creating economic activity in the tech sector the same way that walking down the street and smashing everyone's windshields would create economic activity for local auto shops.
It's just printing headlines out of nothing. If it tried to answer why the two graphs show such different numbers (one ~14%, the other ~55%) I'd be more interested.
> Note: Data is six-survey moving average. The survey is conducted bi-weekly. Sources: US Census Bureau, Macrobond, Apollo Chief Economist
> Note: Ramp AI Index measures the adoption rate of artificial intelligence products and services among American businesses. The sample includes more than 40,000 American businesses and billions of dollars in corporate spend using data from Ramp’s corporate card and bill pay platform. Sources: Ramp, Bloomberg, Macrobond, Apollo Chief Economist
It seems that the real interesting thing to see here is that the companies using Ramp are extremely atypical.
Three consecutive months of decline starts to look more like a trend. Unless you think there's a transient issue causing the decline, something fundamental has changed.
Again: compare early 2024. And that’s not the only thing; the second chart shows a possible flattening, but it is by no means certain yet, especially not when taken together with the clear March–April jump; and the first chart shows no dwindling in the 1–4 employee bucket, and a clear recovery in 250+. The lie is easily put to the claim the article makes:
> Data from the Census Bureau and Ramp shows that AI adoption rates are starting to flatten out across all firm sizes, see charts below.
It’s flat-out nonsense, and anyone with any experience in this kind of statistics can see it.
Especially interesting is the adoption by the smallest companies. This means people still find it increasingly useful at the grassroots level, where things are actually done.
At larger companies, adoption will probably stop at the level where managers start to feel threatened.
But what does that grassroots adoption look like in practice? Is that a developer spending $250/month on Claude, or is it a local corner shop using it once a month to replace their clip-art flyer with AI slop, and replacing the example contract they previously found via Google with some legalese gobbledygook ChatGPT hallucinated?
Giving AI away for free to people who don't give a rat's ass about the quality of its output isn't very difficult. But that's not exactly going to pay your datacenter bill...
What is their definition of adoption? A company where every employee has some level of access to AI is the bare minimum of “full adoption” for a given company but a threadbare one.
A company that has implemented most current AI technologies in their applicable areas, as known, functional capabilities? That is a vastly larger definition of full adoption.
It's the difference between access and full utilization. The gulf is massive. And I'm not aware of any major company, or really any company, that has said, "yep, we're done, we're doing everything we think we can with AI and we're not going to try to improve upon it."
Implementation of acquired capabilities, actual implementations... Very early days. And it appears this study's definition is more like user access, not completed implementations. Somewhat annoyingly, I receive 3 or 4 calls a day, sometimes on weekends, from contracting firms looking for leads, TPMs, and ML/data scientists with genai / workflow experience. Three months ago, without having done anything more to put my name out than however it had been found before, I was only getting one call every day or two.
I don't think this study is using a useful definition for what they intend to measure. It is certainly not capturing more than a fraction of activity.
Does it really matter? In this case, it is the perception that matters. If companies feel that AI is not quite as helpful as they thought it might be, even if they have not maxed out what they theoretically could do with it, then that is all that matters in trying to get a sense of where this might go.
If I was openAI or whatever I would be investing in circular partnerships with claude or whatever, claim agentic use should be considered the same as real users, then have each other's LLM systems use each other and finally achieve infinite, uncapped user growth
From the chart, the percentage of companies using AI has been going down over the past couple of months
That's a massive deal because the AI companies today are valued on the assumption that they'll 10x their revenue over the next couple of years. If their revenue growth starts to slow down, their valuations will change to reflect that
This bubble phase will play out just as the previous ones have in tech: consolidation, with most of the value creation going to a small group of companies. Most will die, some will thrive.
Companies like Anthropic will not survive as independents. They won't come close to having enough revenue & profit to sustain their operating costs (they're the Lyft to Google or OpenAI's Uber; Anthropic will never reach the scale needed to roll over into significant profit generation). Its fair value is 1/10th or less of what it's being valued at currently (yes, because I say so). Anthropic's valuation will implode to reconcile that, as the market for AI does. Some larger company will scoop them up during the pain phase, once they get desperate enough to sell. When the implosion of the speculative hype is done, the real value creation will begin thereafter. Over the following two or three decades a radical amount of value will be generated by AI collectively, far beyond anything seen during this hype phase. A lot of lesser AI companies will follow the same path as Anthropic.
The least volatile dataset, employee count 1-4 businesses, is steadily climbing in adoption. I feel like as long as the smallest businesses (so the most agile, non-enterprise software ones) increase in adoption, other sizes will follow.
Not to be lost, but the first chart is actually a 3-month moving average. Surprised they buried that in the notes and didn't simply include it in the chart title. "Note: Data is six-survey moving average. The survey is conducted bi-weekly. Sources: US Census Bureau, Macrobond, Apollo Chief Economist"
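For context on the arithmetic: six bi-weekly surveys span roughly twelve weeks, which is where the ~3-month smoothing comes from. A minimal sketch of that kind of smoothing, using made-up readings rather than the Census data:

```python
# Hypothetical bi-weekly adoption readings (percent of firms); a six-survey
# rolling mean covers ~12 weeks, i.e. roughly a quarter's worth of smoothing.
import pandas as pd

surveys = pd.Series(
    [11.0, 11.4, 12.1, 12.6, 12.4, 12.2, 12.0, 11.8],
    index=pd.date_range("2025-06-01", periods=8, freq="2W"),
)
smoothed = surveys.rolling(window=6).mean()  # NaN until six surveys are available
print(smoothed.round(2))
```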
Without weighing in on the accuracy of this claim, this would be an expected part of the maturity cycle.
Compare to databases. You could probably have plotted a chart of database adoption rates in the '90s as small companies started running e.g. Lotus Notes, FoxPro and SQL Server everywhere to build in-house CRMs and back-office apps. Those companies still operate those functions, but now most small businesses do not run databases themselves. Why manage SQL Server when you can just pay for Salesforce and Notion with predictable monthly spend?
(All of this is more complex, but analogous at larger companies.)
My take is the big rise in AI adoption, if it arrives, will similarly be embedded inside application functions.
People push back against comments like these. But, as you suggest, the win isn't about individual developers potentially increasing their productivity by some inflated amount. It's about baking more prediction and automation into more tools that people who aren't developers use. Which is probably part of where the general meme of lack of interest in entry-level programmers comes from.
Actually surprising when programmers (especially) push back. A couple years ago, people were doing copy/paste from ChatGPT to their IDEs. Now, they generally work at a higher level of abstraction in dedicated tools like Coded or Cursor. Why would other functions prefer the copy/paste lifestyle?
I think what is happening is that people are realizing AI is not just plug and play. It can do amazing things but needs engineering around it.
I think what will happen is in parallel more products will be built that address the engineering challenges and the models will keep getting better. I don't know though if that will lead to another hockey stick or just slow and steady.
I'm more interested in what the implications are for the economy and what this next AI winter looks like.
What happens to all the debt? Was all this just for chatbots that are finally barely good enough for satnav, and image gen that amounts to a slightly better Photoshop that the layperson can use?
2. It supposedly plots a “rate”, but the time interval is unspecified. Per second? Per month? Per year? Intuitively my best guess is that the rate is per-year. However that would imply the second plot believes we are very near to 100% adoption, which I think we know is false. So what is this? Some esoteric time interval like bi-yearly?
3. More likely, it is not a rate at all, but instead a plot of total adoption. In this case, the title is chosen _very_ poorly. The author of the plot probably doesn’t know what they’re looking at.
4. Without grid lines, it’s very hard to read the data in the middle of the plot.
There is no need for a 'rate' to be against a time interval. The conversion rate of an email is purchases / emails sent. The fatality rate of a disease is casualties / people infected. It really just means a ratio.
The average person has no idea what to use AI for to get substantial value out of what it can now do.
It's the switch between: know which service to use, consider capabilities, try to get AI to do a thing, if you even have a thing that needs done that it can do; versus: AI just does a thing for you, requiring little to no thought. Very active vs very passive. Use will go up in direct relation to that changeover. The super users are already at peak, they're fully engaged. A software developer wants a very active relationship with their AI; Joe Average does not.
The complexity has to vanish entirely. It's the difference between hiding the extraordinary engineering that is Google search behind a simple input box, and making users select a hundred settings before firing off a search. Imagine if the average search user needed to know something meaningful about the capabilities of Google search or search in general, before using it. Prime Google search (~1998-2016) obliterated the competition (including the portals) with that one simple search box, by shifting all the complexity to the back-end; they made it so simple the user really couldn't screw anything up. That's also why ChatGPT got so far so fast: input box, type something, complexity mostly hidden.
The number of use cases for which I use AI is actually rapidly decreasing. I don't use it anymore for coding, I don't use it anymore for writing, I don't use it anymore for talking about philosophy, etc. And I use 0 agents, even though I am (was) the author of multiple MCP servers. It's just all too brittle and too annoying. I feel exhausted when talking too much to those "things"... I am also so bored of all the crap papers being published about LLMs. Sometimes there are some gems, but it's all so low-effort. LLM papers bore the hell out of me...
Anyway, by cutting out AI for most of my stuff, I really improved my well-being. I found the joy back in manual programming, because I will soon be one of the few who actually understand stuff :-). I found the joy in writing with a fountain pen in a notebook, and since then I retain so much more information. Also a great opportunity for the future, when the majority will be dumbed down even more. As for philosophical interaction: I joined an online university and just read the actual books of the great thinkers and discuss them with people and knowledgeable teachers.
What I still use AI for is to correct my sentences (sometimes) :-).
It's kinda the same as when I cut all(!) social media a while ago. It was such a great feeling to finally get rid of all those mind-screwing algorithms.
I don't blame anyone if they use AI. Do what you like.
> Typewriters and printing presses take away some, but your robot would deprive us of all. Your robot takes over the galleys. Soon it, or other robots, would take over the original writing, the searching of the sources, the checking and crosschecking of passages, perhaps even the deduction of conclusions. What would that leave the scholar? One thing only, the barren decisions concerning what orders to give the robot next!
From Isaac Asimov. Something I have been contemplating a lot lately.
> I don't use it anymore for coding
I'm curious, can you expand on this? Why did you start using coding agents, and why did you stop?
I started to code with them when Cursor came out. I've built multiple projects with Claude and thought that this was the freaking future. Until all joy disappeared and I began to hate the whole process. I felt like I didn't do anything meaningful anymore, just telling a stupid machine what I want and letting it produce very ugly output. So a few months ago, I just stopped. I went back to Vim even...
I am a pretty idealistic coder, who has always thought of coding as an art in itself. And using LLMs robbed me of the artistic aspect of actually creating something. The process of creating is what I love and what gives me the inspiration and energy to actually do it. When a machine robs me of that, why would I continue to do it? Money then being the only answer... A dreadful existence.
I am not a Marxist, probably because I don't really understand him, but I think LLMs are the "alienation of labor" applied to coders, IMHO. Someone should really do a phenomenological study on the "Dasein" of a coder working with LLMs.
Funnily enough, I don't see any difference in productivity at all. I have my own company and I still manage to get everything done on deadline.
I'll need to read more about this ("Dasein") as I was not aware of it. Yesterday our "adoptive" family had a very nice Thanksgiving, and we were considered the youngsters (close to our 50s) among our hosts & guests, and this came up multiple times when we were discussing AI among many other things: "the joy of work", the "human touch", etc. I usually don't fall for these feel-good talks, but now that you mentioned this it hit me. What would I do if something like AI completely replaced me (if ever)?
Thank you, and sorry my thoughts are all over...
> let it produce very ugly output.
Did you try changing your prompts?
I can't speak for OP, but I have been researching ways to make ML models learn faster, which is obviously a path that will be full of funny failures. I'm not able to use ChatGPT or Gemini to edit my code, because they will just replace my formulas with SimCLR and call it done.
That's it, these machines don't have an original thought in them. They have a lot of data, so they seem like they know stuff, and they clearly know stuff you don't. But go off the beaten path and they gently but annoyingly try to steer you back.
And that's fine for some things. Horrible if you want to do non-conventional things.
This is also my experience with (so called) AI. Coding with AI feels like working with a dumb colleague that constantly forgets. It feels so much better to manually write code.
I commend you for your choices. This is the way in the 2020s.
This is the best take
I liken it to a drug that feels good over the near term but has longer-term impacts... sometimes you have to get things out of your system. It's fun while it lasts and then the novelty wears off. (And just as some people have the tolerance to do drugs for much longer periods of time than others, I think the same is the case for AI.)
I technically use it for programming, though really for two broad things:
* Sorting. I have never been able to get my head around sorting arrays, especially in the Swift syntax. Generating them is awesome.
* Extensions/Categories in Swift/Objective C. "Write me an extension to the String class that will accept an array of Int8s as an argument, and include safety checks." Beautiful.
That said I don't know why you'd use it for anything more. Sometimes I'll have it generate like, the skeleton of something I'm working on, a view controller with X number of outlets of Y type, with so and so functions stubbed in, but even that's going down because as I build I realize my initial idea can be improved.
I've been using LLMs as calculators for words: they can summarize, spot, and correct, but they can often be wrong about this - especially when I have to touch a language I haven't used in a while (Python, PowerShell, Rust as recent examples), or a sub-system (SuperPrefetch on Windows, or why audio is dropping on coworkers' machines when they run some of the tools, and the like... don't ask me why), and all kinds of obscure subjects (where I'm sure experts exist, but when you need them they are not easy (as in "nearby") to reach, and even then might not help).
But now my grain of salt has increased - it's still helpful, but much like a real calculator there is a limit to its precision and to what it can do.
For one it still can't make good jokes :) (my litmus test)
No one uses agents. They're a myth that Marc Benioff willed into existence. No one who regularly uses LLMs would ever trust one to do unattended work.
The economics of the force multiplier are too strong to ignore, and I’m guessing SWEs who don’t learn how to use it consistently and effectively will be out of the job market in 5 or so years.
It’s the opposite. The more you know how to do without them, the more employable you are. AI has no learning curve, not at the current level of complexity anyway. So anyone can pick it up in 5 years, and if you’ve used it less your brain is better.
I’m sceptical
The models (Claude Opus 4.5) still seem to not get things right, miss edge cases, and produce code in a way that’s not very structured.
I use them daily, but I often have to rewrite a lot to reshape the codebase to a point where it makes sense to use the model again.
I’m sure they’ll continue to get better, but out of a job better in 5 years? I’m not betting on it.
Ya, you have to shape your code base; not just that, but get your AI to document your code base and come up with some sort of pipeline to have different AIs check things.
It’s fine to be skeptical, and I definitely hope I’m wrong, but it really is looking bad for SWEs who don’t start adopting at this point. It’s a bad bet in my opinion; at least have your F-U money built up in 5 years if you aren’t going all in on it.
Why would you go all in? There is no learning curve, it seems. What is there to learn about using AI to code?
Back in the early 2000s the sentiment was that IDEs were a force multiplier that was too high to ignore, and that anyone not using something akin to Visual Studio or Eclipse would be out of a job in 5 or so years. Meanwhile, 20 years later, the best programmers you know are still using Vim and Emacs.
As someone who uses vim full time, all that happened is that people started porting the best features of IDEs over to vim/emacs as plugins. So those people were right; it's just that the features flowed the other way.
Pretty sure you can count the number of professional programmers using vanilla vim/neovim on one hand.
It depends where you work. In gaming, the best programmers I know might not even touch the command line / Linux, and their "life" depends on Visual Studio... Why? Because the ecosystem around Visual Studio / Windows and the way game console devkits work are pretty much tied together - while PlayStation is some kind of BSD, and maybe Nintendo too, all their proper SDKs are Windows-only and built around Visual Studio (there are some studios that are exceptions, but they are rare).
I'm sure other industries have similar examples. And then the best folks on my direct team (infra), which is much smaller, are the command-line, Linux/Docker/etc. guys who mostly use VSCode.
But the vast majority are still using an IDE - and I say this as someone who has adamantly used Vim with plugins for decades.
Something similar will happen with agentic workflows - those who aren't already productive with the status quo will have to eventually adopt productivity enhancing tooling.
That said, it isn't too surprising if the rate of AI adoption starts slowing down around now - agentic tooling has been around for a couple years now, so it makes sense that some amount of vendor/tool rationalization is kicking in.
It remains to be seen whether these tools are actually a net enhancement to productivity, especially accounting for longer-term / bigger-picture effects -- maintainability, quality assurance, user support, liability concerns, etc.
If they do indeed provide a boost, it is clearly not very massive so far. Otherwise we'd see a huge increase in the software output of the industry: big tech would be churning out new products at a record rate, tons of startups would be reaching maturity at an insane clip in every imaginable industry, new FOSS projects would be appearing faster than ever, ditto with forks of existing projects.
Instead we're getting an overall erosion of software quality, and the vast majority of new startups appear to just be uninspired wrappers around LLMs.
I'm not necessarily talking about AI code agents or AI code review (workflows where I think it's difficult for agentic approaches to really show a tangible PoV against humans, though I've seen some of my portfolio companies building promising capabilities that will come out of stealth soon), but various other enhancements such as better code and documentation search, documentation generation, automating low-sev ticket triage, low-sev customer support, etc.
In those workflows and cases where margins and the dollar value provided are low, I've seen significant uptake of AI tooling where possible.
Even reaching this point was unimaginable 5 years ago, and is enough to show workflow and dollar value for teams.
To use another analogy, using StackOverflow or Googling was viewed derisively by neckbeards who constantly spammed RTFD back in the day, but now no developer can succeed without being a proficient searcher. And a major value that IDEs provided in comparison to traditional editors was that kind of recommendation capability, along with code quality/linting tooling.
Concentrating on abstract tasks where the ability to benchmark between human and artificial intelligence is difficult means concentrating on the trees while missing the forest.
I don't foresee codegen tools replacing experienced developers but I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.
> I've seen significant uptake of AI tooling where possible.
Uptake is orthogonal to productivity gain. Especially when LLM uptake is literally being forced upon employees in many companies.
> I do absolutely see them reducing a lot of ancillary work that is associated with the developer lifecycle.
That may be true! But my point is they also create new overhead in the process, and the net outcome to overall productivity isn't clear.
Unpacking some of your examples a bit --
Better code and documentation search: this is indeed beneficial to productivity, but how is it an agentic workflow that individual developers need to adopt and become productive with, relative to the previous status quo?
Documentation generation: between the awful writing style and the lack of trustworthiness, personally I think these easily reduce overall productivity, when accounting for humans consuming the docs. Or in the case of AI consuming docs written by other AI, you end up with an ever-worsening cycle of slop.
Automating low sev ticket triage: Potentially beneficial, but we're not talking about a revolutionary leap in overall team/org/company productivity here.
Low sev customer support: Sounds like a good way to infuriate customers and harm the business.
I think no one can predict what will happen. We need to wait until we can empirically observe who will be more productive on certain tasks.
That's why I started with AI coding. I wanted to hedge against the possibility that this takes off and I become useless. But it made me sad as hell, and so I just said: Screw it. If this is the future, I will NOT participate.
That’s fine, but you don’t want to be blindsided by changes in the industry. If it’s not for you, have a plan B career lined up so you can still put food on the table. Also, if you are good at both old-fashioned SE and AI, you’ll be OK either way.
They'll be more employable, not less. Since they're the only ones who will be able to fix the huge mess left behind by the people relying on them.
Don't think so.
There is nothing to learn, the entry barrier is zero. Any SWE can just start using it when they really need to.
Some of us will need time to learn to give less of a shit about quality.
Or you could learn how to do it the right way with quality intact. But it’s definitely your choice.
Adoption = number of users
Adoption rate = first derivative
Flattening adoption rate = the second derivative is negative
Starting to flatten = the third derivative is negative
I don't think anyone cares what the third derivative of something is when the first derivative could easily change by a macroscopic amount overnight.
Adoption rate is not the derivative of adoption; rate of change is. Adoption rate is the percentage of uptake (so, the same order as adoption itself). It flattening means the first derivative is getting close to 0.
I agree, I think I misunderstood their wording.
In which case it's at least funny, but maybe subtract one from all my derivatives... which kills my point too. Dang.
It maps pretty cleanly to the well understood derivatives of a position vector. Position (user count), velocity (first derivative, change in user count over time), acceleration (second derivative, speeding up or flattening of the velocity), and jerk (third derivative, change in acceleration such as the shift between from acceleration to deceleration)
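A toy numeric version of that analogy, with made-up adoption counts (not the article's data), using successive finite differences as stand-ins for the derivatives:

```python
# Made-up adoption counts per period; successive np.diff calls play the role
# of velocity, acceleration, and jerk for this "position" series.
import numpy as np

adoption = np.array([10, 14, 19, 25, 30, 34, 37, 39, 40])

velocity     = np.diff(adoption)       # 1st derivative: change per period
acceleration = np.diff(adoption, n=2)  # 2nd derivative: growth speeding up or flattening
jerk         = np.diff(adoption, n=3)  # 3rd derivative: change in the acceleration

print(velocity)      # [4 5 6 5 4 3 2 1]        growth keeps slowing after the peak
print(acceleration)  # [ 1  1 -1 -1 -1 -1 -1]   flips from speeding up to flattening
print(jerk)          # [ 0 -2  0  0  0  0]      the flip itself shows up here
```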
It really is a beautiful title.
It is a beautiful title and a beautiful way to think about it—alas, I think gp is right: here, from the charts anyway, the writer seems to mean the count of firms reporting adoption (as a proportion of total survey respondents).
Which paints a grimmer picture—I was surprised that they report a marked decline in adoption amongst firms of 250+ employees. That rate-as-first-derivative apparently turned negative months ago!
Then again, it’s awfully scant on context: does the absolute number of firms tell us much about how (or how productively) they’re using this tech? Maybe that’s for their deluxe investors.
It is not velocity, it is not change. Have you read the graphs? What do you think 12% in Aug and Sep for 250+ employee companies means: that another 12% of companies adopted AI, or that it's flat, as in "12% of the companies had adopted as of Aug, and it did not change in Sep"?
> Have you read the graphs?
Yes. The title specifically is beautiful. The charts aren't nearly as interesting, though probably a bit more than a meta discussion on whether certain time intervals align with one interpretation of the author's intent or another.
The function log(x) also has a derivative that gets closer and closer to 0.
However, lim x->inf log(x) is still inf.
Is it your assertion that an 'infinite' percentage(!) of businesses will use AI on a long enough time scale?
If you need everything to be math, at least have the courtesy to use the https://en.wikipedia.org/wiki/Logistic_function and not unbounded logarithmic curves when referring to our very finite world.
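For reference, the logistic function behind that link is bounded above, unlike log(x); here the ceiling L can be at most 100% of businesses:

```latex
f(x) = \frac{L}{1 + e^{-k(x - x_0)}}, \qquad \lim_{x \to \infty} f(x) = L
```

where L is the carrying capacity, k the growth rate, and x_0 the midpoint of the S-curve.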
While there's an extreme amount of hype around AI, it seems there's an equal amount of demand for signs that it's a bubble or it's slowing down.
Well, that’s only because it exhibits all the signs of a bubble. It’s not exactly a grand conspiracy.
You could use that logic to dismiss any analysis of any trajectory ever.
Perfectly excusable post that says absolutely nothing about anything.
Yeah, what a jerk.
Hehehehehheeh
You win today.
I can't believe I was downvoted for this silly comment on a third-derivative pun. Get a life, techie.
> Adoption = number of users
> Adoption rate = first derivative
If you mean with respect to time, wrong. The denominator in adoption rate that makes it a “rate” is the number of existing businesses, not time. It is adoption scaled to the universe of businesses, not the rate of change of adoption over time.
The adoption rate is the rate of adoption over time.
One could try to make an argument that "adoption rate" should mean change in adoption over time, but the meaning as used in this article is unambiguously not that. It's just percentages, not time derivatives, as clearly shown by the vertical axis labels.
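To make the two readings explicit, with A(t) the number of adopting firms and N the total number of firms surveyed (notation mine, not the article's):

```latex
\text{adoption rate as plotted} = \frac{A(t)}{N} \times 100\%,
\qquad
\text{adoption rate as a time derivative} = \frac{d}{dt}\!\left(\frac{A(t)}{N}\right)
```

The charts show the first quantity; the flattening everyone is arguing about is the claim that the second quantity is approaching zero.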
There's another axis on the charts.
Normally, the adoption rate of something is the percentage ratio of adopters to non-adopters.
I don’t understand, how can adoption rate change overnight if its derivative is negative? Trying to draw a parallel to get intuition, if adoption is distance, adoption rate speed, and the derivative of adoption rate is acceleration, then if I was pedal to the floor but then release the pedal and start braking, I’ll not lose the distance gained (adoption) but my acceleration will flatten then get negative and my speed (adoption rate) will ultimately get to 0 right? Seems pretty significant for an industry built on 2030 projections.
One announcement from a company or government can suddenly change the derivative discontinuously.
Derivatives IRL do not follow the rules of calculus that you learn in class, because they don't have to be continuous. (You could quibble that if you zoom in enough it can be regarded as continuous, but you don't gain anything from doing that; it really does behave discontinuously.)
Person who draws comparison from current situation to derivatives points out that derivatives rules don't apply to current situation.
Awesome stuff.
I don't understand your point. It seemed like the person I was replying to didn't understand how both claims could be simultaneously true so I was elaborating.
Not sure what kind of calculus you took; at least here in the States it's very standard to learn about such functions in class. And yes, there is a difference between being discontinuous and the slope being really large (though finite) for a brief period of time.
You rarely study delta and step functions in an introductory calculus class. In this case the first derivative would be a step function, in the sense that over any finite interval it appears to be discontinuous. Since you can only sample a function in reality there's no distinguishing the discontinuous version from its smooth approximation.
(I suppose a rudimentary version of this is taught in intro calc. It's been a long time so I don't really remember.)
I'm sure it depends on who's teaching the class and what curriculum they follow, but we were doing piecewise linear functions well before differentiation, so I think I do actually disagree, as per your caveat. It's also possible that the courses triaged material differently. As someone who took calc for engineers rather than calc for math majors, my experience may have been heavier on deltas and steps.
Not to be all “do you know who X is,” but I did have to chuckle a little when I saw who it is that you’re teaching differentiation to here…
As seems to have sort of happened between March and April of this year, at least from the Ramp chart in TFA. I wonder what that was about.
Derivatives in actual calculus don’t have to be continuous either. Consider the function defined by f(x) = x^2 sin(1/x) for x != 0; f(0) = 0.
The derivative at 0 exists and is 0, because lim h-> 0 (h^2 sin(1/h))/h = lim h-> 0 (h sin(1/h)), which equals 0 because the sin function is bounded.
When x !=0, the derivative is given by the product and chain rules as 2x sin(1/x) - cos(1/x), which obviously approaches no limit as x-> 0, and so the derivative exists but is discontinuous.
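A quick numerical illustration of that example (just sanity-checking the comment's claim): the difference quotient at 0 shrinks with h, while the closed-form derivative keeps oscillating as x approaches 0.

```python
# f(x) = x^2 sin(1/x) with f(0) = 0: the derivative exists everywhere,
# but near 0 it oscillates and never settles, so it is discontinuous at 0.
import math

def f(x):
    return x**2 * math.sin(1.0 / x) if x != 0 else 0.0

def fprime(x):
    # 2x sin(1/x) - cos(1/x), valid only for x != 0
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

for k in range(1, 7):
    h = 10.0 ** -k
    print(f"h=1e-{k}:  [f(h)-f(0)]/h = {f(h)/h:+.6f}   f'(h) = {fprime(h):+.6f}")
```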
Looking at the graphs in the linked article, a more accurate title would probably be "AI adoption has stagnated" - which a lot of people are going to care about.
Corporate AI adoption looks to be hitting a plateau, and adoption in large companies is even shrinking. The only market still showing growth is companies with fewer than 5 employees - and even there it's only linear growth.
Considering our economy is pumping billions into the AI industry, that's pretty bad news. If the industry isn't rapidly growing, why are they building all those data centers? Are they just setting money on fire in a desperate attempt to keep their share price from plummeting?
When all the dust settles, I think it's probably going to be the biggest bubble ever. The unjustified hype is unbelievable.
For some reason I can't even get Claude Code (Running GLM 4.6) to do the simplest of tasks today without feeling like I want to tear my hair out, whereas it used to be pretty good before.
They are all struggling mightily with the economics, and I suspect that after each big announcement of a new improved model x.y.z, where they demo some shiny so-called advancement, all the major AI companies heavily throttle their models in everyday use to save a buck.
At this point I'm seriously considering biting the bullet and avoiding all use of AI for coding, except for research and exploring codebases.
First it was Bitcoin, and now this, careening from one hyper-bubble to a worse one.
I think it might be answering long-term questions about direct chat use of AIs. Of course as AI goes through its macroscopic changes the amount it gets used for each person will increase, however some will continue to avoid using AI directly, just like I don't fully use GPS navigation but I benefit from it whether I like it or not when others are transporting me or delivering things to me.
Not really. In this context, adoption might be the number of users, but adoption rate is the fraction of users who adopted it out of all users.
Hm that's true. Both seem plausible in English. I didn't look closely enough to figure out which they meant.
Apollo published a similar chart in September 2025: https://www.apolloacademy.com/ai-adoption-rate-trending-down... - their headline for that one was "AI Adoption Rate Trending Down for Large Companies".
I had fun with that one getting GPT-5 and ChatGPT Code Interpreter to recreate it from a screenshot of the chart and some uploaded census data: https://simonwillison.net/2025/Sep/9/apollo-ai-adoption/
Then I repeated the same experiment with Claude Sonnet 4.5 after Anthropic released their own code interpreter style tool later on that same day: https://simonwillison.net/2025/Sep/9/claude-code-interpreter...
They show two different surveys that are supposed to show the same underlying truth but differ by a factor of 3x? For the Ramp survey: why the sudden jump from 30% to 50% in March? For the Census one: how could it possibly be that only 12% of companies with more than 250 people "adopted" (whatever that means) AI? It would be interesting if it were true, but these charts don't make any sense at all to me.
The Census Bureau asks if firms are using AI "to help produce goods or services". I guess that's intended to exclude not-yet-productive investigations, and maybe also indirect uses--does LLM-powered OCR for the expense reports for the travelling sales representatives for a widget factory count? That's all vague enough that I guess it works mostly as a sentiment check, where the absolute value isn't meaningful but the time trend might be.
The Ramp chart seems to use actual payment information from companies using their accounting platform. That should be more objective, though they don't disclose much about their methodology (and their customers aren't necessarily representative, the purpose and intensity of use aren't captured at all, etc.).
https://ramp.com/data/ai-index
> The Census Bureau asks if firms are using AI "to help produce goods or services"
That's odd. I use AI tools at work occasionally, but since our business involves selling physical goods, I guess we would not count as an AI adopter in this survey.
My guess is AI will find niches where it provides productivity boosts, but won’t be as useful in the majority of fields. Right now, AI works pretty well for coding, and doesn’t really excel anywhere else. It’s not looking like it will get good enough to disrupt the economy at large.
Aside from financially-motivated "testimonials," there's no broad evidence that it even works that well for coding, with many studies even showing the opposite. Damning with faint praise.
It depends on a lot of things.
I know JavaScript on a pretty surface level, but I can use Claude to wire up react and tailwind, and then my experience with all the other programming I’ve done gives me enough intuition to clean it up. That helps me turn rough things into usable tools that can be reused or deployed in small scale.
That’s a productivity increase for sure.
It has not helped me with the problems that I need to spend 2-5 days just thinking about and wrapping my head around solutions to. Even if it does come up with solutions that pass tests, they still need to be scrutinized and rewritten.
But the small tasks it’s good at add up to being worth the price tag for a subscription.
Do you feel like you begin to _really_ understand React and Tailwind? Major tools that you seem to use now.
Do you feel that you will become so well-versed in it that you will be able to debug weird edge cases in the future?
Will you be able to reason about performance? Develop deep intuition why pattern X doesn't work for React but pattern Y does. etc?
I personally learned for myself that this learning is not happening. My knowledge of tools that I used LLMs for stayed pretty superficial. I became dependent on the machine.
I think what’s clear is many people feel much more productive coding with LLMs, but perceived and actual productivity don’t necessarily correlate. I’m sure results vary quite a bit.
My hunch is that long term value might be quite low: a few years into vibe coding huge projects, developers might hit a wall with a mountain of slop code they can no longer manage or understand. There was an article here recently titled “vibe code is legacy code” which made a similar argument. Again, results surely vary wildly
It feels like it's creating economic activity in the tech sector the same way that walking down the street and smashing everyone's windshields would create economic activity for local auto shops.
Given the charts, that’s a ridiculous claim. Just compare early 2024 in the first chart, for example.
It’s way too early to decide whether it’s flattening out.
It's just printing headlines out of nothing. If it tried to answer why the two graphs show such different numbers (one ~14%, the other ~55%) I'd be more interested.
> Note: Data is six-survey moving average. The survey is conducted bi-weekly. Sources: US Census Bureau, Macrobond, Apollo Chief Economist
> Note: Ramp AI Index measures the adoption rate of artificial intelligence products and services among American businesses. The sample includes more than 40,000 American businesses and billions of dollars in corporate spend using data from Ramp’s corporate card and bill pay platform. Sources: Ramp, Bloomberg, Macrobond, Apollo Chief Economist
It seems that the real interesting thing to see here is that the companies using Ramp are extremely atypical.
Three consecutive months of decline starts to look more like a trend. Unless you think there's a transient issue causing the decline, something fundamental has changed.
Again: compare early 2024. And that’s not the only thing; the second chart shows a possible flattening, but by no means certain yet, especially not when taken with the clear March–April jump; and the first chart shows no dwindling in 1–4, and clear recovery in 250+. The lie is easily put to the claim the article makes:
> Data from the Census Bureau and Ramp shows that AI adoption rates are starting to flatten out across all firm sizes, see charts below.
It’s flat-out nonsense, and anyone with any experience in this kind of statistics can see it.
Especially interesting is the adoption by the smallest companies. It means people still find it increasingly useful at the grassroots level, where things actually get done.
At larger companies, adoption will probably stop at the level where managers start to feel threatened.
But what does that grassroots adoption look like in practice? Is it a developer spending $250/month on Claude, or a local corner shop using it once a month to replace their clip-art flyer with AI slop, and replacing the example contract they previously found via Google with some legalese gobbledygook ChatGPT hallucinated?
Giving AI away for free to people who don't give a rat's ass about the quality of its output isn't very difficult. But that's not exactly going to pay your datacenter bill...
What is their definition of adoption? A company where every employee has some level of access to AI is the bare minimum of “full adoption” for a given company but a threadbare one.
A company that has implemented most current AI technologies, in their applicable areas, as known, functioning capabilities? That is a vastly larger definition of full adoption.
It's the difference between access and full utilization. The gulf is massive. And I'm not aware of any major company, or really any company at all, that has said, "yep, we're done, we're doing everything we think we can with AI and we're not going to try to improve upon it."
Implementation of acquired capabilities is still in very early days, and it appears this study's definition is more like user access, not completed implementations. Somewhat annoyingly, I receive 3 or 4 calls a day, sometimes on weekends, from contracting firms looking for leads, TPMs, and ML/data scientists with genai/workflow experience. Three months ago, without having done anything to put my name out any more than however it had been found before that, I was only getting one every day or two.
I don't think this study is using a useful definition for what they intend to measure. It is certainly not capturing more than a fraction of activity.
Does it really matter? In this case, it is the perception that matters. If companies feel that AI is not quite as helpful as they thought it might be, even if they have not maxed out what they theoretically could do with it, then that is all that matters in trying to get a sense of where this might go.
If I were OpenAI or whatever, I would invest in circular partnerships with Claude or whatever, claim agentic use should be counted the same as real users, then have each other's LLM systems use each other and finally achieve infinite, uncapped user growth.
Why would they not define what adoption rate means? And why are the “Ramp AI adoption rates” 3-4x the plain “AI adoption rates”?
From the chart, the percentage of companies using AI has been going down over the past couple of months.
That's a massive deal, because the AI companies today are valued on the assumption that they'll 10x their revenue over the next couple of years. If their revenue growth starts to slow down, their valuations will change to reflect that.
This bubble phase will play out just as the previous have in tech: consolidation, most of the value creation will go to a small group of companies. Most will die, some will thrive.
Companies like Anthropic will not survive as independents. They won't come close to having enough revenue and profit to sustain their operating costs (they're the Lyft to Google's or OpenAI's Uber; Anthropic will never reach the scale needed to roll over into significant profit generation). Its fair value is 1/10th or less of what it's being valued at currently (yes, because I say so).
Anthropic's valuation will implode to reconcile that, as will the market for AI generally. Some larger company will scoop them up during the pain phase, once they get desperate enough to sell.
When the implosion of the speculative hype is done, the real value creation will begin. Over the following two or three decades, a radical amount of value will be generated by AI collectively, far beyond anything seen during this hype phase. A lot of lesser AI companies will follow the same path as Anthropic.
No no, we just need to put even more money in.
The least volatile dataset, employee count 1-4 businesses, is steadily climbing in adoption. I feel like as long as the smallest businesses (so the most agile, non-enterprise software ones) increase in adoption, other sizes will follow.
Not to be lost, but the first chart is actually a 3-month moving average. Surprised they buried that in the notes and didn't simply include it in the chart title. "Note: Data is six-survey moving average. The survey is conducted bi-weekly. Sources: US Census Bureau, Macrobond, Apollo Chief Economist"
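For anyone who wants to see what that smoothing does, here is a minimal pandas sketch; the dates, percentages, and column names are invented for illustration and are not the Census data.

    # Minimal sketch (invented numbers): the "six-survey moving average" from the
    # chart note, applied to a bi-weekly adoption series. Six bi-weekly surveys
    # span roughly 12 weeks, i.e. about a quarter of smoothing.
    import pandas as pd

    surveys = pd.DataFrame({
        "survey_date": pd.date_range("2025-01-05", periods=12, freq="14D"),
        "adoption_pct": [9.1, 9.4, 9.0, 9.8, 10.2, 10.1,
                         10.6, 10.4, 10.9, 11.0, 10.8, 11.1],
    })

    # Rolling mean over the six most recent surveys; the first five rows are NaN
    # because the window isn't full yet.
    surveys["six_survey_ma"] = surveys["adoption_pct"].rolling(window=6).mean()
    print(surveys)

With a six-survey window, any single point in the published chart reflects about three months of responses, so recent month-to-month wiggles should be read with some caution.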
Without weighing in on the accuracy of this claim, this would be an expected part of the maturity cycle.
Compare to databases. You could probably have plotted a chart of database adoption rates in the '90s as small companies started running e.g. Lotus Notes, FoxPro, and SQL Server everywhere to build in-house CRMs and back-office apps. Those companies still operate those functions, but now most small businesses do not run databases themselves. Why manage SQL Server when you can just pay for Salesforce and Notion with predictable monthly spend?
(All of this is more complex, but analogous at larger companies.)
My take is the big rise in AI adoption, if it arrives, will similarly be embedded inside application functions.
People push back against comments like these. But, as you suggest, the win isn't about individual developers potentially increasing their productivity by some inflated amount. It's about baking more prediction and automation into more tools that people who aren't developers use. Which is probably part of where the general meme of lack of interest in entry-level programmers comes from.
It's actually surprising when programmers (especially) push back. A couple of years ago, people were doing copy/paste from ChatGPT to their IDEs. Now they generally work at a higher level of abstraction in dedicated tools like Coded or Cursor. Why would other functions prefer the copy/paste lifestyle?
I think what is happening is that people are realizing AI is not just plug and play. It can do amazing things but needs engineering around it.
I think what will happen is in parallel more products will be built that address the engineering challenges and the models will keep getting better. I don't know though if that will lead to another hockey stick or just slow and steady.
I'm more interested in what the implications are for the economy and what this next AI winter looks like.
What happens to all the debt? Was all this just for chatbots that are finally barely good enough for satnav, and image gen that amounts to a slightly better Photoshop the layperson can use?
What a shitty plot. Here are the sins I count:
1. No y axis label.
2. It supposedly plots a “rate”, but the time interval is unspecified. Per second? Per month? Per year? Intuitively my best guess is that the rate is per-year. However that would imply the second plot believes we are very near to 100% adoption, which I think we know is false. So what is this? Some esoteric time interval like bi-yearly?
3. More likely, it is not a rate at all, but instead a plot of total adoption. In this case, the title is chosen _very_ poorly. The author of the plot probably doesn’t know what they’re looking at.
4. Without grid lines, it’s very hard to read the data in the middle of the plot.
There is no need for a 'rate' to be against a time interval. The conversion rate of an email is purchases / emails sent. The fatality rate of a disease is casualties / people infected. It really just means a ratio.
Ok, but in this case, a ratio between what and what?
The number of businesses that adopted AI versus the number of businesses.
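As a toy example (numbers invented, nothing to do with the actual surveys):

    # Toy numbers, purely illustrative: "adoption rate" here is just a share,
    # with no time interval attached.
    firms_surveyed = 1200
    firms_using_ai = 168
    adoption_rate = firms_using_ai / firms_surveyed  # 0.14 -> reported as "14% adoption"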
The average person has no idea what to use AI for to get substantial value out of what it can now do.
It's the switch between: know which service to use, consider capabilities, try to get AI to do a thing, if you even have a thing that needs done that it can do; versus: AI just does a thing for you, requiring little to no thought. Very active vs very passive. Use will go up in direct relation to that changeover. The super users are already at peak, they're fully engaged. A software developer wants a very active relationship with their AI; Joe Average does not.
The complexity has to vanish entirely. It's the difference between hiding the extraordinary engineering that is Google search behind a simple input box, and making users select a hundred settings before firing off a search. Imagine if the average search user needed to know something meaningful about the capabilities of Google search or search in general, before using it. Prime Google search (~1998-2016) obliterated the competition (including the portals) with that one simple search box, by shifting all the complexity to the back-end; they made it so simple the user really couldn't screw anything up. That's also why ChatGPT got so far so fast: input box, type something, complexity mostly hidden.
so no exponential growth? who would have guessed?
/s