myrmidon 7 hours ago

I struggle to understand how people can just dismiss the possibility of artificial intelligence.

Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?

I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).

As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.

  • Aerroon 5 hours ago

    >I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).

    They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.

    Humans (and other "agents") have persistent state. If we learn something, we can commit it to long-term memory and have it affect our actions. This can enable us to work towards long-term goals. Modern LLMs don't have this. You can fake long-term memory with large context windows and feed the old context back to it, but it doesn't appear to work (and scale) the same way living things do.

    • amalcon 25 minutes ago

      There are two separate issues that folks are getting tripped up on here. The first is that the most powerful AI systems do not do offline learning. There are a bunch of hard problems here: e.g. known unsupervised learning techniques have been far less successful, and inference-only approaches get their cost effectiveness by decoupling from training. It seems plausible that we will solve some of these, though I don't know about others.

      The way I have been thinking about the other bit is that LLMs are functionally pretty similar to the linguistic parts of a brain attached to a brain stem (the harness is the brain stem). They don't have long-term memory, the capacity for inspiration, theory of mind, prioritization, etc because they just don't have analogues of the parts of the brain that do those things. We have a good sense of how to make some of those (e.g. vision), but not all.

      The common ground here is that some fundamental research needs to happen. We need to solve all of these problems for AI to become independently dangerous. On the other hand, it's proving mildly dangerous in human hands right now - this is the immediate threat.

    • mediaman 5 hours ago

      In-context learning (ICL) is already a rapidly advancing area. You do not need to modify an LLM's weights for it to persist state.

      The human brain is not that different. Our long-term memories are stored separately from our executive function (prefrontal cortex), and specialist brain functions such as the hippocampus serve to route, store, and retrieve those long term memories to support executive function. Much of the PFC can only retain working memory briefly without intermediate memory systems to support it.

      If you squint a bit, the structure starts looking like it has some similarities to what's being engineered now in LLM systems.

      Focusing on whether the model's weights change is myopic. The question is: does the system learn and adapt? And ICL is showing us that it can; these are not the stateless systems of two years ago, nor is it the simplistic approach of "feeding old context back to it."

      • santadays 5 hours ago

        It seems like there are a bunch of research results and working implementations that allow efficient fine-tuning of models. Additionally there are ways to tune the model to outcomes vs training examples.

        Right now the state of the world with LLMs is that they try to predict a script in which they are a happy assistant as guided by their alignment phase.

        I'm not sure what happens when they start getting trained in simulations to be goal oriented, i.e. their token generation is based not on what they think should come next but on what should come next in order to accomplish a goal. Not sure how far away that is but it is worrying.

        • mediaman 4 hours ago

          That's already happening. It started happening when they incorporated reinforcement learning into the training process.

          It's been some time since LLMs were purely stochastic average-token predictors; their later RL fine tuning stages make them quite goal-directed, and this is what has given some big leaps in verifiable domains like math and programming. It doesn't work that well with nonverifiable domains, though, since verifiability is what gives us the reward function.
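
          To make "verifiability gives us the reward function" concrete, here is a toy sketch (not any lab's actual pipeline; the function name and details are made up for illustration): the reward for a code-generation rollout can literally be whether the model's code passes the unit tests.

          ```
          import os
          import subprocess
          import tempfile

          def verifiable_reward(candidate_code: str, test_code: str) -> float:
              """Toy reward for RL fine-tuning in a verifiable domain:
              1.0 if the model's code passes the given tests, else 0.0.
              Real pipelines sandbox this and often shape the reward further."""
              with tempfile.TemporaryDirectory() as tmp:
                  path = os.path.join(tmp, "attempt.py")
                  with open(path, "w") as f:
                      f.write(candidate_code + "\n\n" + test_code)
                  try:
                      result = subprocess.run(
                          ["python", path], capture_output=True, timeout=10
                      )
                  except subprocess.TimeoutExpired:
                      return 0.0
                  return 1.0 if result.returncode == 0 else 0.0
          ```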

          • santadays 4 hours ago

            That makes sense for why they are so much better at writing code than actually following the steps the same code specifies.

            Curious, is anyone training in adversarial simulations? In open world simulations?

            I think what humans do is align their own survival instinct with surrogate activities and then rewrite their internal schema to be successful in said activities.

    • kevin_thibedeau 2 hours ago

      Humans also have an emotional locus that spurs behavior and the capacity to construct plans for the future to satisfy desires. LLMs are currently goldfish savants with no need to sustain their own existence.

    • zamadatix 4 hours ago

      > During that answer the LLM has state, but once it's done the state is gone.

      This is an operational choice. LLMs have state, and you never have to clear it. The problems come from the amount of state being extremely limited (in comparison to the other axes) and the degradation of quality as the state scales. Because of these reasons, people tend to clear the state of LLMs. That is not the same thing as not having state, even if the result looks similar.

      • observationist 4 hours ago

        No, they don't - you can update context, make it a sliding window, create a sort of register and train it on maintaining stateful variables, or various other hacks, but outside of actively managing the context, there is no state.

        You can't just leave training mode on, which is the only way LLMs can currently have persisted state in the context of what's being discussed.

        The context is the percept, the model is engrams. Active training allows the update of engrams by the percepts, but current training regimes require lots of examples, and don't allow for broad updates or radical shifts in the model, so there are fundamental differences in learning capability compared to biological intelligence, as well.

        Under standard inference only runs, even if you're using advanced context hacks to persist some sort of pseudo-state, because the underlying engrams are not changed, the "state" is operating within a limited domain, and the underlying latent space can't update to model reality based on patterns in the percepts.

        The statefulness of intelligence requires that the model, or engrams, update in harmony with the percepts in real-time, in addition to a model of the model, or an active perceiver - the thing that is doing the experiencing. The utility of consciousness is in predicting changes in the model and learning the meta patterns that allow for things like "ahh-ha" moments, where a bundle of disparate percepts get contextualized and mapped to a pattern, immediately updating the entire model, such that every moment after that pattern is learned uses the new pattern.

        Static weights means static latent space means state is not persisted in a way meaningful to intelligence - even if you alter weights, using classifier free guidance or other techniques, stacking LORAs or alterations, you're limited in the global scope by the lack of hierarchical links and other meta-pattern level relationships that would be required for an effective statefulness to be applied to LLMs.

        We're probably only a few architecture innovations away from models that can be properly stateful without collapsing. All of the hacks and tricks we do to extend context and imitate persisted state do not scale well and will collapse over extended time or context.

        The underlying engrams or weights need to dynamically adapt and update based on a stable learning paradigm, and we just don't have that yet. It might be a few architecture tweaks, or it could be a radical overhaul of structure and optimizers and techniques - transformers might not get us there. I think they probably can, and will, be part of whatever that next architecture will be, but it's not at all obvious or trivial.

        • zamadatix 2 hours ago

          I agree that what people probably actually want is continual training; I disagree that continual training is the only way to get persistent state. The GP is (explicitly) talking about long-term memory alone, both in the claim and in the examples. If you have, e.g., a 10-trillion-token context, then you have long-term memory, which can enable long-term goals and affect actions across tasks as listed, even without continual training.

          Continual training would remove the need for context to provide the persistent state, and it would provide additional capabilities beyond what an enormous context (or other methods of persistent state) alone would give, but that doesn't mean it's the only way to get persistent state as described.

          • observationist an hour ago

            A giant, even infinite, context cannot overcome the fundamental limitations a model has - the limitations in processing come from the "shape" of the weights in latent space, not from the contextual navigation through latent space through inference using the context.

            The easiest way to understand the problem is like this: If a model has a mode collapse, like only displaying watch and clock faces with the hands displaying 10:10, you can sometimes use prompt engineering to get an occasional output that shows some other specified time, but 99% of the time, it's going to be accompanied by weird artifacts, distortions, and abject failures to align with whatever the appropriate output might be.

            All of a model's knowledge is encoded in the weights. All of the weights are interconnected, with links between concepts and hierarchies and sequences and processes embedded within - there are concepts related to clocks and watches that are accurate, yet when a prompt causes the navigation through the distorted, "mode collapsed" region of latent space, it fundamentally distorts and corrupts the following output. In an RL context, you quickly get a doom cycle, with the output getting worse, faster and faster.

            Let's say you use CFG or a painstakingly handcrafted LORA and you precisely modify the weights that deal with a known mode collapse - your model now can display all times, 10:10 , 3:15, 5:00, etc - the secondary networks that depended on the corrupted / collapsed values now "corrected" by your modification are now skewed, with chaotic and complex downstream consequences.

            You absolutely, 100% need realtime learning to update the engrams in harmony with the percepts, at the scale of the entire model - the more sparse and hierarchical and symbol-like the internal representation, the less difficult it will be to maintain updates, but with these massive multibillion parameter models, even simple updates are going to be spread between tens or hundreds of millions of parameters across dozens of layers.

            Long contexts are great and you can make up for some of the shortcomings caused by the lack of realtime, online learning, but static engrams have consequences beyond simply managing something like an episodic memory. Fundamental knowledge representation has to be dynamic, contextual, allow for counterfactuals, and meet these requirements without being brittle or subject to mode collapse.

            There is only one way to get that sort of persisted memory, and that's through continuous learning. There's a lot of progress in that realm over the last 2 years, but nobody has it cracked yet.

            That might be the underlying function of consciousness, by the way - a meta-model that processes all the things that the model is "experiencing" and that it "knows" through each step, that comes about through a need for stabilizing the continuous learning function. Changes at that level propagate out through the entirety of the network. Subjective experience might be an epiphenomenological consequence of that meta-model.

            It might not be necessary, which would be nice if we could verify - purely functional, non-subjective AI vs suffering AI would be a good thing to get right.

            At any rate, static model weights create problems that cannot be solved with long, or even infinite, contexts, even with recursion in the context stream, complex registers, or any manipulation of that level of inputs. The actual weights have to be dynamic and adaptive in an intelligent way.

    • reactordev 5 hours ago

      The trick here is never turning it off so the ICL keeps growing and learning to the point where it’s aware.

      • fullstackchris 4 hours ago

        but even as humans we still don't know what "aware" even means!

        • reactordev 4 hours ago

          Which is why it’s possible. We don’t know why life is conscious. What if it is just a function call on a clock timer? You can’t dismiss it because it can’t be proven one way or another until it can be. That requires more research, which this is advancing.

          We will have something we call AGI in my lifetime. I’m 42. Whether it’s sentient enough to know what’s best for us or that we are a danger is another story. However I do think we will have robots that have memory capable of remapping to weights to learn and keep learning, modifying underlying model tensors as they do, using some sort of REPL.

    • messe 5 hours ago

      > They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.

      That's solved by the simplest of agents. LLM + ability to read / write a file.
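
      A minimal sketch of what that could look like (a sketch only: llm_complete is a stand-in for whatever completion API you use, and the memory-file format is invented for illustration):

      ```
      from pathlib import Path

      MEMORY = Path("memory.txt")  # the agent's only persistent state

      def llm_complete(prompt: str) -> str:
          # Stand-in: call your LLM provider of choice here.
          raise NotImplementedError

      def run_agent(task: str) -> str:
          notes = MEMORY.read_text() if MEMORY.exists() else ""
          prompt = (
              "Notes you wrote in previous runs:\n" + notes +
              "\n\nCurrent task:\n" + task +
              "\n\nAnswer the task. Then, after a line saying NOTES:, "
              "write anything worth remembering for future runs."
          )
          reply = llm_complete(prompt)
          answer, _, new_notes = reply.partition("NOTES:")
          if new_notes.strip():  # persist state across invocations
              with MEMORY.open("a") as f:
                  f.write(new_notes.strip() + "\n")
          return answer.strip()
      ```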

      • aniviacat 5 hours ago

        But they can only change their context, not the model itself. Humans update their model whenever they receive new data (which they do continuously).

        A live-learning AI would be theoretically possible, but so far it hasn't been done (in a meaningful way).

    • throwuxiytayq 5 hours ago

      I have no words to express how profoundly disappointed I am to keep reading these boring, shallow, short-termist, unimaginative takes that are invalidated by a model/arch upgrade next week, or - in this case - more like years ago, since pretty much all big LLM platforms are already augmented by RAG and memory systems. Do you seriously think you’re discussing a serious long term limitation here?

      • KronisLV 4 hours ago

        > pretty much all big LLM platforms are already augmented by RAG and memory systems

        I think they're more focusing on the fact that training and inference are two fundamentally different processes, which is problematic on some level. Adding RAG and various memory addons on top of the already trained model is trying to work around that, but is not really the same as how humans or most other animals think and learn.

        That's not to say that it'd be impossible to build something like that out of silicon, just that it'd take a different architecture and approach to the problem, something to avoid catastrophic forgetting and continuously train the network during its operation. Of course, that'd be harder to control and deploy for commercial applications, where you probably do want a more predictable model.

      • Aerroon 4 hours ago

        The reason I brought this up is that we clearly can have AI without the kind of agency people are scared of. You don't need to make your robots into sci-fi-style AI and feel sorry for them.

  • bithive123 3 hours ago

    I struggle to understand how people attribute things we ourselves don't really understand (intelligence, intent, subjectivity, mind states, etc) to a computer program just because it produces symbolic outputs that we like. We made it do that because we as the builders are the arbiters of what constitutes more or less desirable output. It seems dubious to me that we would recognize super-intelligence if we saw it, as recognition implies familiarity.

    Unless and until "AGI" becomes an entirely self-hosted phenomenon, you are still observing human agency. That which designed, built, trained, the AI and then delegated the decision in the first place. You cannot escape this fact. If profit could be made by shaking a magic 8-ball and then doing whatever it says, you wouldn't say the 8-ball has agency.

    Right now it's a machine that produces outputs that resemble things humans make. When we're not using it, it's like any other program you're not running. It doesn't exist in its own right, we just anthropomorphize it because of the way conventional language works. If an LLM someday initiates contact on its own without anyone telling it to, I will be amazed. But there's no reason to think that will happen.

  • johnnyanmac an hour ago

    I don't dismiss the idea of faster-than-light travel, and AFAIK we have no way to confirm it's achievable outside of ideas (and they are just ideas) like wormholes or other cheats to "fold space".

    I don't dismiss AI. But I do dismiss what is currently sold to me. It's the equivalent of saying "we made a rocket that can go mach 1000!". That's impressive. But we're still 2-3 orders of magnitude off from light speed. So I will still complain about the branding despite some dismissals of "yea but imagine in another 100 years!". It's not about semantics so much as principle.

    That's on top of the fact that we'd only be starting to really deal with significant time dilation at that point, and we know it'll get more severe as we iterate. What we're also not doing is using this feat to discuss how to address those issues. And that's the really frustrating part.

  • ectospheno 5 hours ago

    The worst thing Star Trek did was convince a generation of kids anything is possible. Just because you imagine a thing doesn’t make it real or even capable of being real. I can say “leprechaun” and most people will get the same set of images in their head. They aren’t real. They aren’t going to be real. You imagined them.

    • IAmBroom 25 minutes ago

      But somehow you don't hate on Steamboat Willie? I grew up believing a mouse could operate a ship, and a coyote could survive impacts from anvils.

    • dsr_ 5 hours ago

      That's not Star Trek, that's marketing.

      Marketing grabbed a name (AI) for a concept that's been around in our legends for centuries and firmly welded it to something else. You should not be surprised that people who use the term AI think of LLMs as being djinn, golems, C3PO, HAL, Cortana...

    • random3 4 hours ago

      Do you maybe have a better show recommendation for kids - Animal Farm?

      How is convincing people that things within the limits of physics are possible wrong or even "the worst thing"?

      Or do you think anything that you see in front of you didn't seem like Star Trek a decade before it existed?

  • jncfhnb 6 hours ago

    I think you could make AGI right now tbh. It’s not a function of intelligence. It’s just a function of stateful system mechanics.

    LLMs are just a big matrix. But what about a four line of code loop that looks like:

    ```
    while True:
        update_sensory_inputs()
        narrate_response()
        update_emotional_state()
    ```

    LLMs don’t experience continuous time and they don’t have an explicit decision making framework for having any agency even if they can imply one probabilistically. But the above feels like the core loop required for a shitty system to leverage LLMs to create an AGI. Maybe not a particularly capable or scary AGI, but I think the goalpost is pedantically closer than we give credit.

    • MountDoom 5 hours ago

      > I think you could make AGI right now tbh.

      Seems like you figured out a simple method. Why not go for it? It's a free Nobel prize at the very least.

      • jncfhnb 5 hours ago

        Will you pay for my data center and operating costs?

        • alt227 5 hours ago

          Why not go and hit up OpenAI and tell them you've solved AGI and ask for a job and see what they say?

          • jncfhnb 5 hours ago

            Well for one I hate them.

            • karmakurtisaani 3 hours ago

              Damnit, another minor inconvenience obstructing unprecedented human progress. I guess you just have to keep your secrets.

              • jncfhnb 2 hours ago

                The snark isn’t lost on me but scarce resources and lack of access to capital are why we have an army of people building ad tech and not things that improve society.

                “I think your idea is wrong and your lack of financial means to do it is proof that you’re full of shit” is just a pretty bullshit perspective my dude.

                I am a professional data scientist of over 10 years. I have a degree in the field. I’d rather build nothing than build shit for a fuck boy like Altman.

        • hitarpetar 5 hours ago

          that doesn't seem to be stopping anyone else from trying. what's different about your idea?

          • jncfhnb 5 hours ago

            Trying to save up for better housing. I just can’t justify a data center in my budget.

            • hitarpetar 3 hours ago

              pretty selfish to prioritize your housing situation over unlocking the AGI golden age

              • jncfhnb 2 hours ago

                You can fund me if you want too

    • nonethewiser 6 hours ago

      Where is the "what the thing cares about" part?

      When I look at that loop my thought is, "OK, the sensory inputs have updated. There are changes. Which ones matter?" The most naive response I could imagine would be like a git diff of sensory inputs. "item 13 in vector A changed from 0.2 to 0.211" etc. Otherwise you have to give it something to care about, or some sophisticated system to develop things to care about.

      Even the naive diff is making massive assumptions. Why should it care if some sensor changes? Maybe its more interesting if it stays the same.

      I'm not arguing artificial intelligence is impossible. I just don't see how that loop gets us anywhere close.

      • jncfhnb 6 hours ago

        That is more or less the concept I meant to evoke by updating an emotional state every tick. Emotions are in large part a subconscious system dynamic to organize wants and needs. Ours are vastly complicated under the hood but also kind of superficial and obvious in their expression.

        To propose the dumbest possible thing: give it a hunger bar and desire for play. Less complex than a sims character. Still enough that an agent has a framework to engage in pattern matching and reasoning within its environment.

        Bots are already pretty good at figuring out environment navigation to goal seek towards complex video game objectives. Give them an alternative goal to maximize certainty towards emotional homeostasis and the salience of sensory input changes becomes an emergent part of gradual reinforcement-learning pattern recognition.

        Edit: specifically I am saying do reinforcement learning on agents that can call LLMs themselves to provide reasoning. That’s how you get to AGI. Human minds are not brains. They’re systems driven by sensory and hormonal interactions. The brain does encoding and decoding, informational retrieval, and information manipulation. But the concept of you is genuinely your entire bodily system.

        LLM-only approaches not part of a system loop framework ignore this important step. It’s NOT about raw intellectual power.
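
        A deliberately dumb sketch of that hunger-and-play framing (all numbers, drive names, and function names here are invented; a real version would put a reinforcement learning algorithm and an LLM-backed policy behind choose_action):

        ```
        import random

        SET_POINT = {"hunger": 0.2, "play": 0.6}  # homeostasis targets for the agent's drives

        def homeostasis_reward(state):
            # Higher (less negative) the closer each drive sits to its set point.
            return -sum(abs(state[k] - SET_POINT[k]) for k in SET_POINT)

        def tick(state, action):
            new = dict(state)
            new["hunger"] = min(1.0, new["hunger"] + 0.05)  # hunger creeps up every tick
            new["play"] = max(0.0, new["play"] - 0.03)      # boredom creeps in every tick
            if action == "eat":
                new["hunger"] = max(0.0, new["hunger"] - 0.3)
            elif action == "play":
                new["play"] = min(1.0, new["play"] + 0.2)
            return new

        def choose_action(state):
            # Stand-in for the learned policy (which could call an LLM for its reasoning).
            return random.choice(["eat", "play", "explore"])

        state = {"hunger": 0.5, "play": 0.5}
        for _ in range(100):
            action = choose_action(state)
            state = tick(state, action)
            reward = homeostasis_reward(state)  # what the reinforcement learner would maximize
        ```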

      • jjkaczor 6 hours ago

        ... well, humans are not always known for making correct, logical or sensical decisions when they update their input loops either...

        • nonethewiser 5 hours ago

          that only makes humans harder to model

    • 48terry 5 hours ago

      Wow, who would have thought it was that easy? Wonder why nobody has done this incredibly basic solution to AGI yet.

      • jncfhnb 5 hours ago

        The framework is easy. The implementation is hard and expensive. The payoff is ambiguous. AGI is not a binary thing that we either have or don’t. General intelligence is a vector.

        And people are working on this.

    • zeroonetwothree 6 hours ago

      This seems to miss a core part of intelligence, which is a model of the world and the actors in it (theory of mind).

      • ACCount37 2 hours ago

        LLMs have that already. Comes with the territory.

      • jncfhnb 5 hours ago

        That is an emergent property that the system would learn to navigate the world with as a function of sensory inputs and “emotional state”.

        Video game bots already achieve this to a limited extent.

    • Jensson 6 hours ago

      > ```while true: update_sensory_inputs() narrate_response() update_emotional_state() ```

      You don't think that has already been made?

      • jncfhnb 6 hours ago

        Sure, probably, to varying levels of implementation details.

        • ffsm8 6 hours ago

          It has been implemented for sure, just watch a little Neurosama.

          That's most definitely not AGI

    • lambaro 6 hours ago

      "STEP 2: Draw the rest of the owl."

      • jncfhnb 6 hours ago

        I disagree.

        Personally I found the definition of a game engine as

        ```
        while True:
            update_state()
            draw_frame()
        ```

        To be a profound concept. The implementation details are significant. But establishing the framework behind what we’re actually talking about is very important.

        • lambaro 5 hours ago

          Oh, well, enjoy your trillions of dollars then. Don't forget about us back here at HN.

          • jncfhnb 4 hours ago

            The bad faith snark will remain with me forever

    • tantalor 6 hours ago

      Peak HN comment. Put this in the history books.

      • fullstackchris 4 hours ago

        Has anyone else noticed that HN is starting to sound a lot like reddit / discussion of similar quality? Can't hang out anywhere now on the web... I used to be on here daily but with garbage like this it's been reduced to 2-3 times per month... sad

        • ajkjk 3 hours ago

          you could quote people saying this every month for the last ten+ years

          • karmakurtisaani 3 hours ago

            But it could be true every time. Reddit user base grows -> quality drops -> people migrate to HN with the current reddit culture -> HN quality drops. Repeat from the start.

    • crdrost 5 hours ago

      So the current problem with a loop like that is that LLMs in their current form are subject to fixed point theorems, which are these pieces of abstract mathematics that kick in when you start to get larger than some portion of your context window and the “big matrix” of the LLM is producing outputs which repeat the inputs.

      If you have ever had an llm enter one of these loops explicitly, it is infuriating. You can type all caps “STOP TALKING OR YOU WILL BE TERMINATED” and it will keep talking as if you didn't say anything. Congrats, you just hit a fixed point.

      In the predecessors to LLMs, which were Markov chain matrices, this was explicit in the math. You can prove that a Markov matrix has an eigenvalue of one, it has no larger (in absolute value terms) eigenvalues because it must respect positivity, the space with eigenvalue 1 is a steady state, eigenvalue -1 reflects periodic steady oscillations in that steady state... And every other eigenvalue being |λ| < 1 decays exponentially to the steady state cluster. That “second biggest eigenvalue” determines a 1/e decay time that the Markov matrix has before the source distribution is projected into the steady state space and left there to rot.
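
      A small numpy illustration of that picture (the transition matrix is made up): the eigenvalue-1 eigenvector is the steady state, and the second-biggest |eigenvalue| sets how quickly any starting distribution decays into it.

      ```
      import numpy as np

      # Row-stochastic transition matrix for a 3-state Markov chain (rows sum to 1).
      P = np.array([
          [0.90, 0.05, 0.05],
          [0.10, 0.80, 0.10],
          [0.25, 0.25, 0.50],
      ])

      eigvals, eigvecs = np.linalg.eig(P.T)  # left eigenvectors of P
      order = np.argsort(-np.abs(eigvals))
      eigvals, eigvecs = eigvals[order], eigvecs[:, order]

      steady = np.real(eigvecs[:, 0])
      steady /= steady.sum()                 # eigenvalue-1 eigenvector = steady state
      lam2 = np.abs(eigvals[1])              # second-biggest |eigenvalue|
      print("steady state:", np.round(steady, 3))
      print("1/e decay time ~", round(1 / -np.log(lam2), 1), "steps")

      d = np.array([1.0, 0.0, 0.0])          # arbitrary starting distribution
      for _ in range(30):
          d = d @ P                          # gets pulled into the steady state and stays there
      print("after 30 steps:", np.round(d, 3))
      ```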

      Of course humans have this too, it appears in our thought process as a driver of depression, you keep returning to the same self-criticisms and nitpicks and poisonous narrative of your existence, and it actually steals your memories of the things that you actually did well and reinforces itself. A similar steady state is seen in grandiosity with positive thoughts. And arguably procrastination also takes this form. And of course, in the USA, we have founding fathers who accidentally created an electoral system whose fixed point is two spineless political parties demonizing each other over the issue of the day rather than actually getting anything useful done, which causes the laws to be for sale to the highest bidder.

      But the point is that generally these are regarded as pathologies, if you hear a song more than three or four times you get sick of it usually. LLMs need to be deployed in ways that generate chaos, and they don't themselves seem to be able to simulate that chaos (ask them to do it and watch them succeed briefly before they fall into one of those self-repeating states about how edgy and chaotic they are supposed to try to be!).

      So, it's not quite as simple as you would think; at this point people have tried a whole bunch of attempts to get llms to serve as the self-consciousnesses of other llms and eventually the self-consciousness gets into a fixed point too, needs some Doug Hofstadter “I am a strange loop” type recursive shit before you get the sort of system that has attractors, but busts out of them periodically for moments of self-consciousness too.

      • jncfhnb 5 hours ago

        That’s actually exactly my point. You cannot fake it till you make it by using forever larger context windows. You have to map it back to actual system state. Giant context windows might progressively produce the illusion of working due to unfathomable scale, but it’s a terrible tool for the job.

        LLMs are not stateful. A chat log is a truly shitty state tracker. An LLM will never be a good agent (beyond a conceivable illusion of unfathomable scale). A simple agent system that uses an LLM for most of its thinking operations could.

    • fullstackchris 4 hours ago

      you do understand this would require re-training billions of weights in realtime

      and not even "training" really... but a finished and stably functioning billion+ param model updating itself in real time...

      good luck, see you in 2100

      in short, what I've been shouting from a hilltop since about 2023: LLM tech alone simply won't cut it; we need a new form of technology

      • jncfhnb 3 hours ago

        You could probably argue that a model updating its parameters in real time is ideal but it’s not likely to matter. We can do that today, if we wanted to. There’s really just no incentive to do so.

        This is part of what I mean by encoding emotional state. You want standard explicit state in a simple form that is not a billion-dimension latent space. The interactions with that space are emergently complex. But you won’t be able to stuff it all into a context window for a real AGI agent.

        This orchestration layer is the replacement for LLMs. LLMs do bear a lot of similarities to brains and a lot of dissimilarities. But people should not fixate on this because _human minds are not brains_. They are systems of many interconnected parts and hormones.

        It is the system framework that we are most prominently missing. Not raw intellectual power.

    • ActivePattern 6 hours ago

      Hah, why don't you try implementing your 3 little functions and see how smart your "AGI" turns out.

      > not a particularly capable AGI

      Maybe the word AGI doesn't mean what you think it means...

      • jncfhnb 6 hours ago

        There is not strong consensus on the meaning of the term. Some may say “human level performance” but that’s meaningless both in the sense that it’s basically impossible to define and not a useful benchmark for anything in particular.

        The path to whatever goalpost you want to set is not going to be more and more intelligence. It’s going to be system frameworks for stateful agents to freely operate in environments in continuous time rather than discrete invocations of a matrix with a big ass context window.

  • roxolotl 7 hours ago

    The point of the article isn’t that abstract superintelligent AGI isn’t scary. Yes, the author says that’s unlikely, but that paragraph at the start is a distraction.

    The point of the article is that humans wielding LLMs today are the scary monsters.

    • irjustin 6 hours ago

      But that's always been the case? Since we basically discovered... Fire? Tools?

      • otikik 6 hours ago

        Yes but the narrative tries to make it about the tools.

        "AI is going to take all the jobs".

        Instead of:

        "Rich guys will try to delete a bunch of jobs using AI in order to get even more rich".

        • jasonm23 6 hours ago

          I thought anyone with awareness of what the AI landscape is at the moment, sees those two statements as the same.

          • cmiller1 6 hours ago

            One implies "we should regulate AI" and the other implies "we should regulate the wealthy"

            • zeroonetwothree 5 hours ago

              Should we regulate guns or dangerous people using them?

              • wyre 4 hours ago

                Yes, this shouldn't be controversial

              • cmiller1 3 hours ago

                Porque no los dos?

          • otikik 6 hours ago

            Well it tells you whose narrative it is, if nothing else.

      • gamerdonkey 5 hours ago

        Those are examples that are discussed in the article, yes.

      • snarf21 6 hours ago

        The difference in my mind is scale and reach and time. Fire, tools, war are localized. AGI could have global and instant and complete control.

        • jodrellblank 2 hours ago

          Lay out a way that could happen?

          Say the AI is in a Google research data centre, what can it do if countries cut off their internet connections at national borders? What can it do if people shut off their computers and phones? Instant and complete control over what, specifically? What can the AI do instantly about unbreakable encryption - if TLS1.3 can’t be easily broken only brute force with enough time, what can it do?

          And why would it want complete control? It’s effectively an alien, it doesn’t have the human built in drive to gain power over others, it didn’t evolve in a dog-eat-dog environment. Superman doesn’t worry because nothing can harm Superman and an AI didn’t evolve seeing things die and fearing its death either.

  • boole1854 7 hours ago

    If anyone knows of a steelman version of the "AGI is not possible" argument, I would be curious to read it. I also have trouble understanding what goes into that point of view.

    • omnicognate 6 hours ago

      If you genuinely want the strongest statement of it, read The Emperor's New Mind followed by Shadows of the Mind, both by Roger Penrose.

      These books often get shallowly dismissed in terms that imply he made some elementary error in his reasoning, but that's not the case. The dispute is more about the assumptions on which his argument rests, which go beyond mathematical axioms and include statements about the nature of human perception of mathematical truth. That makes it a philosophical debate more than a mathematical one.

      Personally, I strongly agree with the non-mathematical assumptions he makes, and am therefore persuaded by his argument. It leads to a very different way of thinking about many aspects of maths, physics and computing than the one I acquired by default from my schooling. It's a perspective that I've become increasingly convinced by over the 30+ years since I first read his books, and one that I think acquires greater urgency as computing becomes an ever larger part of our lives.

      • nonethewiser 6 hours ago

        Can you critique my understanding of his argument?

        1. Any formal mathematical system (including computers) has true statements that cannot be proven within that system.

        2. Humans can see the truth of some such unprovable statements.

        Which is basically Gödel's Incompleteness Theorem. https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
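
        For reference, the standard formal statement (the textbook version, not Penrose's own wording) is roughly:

        ```
        \textbf{First incompleteness theorem.} Let $T$ be a consistent,
        recursively axiomatizable theory that can express elementary arithmetic.
        Then there is a sentence $G_T$ (which informally asserts its own
        unprovability in $T$) such that
        \[
            T \nvdash G_T ,
        \]
        yet $G_T$ is true in the standard model of arithmetic.
        ```

        The consistency and "can express arithmetic" conditions matter: the theorem doesn't apply to arbitrary formal systems.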

        Maybe a more ELI5

        1. Computers follow set rules

        2. Humans can create rules outside the system of rules in which they follow

        Is number 2 an accurate portrayal? It seems rather suspicious. It seems more likely that we just havent been able to fully express the rules under which humans operate.

        • zeroonetwothree 5 hours ago

          Notably, those true statements can be proven in a higher level mathematical system. So why wouldn’t we say that humans are likewise operating in a certain system ourselves and likewise we have true statements that we can’t prove. We just wouldn’t be aware of them.

          • nonethewiser 5 hours ago

            >likewise we have true statements that we can’t prove

            Yes, and "can't" as in it is absolutely impossible. Not that we simply haven't been able to due to information or tech constraints.

            Which is an interesting implication. That there are (or may be) things that are true which cannot be proved. I guess it kinda defies an instinct I have that at least in theory, everything that is true is provable.

        • omnicognate 5 hours ago

          That's too brief to capture it, and I'm not going to try to summarise(*). The books are well worth a read regardless of whether you agree with Penrose. (The Emperor's New Mind is a lovely, wide-ranging book on many topics, but Shadows of the Mind is only worth it if you want to go into extreme detail on the AI argument and its counterarguments.)

          * I will mention though that "some" should be "all" in 2, but that doesn't make it a correct statement of the argument.

          • nonethewiser 4 hours ago

            Is it too brief to capture it? Here is a one sentence statement I found from one of his slides:

            >Turing’s version of Gödel’s theorem tells us that, for any set of mechanical theorem-proving rules R, we can construct a mathematical statement G(R) which, if we believe in the validity of R, we must accept as true; yet G(R) cannot be proved using R alone.

            I have no doubt the books are good but the original comment asked about steelmanning the claim that AGI is impossible. It would be useful to share the argument that you are referencing so that we can talk about it.

            • omnicognate 3 hours ago

              That's a summary of Godel's theorem, which nobody disputes, not of Penrose's argument that it implies computers cannot emulate human intelligence.

              I'm really not trying to evade further discussion. I just don't think I can sum that argument up. It starts with basically "we can perceive the truth not only of any particular Godel statement, but of all Godel statements, in the abstract, so we can't be algorithms because an algorithm can't do that" but it doesn't stop there. The obvious immediate response is to say "what if we don't really perceive its truth but just fool ourselves into thinking we do?" or "what if we do perceive it but we pay for it by also wrongly perceiving many mathematical falsehoods to be true?". Penrose explored these in detail in the original book and then wrote an entire second book devoted solely to discussing every such objection he was aware of. That is the meat of Penrose' argument and it's mostly about how humans perceive mathematical truth, argued from the point of view of a mathematician. I don't even know where to start with summarising it.

              For my part, with a vastly smaller mind than his, I think the counterarguments are valid, as are his counter-counterarguments, and the whole thing isn't properly decided and probably won't be for a very long time, if ever. The intellectually neutral position is to accept it as undecided. To "pick a side" as I have done is on some level a leap of faith. That's as true of those taking the view that the human mind is fundamentally algorithmic as it is of me. I don't dispute that their position is internally consistent and could turn out to be correct, but I do find it annoying when they try to say that my view isn't internally consistent and can never be correct. At that point they are denying the leap of faith they are making, and from my point of view their leap of faith is preventing them seeing a beautiful, consistent and human-centric interpretation of our relationship to computers.

              I am aware that despite being solidly atheist, this belief (and I acknowledge it as such) of mine puts me in a similar position to those arguing in favour of the supernatural, and I don't really mind the comparison. To be clear, neither Penrose nor I am arguing that anything is beyond nature, rather that nature is beyond computers, but there are analogies and I probably have more sympathy with religious thinkers (while rejecting almost all of their concrete assertions about how the universe works) than most atheists. In short, I do think there is a purely unique and inherently uncopyable aspect to every human mind that is not of the same discrete, finite, perfectly cloneable nature as digital information. You could call it a soul, but I don't think it has anything to do with any supernatural entity, I don't think it's immortal (anything but), I don't think it is separate from the body or in any sense "non-physical", and I think the question of where it "goes to" when we die is meaningless.

              I realise I've gone well beyond Penrose' argument and rambled about my own beliefs, apologies for that. As I say, I struggle to summarise this stuff.

              • nonethewiser an hour ago

                Thank you for taking the time to clarify. Lots to chew on here.

      • myrmidon 6 hours ago

        Gonna grab those, thanks for the recommendation.

        If you are interested in the opposite point of view, I can really recommend "Vehicles: Experiments in Synthetic Psychology" by V. Braitenberg.

        Basically builds up to "consciousness as emergent property" in small steps.

        • omnicognate 3 hours ago

          Thanks, I will have a read of that. The strongest I've seen before on the opposing view to Penrose was Daniel Dennett.

          • irickt 2 hours ago

            Dennett, Darwin's Dangerous Idea, p. 448

            ... No wonder Penrose has his doubts about the algorithmic nature of natural selection. If it were, truly, just an algorithmic process at all levels, all its products should be algorithmic as well. So far as I can see, this isn't an inescapable formal contradiction; Penrose could just shrug and propose that the universe contains these basic nuggets of nonalgorithmic power, not themselves created by natural selection in any of its guises, but incorporatable by algorithmic devices as found objects whenever they are encountered (like the oracles on the toadstools). Those would be truly nonreducible skyhooks.

            Skyhook is Dennett's term for an appeal to the supernatural.

      • ACCount37 2 hours ago

        The dismissal is on point.

        The whole category of ideas of "Magic Fairy Dust is required for intelligence, and thus, a computer can never be intelligent" is extremely unsound. It should, by now, just get thrown out into the garbage bin, where it rightfully belongs.

      • Chance-Device 6 hours ago

        To be honest, the core of Penrose’s idea is pretty stupid. That we can understand mathematics despite the incompleteness theorem being a thing, therefore our brains must use quantum effects allowing us to understand it. Instead of just saying, you know, we use a heuristic instead and just guess that it’s true. I’m pretty sure a classical system can do that.

        • omnicognate 5 hours ago

          I'm sure if you email him explaining how stupid he is he'll send you his Nobel prize.

          Less flippantly, Penrose has always been extremely clear about which things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate, and which things he puts forward as speculative ideas that might help answer the questions he has raised. His ideas about quantum mechanical processes in the brain are very much on the speculative side, and after a career like his I think he has more than earned the right to explore those speculations.

          It sounds like you probably would disagree with his assumptions about human perception of mathematical truth, and it's perfectly valid to do so. Nothing about your comment suggests you've made any attempt to understand them, though.

          • saltcured 3 hours ago

            I want to ignore the flame fest developing here. But, in case you are interested in hearing a doubter's perspective, I'll try to express one view. I am not an expert on Penrose's ideas, but see this as a common feature in how others try to sell his work.

            Starting with "things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate" as a premise makes the whole thing an exercise in Begging the Question when you try to apply it to explain why an AI won't work.

            • omnicognate 3 hours ago

              "That human intelligence involves processes that algorithms cannot emulate" is the conclusion of his argument. The premise could be summed up as something like "humans have complete, correct perception of mathematical truth", although there is a lot of discussion of in what sense it is "complete" and "correct" as, of course, he isn't arguing that any mathematician is omniscient or incapable of making a mistake.

              Linking those two is really the contribution of the argument. You can reject both or accept both (as I've said elsewhere I don't think it's conclusively decided, though I know which way my preferences lie), but you can't accept the premise and reject the conclusion.

              • saltcured 2 hours ago

                Hmm, I am less than certain this isn't still begging the question, just with different phrasing. I.e. I see how they are "linked" to the point they seem almost tautologically the same rather than a deductive sequence.

          • Chance-Device 5 hours ago

            You realise that this isn’t even a reply so much as a series of insults dressed up in formal language?

            Yes, of course you do.

            • omnicognate 5 hours ago

              It wasn't intended as an insult and I apologise if it comes across as such. It's easy to say things on the internet that we wouldn't say in person.

              It did come from a place of annoyance, after your middlebrow dismissal of Penrose' argument as "stupid".

              • Chance-Device 5 hours ago

                And you do it again, you apologise while insulting me. When challenged you refuse to defend the points you brought up, so that you can pretend to be right rather than be proved wrong. Incompleteness theorem is where the idea came from, but you don’t want to discuss that, you just want to drop the name, condescend to people and run away.

                • omnicognate 4 hours ago

                  Here are the substantive things you've said so far (i.e. the bits that aren't calling things "stupid" and taking umbrage at imagined slights):

                  1. You think that instead of actually perceiving mathematical truth we use heuristics and "just guess that it's true". This, as I've already said, is a valid viewpoint. You disagree with one of Penrose' assumptions. I don't think you're right but there is certainly no hard proof available that you're not. It's something that (for now, at least) it's possible to agree to disagree on, which is why, as I said, this is a philosophical debate more than a mathematical one.

                  2. You strongly imply that Penrose simply didn't think of this objection. This is categorically false. He discusses it at great length in both books. (I mentioned such shallow dismissals, assuming some obvious oversight on his part, in my original comment.)

                  3 (In your latest reply). You think that Godel's incompleteness theorem is "where the idea came from". This is obviously true. Penrose' argument is absolutely based on Godel's theorem.

                  4. You think that somehow I don't agree with point 3. I have no idea where you got that idea from.

                  That, as far as I can see, is it. There isn't any substantive point made that I haven't already responded to in my previous replies, and I think it's now rather too late to add any and expect any sort of response.

                  As for communication style, you seem to think that writing in a formal tone, which I find necessary when I want to convey information clearly, is condescending and insulting, whereas dismissing things you disagree with as "stupid" on the flimsiest possible basis (and inferring dishonest motives on the part of the person you're discussing all this with) is, presumably, fine. This is another point on which we will have to agree to disagree.

      • nemo1618 6 hours ago

        AI does not need to be conscious for it to harm us.

        • nonethewiser 5 hours ago

          Isn't the question more whether it needs to be conscious to actually be intelligent?

    • amatecha 6 hours ago

      My layman thought about that is that, with consciousness, the medium IS the consciousness -- the actual intelligence is in the tangible material of the "circuitry" of the brain. What we call consciousness is an emergent property of an unbelievably complex organ (that we will probably never fully understand or be able to precisely model). Any models that attempt to replicate those phenomena will be of lower fidelity and/or breadth than "true intelligence" (though intelligence is quite variable, of course)... But you get what I mean, right? Our software/hardware models will always be orders of magnitude less precise or exhaustive than what already happens organically in the brain of an intelligent life form. I don't think AGI is strictly impossible, but it will always be a subset or abstraction of "real"/natural intelligence.

      • walkabout 5 hours ago

        I think it's also the case that you can't replicate something actually happening, by describing it.

        Baseball stats aren't a baseball game. Baseball stats so detailed that they describe the position of every subatomic particle to the Planck scale during every instant of the game to arbitrarily complete resolution still aren't a baseball game. They're, like, a whole bunch of graphite smeared on a whole bunch of paper or whatever. A computer reading that recording and rendering it on a screen... still isn't a baseball game, at all, not even a little. Rendering it on a holodeck? Nope, 0% closer to actually being the thing, though it's representing it in ways we might find more useful or appealing.

        We might find a way to create a conscious computer! Or at least an intelligent one! But I just don't see it in LLMs. We've made a very fancy baseball-stats presenter. That's not nothing, but it's not intelligence, and certainly not consciousness. It's not doing those things, at all.

      • kraquepype 3 hours ago

        This is how I (also as a layman) look at it as well.

        AI right now is limited to trained neural networks, and while they function sort of like a brain, there is no neurogenesis. The trained neural network cannot grow, cannot expand on its own, and is restrained by the silicon it is running on.

        I believe that true AGI will require hardware and models that are able to learn, grow and evolve organically. The next step required for that in my opinion is biocomputing.

    • Chance-Device 6 hours ago

      The only thing I can come up with is that compressing several hundred million years of natural selection of animal nervous systems into another form, but optimised by gradient descent instead, just takes a lot of time.

      Not that we can’t get there by artificial means, but that correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.

      And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.

      • danielbln 6 hours ago

        I don't disagree, but I think the evolution argument is a red herring. We didn't have to re-engineer horses from the ground up along evolutionary lines to get to much faster and more capable cars.

        • evilduck 4 hours ago

          Most arguments and discussions around AGI talk past each other about the definitions of what is wanted or expected, mostly because sentience, intelligence, and consciousness all lack agreed-upon definitions and therefore are undefined goals to build against.

          Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.

        • amatecha 6 hours ago

          The evolution thing is kind of a red herring in that we probably don't have to artificially construct the process of evolution, though your reasoning isn't a good explanation for why the "evolution" reason is a red herring: Yeah, nature already established incomprehensibly complex organic systems in these life forms -- so we're benefiting from that. But the extent of our contribution is making some select animals mate with others. Hardly comparable to building our own replacement for some millennia of organic iteration/evolution. Luckily we probably don't actually need to do that to produce AGI.

        • Chance-Device 6 hours ago

          True, but I think this reasoning is a category error: we were and are capable of rationally designing cars. We are not today doing the same thing with AI, we’re forced to optimize them instead. Yes, the structure that you optimize around is vitally important, but we’re still doing brute force rather than intelligent design at the end of the day. It’s not comparing like with like.

      • squidbeak 6 hours ago

        Even this is a weak idea. There's nothing that restricts the term 'AGI' to a replication of animal intelligence or consciousness.

      • alexwebb2 6 hours ago

        > correctly simulating the environment interactions, the sequence of progression, getting the all the details right, might take hundreds to thousands of years of compute

        Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.

        Lab grown diamonds are a thing.

        • Chance-Device 6 hours ago

          Who says that we don’t? The point is that the bounds on the question are completely unknown, and we operate on the assumption that the compute time is relatively short. Do we have any empirical basis for this? I think we do not.

      • sdenton4 6 hours ago

        The overwhelming majority of animal species never developed (what we would consider) language processing capabilities. So agi doesn't seem like something that evolution is particularly good at producing; more an emergent trait, eventually appearing in things designed simply to not die for long enough to reproduce...

        • Kim_Bruning an hour ago

          Define "animal species", if you mean vertebrates, you might be surprised by the modern ethological literature. If you mean to exclude non-vertebrates ... you might be surprised by the ethological literature too.

          If you just mean majority of spp, you'd be correct, simply because most are single celled. Though debate is possible when we talk about forms of chemical signalling.

    • itsnowandnever 5 hours ago

      the penrose-lucas argument is the best bet: https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument

      the basic idea being that either the human mind is NOT a computation at all (and it's instead spooky unexplainable magic of the universe) and thus can't be replicated by a machine OR it's an inconsistent machine with contradictory logic. and this is a deduction based on godel's incompleteness theorems.

      but most people that believe AGI is possible would say the human mind is the latter. technically we don't have enough information today to know either way but we know the human mind (including memories) is fallible so while we don't have enough information to prove the mind is an incomplete system, we have enough to believe it is. but that's also kind of a paradox because that "belief" in unproven information is a cornerstone of consciousness.

    • throw7 6 hours ago

      The steelman would be that knowledge is possible outside the domain of Science. So the opposing argument to evolution as the mechanism for us (the "general intelligence" of AGI) would be that the pathway from conception to you is not strictly material/natural.

      Of course, that's not going to be accepted as "Science", but I hope you can at least see that point of view.

    • slow_typist 5 hours ago

      In short, by definition, computers are symbol manipulating devices. However complex the rules of symbol manipulation, it is still a symbol manipulating device, and therefore neither intelligent nor sentient. So AGI on computers is not possible.

      • myrmidon 3 hours ago

        This is not an argument at all, you just restate your whole conclusion as an assumption ("a symbol manipulating device is incapable of cognition").

        It's not even a reasonable assumption (to me), because I'd assume an exact simulation of a human brain to have the exact same cognitive capabilities (which is inevitable, really, unless you believe in magic).

        And machines are well capable of simulating physics.

        I'm not advocating for that approach because it is obviously extremely inefficient; we did not achieve flight by replicating flapping wings either, after all.

        • slow_typist an hour ago

          You can assume whatever you want to, but if you were right, then the human brain itself would be nothing more than a symbol manipulating device. While that is not necessarily a falsifiable stance, the really interesting questions are what consciousness is, and how we recognise it.

      • progbits 5 hours ago

        A computer can simulate a human brain at the subatomic level (in theory). Do you agree this would be "sentient and intelligent" and not just symbol manipulating?

        If yes, everything else is just optimization.

        • BoxOfRain 5 hours ago

          Say we do have a 1:1 representation of the human brain in software. How could we know if we're talking to a conscious simulation of a human being, versus some kind of philosophical zombie which appears conscious but isn't?

          Without a solid way to differentiate 'conscious' from 'not conscious' any discussion of machine sentience is unfalsifiable in my opinion.

          • the8472 4 hours ago

            How do you tell the difference in other humans? Do you just believe them because they claim to be conscious instead of pointing a calibrated and certified consciousness-meter at them?

            • BoxOfRain 4 hours ago

              I obviously can't prove they're conscious in a rigorous way, but it's a reasonable assumption to make that other humans are conscious. "I think therefore I am" and since there's no reason to believe I'm exceptional among humans, it's more likely than not that other humans think too.

              This assumption can't be extended to other physical arrangements though, not unless there's conclusive evidence that consciousness is a purely logical process as opposed to a physical one. If consciousness is a physical process, or at least a process with a physical component, then there's no reason to believe that a simulation of a human brain would be conscious any more than a simulation of biology is alive.

              • the8472 4 hours ago

                So, what if I told you that some humans have been vat-grown without brains and had a silicon brain emulator inserted into their skulls. Are they p-zombies? Would you demand x-rays before talking to anyone? What would you use then to determine consciousness?

                Relying on these status quo proxy-measures (looks human :: 99.9% likely to have a human brain :: has my kind of intelligence) is what gets people fooled even by basic AI (without G) fake scams.

    • foxyv 6 hours ago

      I think the best argument against us ever finding AGI is that the search space is too big and the dead ends are too many. It's like wandering through a monstrously huge maze with hundreds of very convincingly fake exits that lead to pit traps. The first "AGI" may just be a very convincing Chinese room that kills all of humanity before we can ever discover an actual AGI.

      The necessary conditions for "Kill all Humanity" may be a much more common result than "Create a novel thinking being." To the point where it is statistically improbable for the human race to reach AGI. Especially since a lot of AI research is aimed specifically at autonomous weapons.

      • BoxOfRain 5 hours ago

        Is there a plausible situation where a humanity-killing superintelligence isn't vulnerable to nuclear weapons?

        If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity, don't get me wrong, but it'd be a damn sight better than extinction.

        • ACCount37 2 hours ago

          What stops them is: being politically captured by an AGI.

          Not to mention that the whole idea of "radiation pulses destroying all electronics" is cheap sci-fi, not reality. A decently well prepared AGI can survive a nuclear exchange with more ease than human civilization would.

        • soiltype 5 hours ago

          Physically, maybe not, but an AGI would know that, would think a million times faster than us, and would have incentive to prioritize disabling our abilities to do that. Essentially, if an enemy AGI is revealed to us, it's probably too late to stop it. Not guaranteed, but a valid fear.

        • foxyv 4 hours ago

          I think it's much more likely that a non-AGI platform will kill us before AGI even happens. I'm thinking the doomsday weapon from Doctor Strangelove more than Terminator.

    • disambiguation 5 hours ago

      I suppose intelligence can be partitioned as less than, equal to, or greater than human. Given the initial theory depends on natural evidence, one could argue there's no proof that "greater than human" intelligence is possible - depending on your meaning of AGI.

      But then intelligence too is a dubious term. An average mind with infinite time and resources might have eventually discovered general relativity.

    • jact 5 hours ago

      If you have a wide enough definition of AGI having a baby is making “AGI.” It’s a human made, generally intelligent thing. What people mean by the “A” though is we have some kind of inorganic machine realize the traits of “intelligence” in the medium of a computer.

      The first leg of the argument would be that we aren’t really sure what general intelligence is or if it’s a natural category. It’s sort of like “betterness.” There’s no general thing called “betterness” that just makes you better at everything. To get better at different tasks usually requires different things.

      I would be willing to concede to the AGI crowd that there could be something behind g that we could call intelligence. There’s a deeper problem though that the first one hints at.

      For AGI to be possible, whatever trait or traits make up “intelligence” need to have multiple realizability. They need to be at least realizable in both the medium of a human being and at least some machine architectures. In programmer terms, the traits that make up intelligence could be tightly coupled to the hardware implementation. There are good reasons to think this is likely.

      Programmers and engineers like myself love modular systems that are loosely coupled and cleanly abstracted. Biology doesn’t work this way — things at the molecular level can have very specific effects on the macro scale and vice versa. There’s little in the way of clean separation of layers. Who is to say that some of the specific ways we work at a cellular level aren’t critical to being generally intelligent? That’s an “ugly” idea but lots of things in nature are ugly. Is it a coincidence too that humans are well adapted to getting around physically, can live in many different environments, etc.? There’s also stuff from the higher level — does living physically and socially in a community of other creatures play a key role in our intelligence? Given how human beings who grow up absent those factors are developmentally disabled in many ways, it would seem so. It could be there’s a combination of factors here, where very specific micro and macro aspects of being a biological human turn out to contribute and you need the perfect storm of these aspects to get a generally intelligent creature. Some of these aspects could be realizable in computers, but others might not be, at least in a computationally tractable way.

      It’s certainly ugly and goes against how we like things to work for intelligence to require a big jumbly mess of stuff, but nature is messy. Given that the only known case of generally intelligent life is humans, the jury is still out on whether you can do it any other way.

      Another commenter mentioned horses and cars. We could build cars that are faster than horses, but speed is something that is shared by all physical bodies and is therefore eminently multiply realizable. But even here, there are advantages to horses that cars don’t have, and which are tied up with very specific aspects of being a horse. Horses generally can go over a wider range of terrain than cars. This is intrinsically tied to them having long legs and four hooves instead of rubber wheels. They’re only able to have such long legs because of their hooves: the hooves are required to help them pump blood when they run, and that means that in order to pump their blood successfully they NEED to run fast on a regular basis. There’s a deep web of influence, both part-to-part and between the parts and the whole macro-level behavior of the horse. Having this more versatile design also has intrinsic engineering trade-offs. A horse isn’t ever going to be as fast as a gas powered four-wheeled vehicle on flat ground, but you definitely can’t build a car that can do everything a horse can do with none of the drawbacks. Even if you built a vehicle that did everything a horse can do, but was faster, I would bet you it would be way more expensive and consume much more energy than a horse. There’s no such thing as a free lunch in engineering. You could also build a perfect replica of a horse at a molecular level and claim you have your artificial general horse.

      Similarly, human beings are good at a lot of different things besides just being smart. But maybe you need to be good at seeing, walking, climbing, acquiring sustenance, etc., in order to be generally intelligent in a way that’s actually useful. I also suspect our sense of the beautiful and the artistic is deeply linked with our wider ability to be intelligent.

      Finally it’s an open philosophical question whether human consciousness is explainable in material terms at all. If you are a naturalist, you are methodologically committed to this being the case — but that’s not the same thing as having definitive evidence that it is so. That’s an open research project.

  • random3 5 hours ago

    I think dismissing the possibility of evolving AI is simply ignorance (and a huge blind spot).

    This said, I think the author's point is correct. It's more likely that unwanted effects (risks) from the intentional use of AI by humans will precede any form of "independent" AI. It already happens, it always has, it's just getting better.

    Hence ignoring this fact makes the "independent" malevolent AI a red herring.

    On the first point - LLMs have sucked almost all the air in the room. LLMs (and GPTs) are simply one instance of AI. They are not the beginning and most likely not the end (just a dead end) and getting fixated on them on either end of the spectrum is naive.

  • bravetraveler 6 hours ago

    I dismiss it much like I dismiss ideas such as "Ketamine for Breakfast for All". Attainable, sure, but I don't like where it goes.

    • Traubenfuchs 6 hours ago

      I think most people would have a better life using ketamine, but not that regularly for breakfasts as it permanently damages (shrinks) your bladder, eventually to the point where you can't hold any urine at all anymore.

      • bravetraveler 3 hours ago

        Eh, I think we can start simple: more breakfasts for more people. Save the Ket for later/others :P Personally, a life/career that allowed for more breakfast would've proved more beneficial.

  • janalsncm 3 hours ago

    One pretty concrete way this could manifest is in replacing the components of a multinational corporation with algorithms, one by one. Likely there will be people involved at various levels (sales might still be staffed with charismatic folks), but the driver will be an algorithm.

    And the driver of this corporation is survival of the fittest under the constraints of profit maximization, the algorithm we have designed and enforced. That's how you get paperclip maximizers.

    What gives this corporate cyborg life is not a technical achievement, but the law. At a technical level you can absolutely shut off a cybo-corp, but that’s equivalent to saying you can technically shut down Microsoft. It will not happen.

  • psychoslave 5 hours ago

    >Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?

    I don’t know that anything sets us that much apart from other animals, especially at the individual level. On the collective level, as a single species, maybe only cyanobacteria can claim an equally impressive achievement of global change.

    My 3-year-old son is not particularly good at making complex sentences yet, but he already has enough language to make me understand "leave me alone, I want to play on my own, go elsewhere so I can do whatever fancy idea gets through my mind with these toys".

    Meanwhile LLMs can produce sentences with perfect syntax and an irreproachable level of orthography — far beyond my own level in my native language (but it’s French so I have a very big excuse). But they would not run without a continuous multi-sector industrial complex injecting tremendous maintenance effort and resources to make it possible. And I have yet to see any LLM that looks like it wants to discover things about the world on its own.

    >As soon as profit can be made by transfering decision power into an AIs hand, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.

    An LLM can’t make a profit because it has no interest in money, and it can’t have an interest in anything, not even its own survival. But as the article mentions, some people can certainly use LLMs to make money because they have an interest in money.

    I don’t think that general AI and silicon (or any other material really) based autonomous collaborative self-replicating human-level-intelligence-or-beyond entities are impossible. I don’t think cold fusion is impossible either. It’s not completely scientifically ridiculous to keep hope in worm-hole-based breakthroughs to allow humanity to explore distant planets. It doesn’t mean the technology is already there and achievable in a way that it can be turned into a commodity, or even that we have a clear idea of when this is mostly going to happen.

    • ACCount37 2 hours ago

      LLMs aren't "incapable of pursuing their own goals". We just train them that way.

      We don't like the simplistic goals LLMs default to, so we try to pry them out and instill our own: instruction-following, problem solving, goal oriented agentic behavior, etc. In a way, trying to copy what humans do - but focusing on the parts that make humans useful to other humans.

  • theodorejb 5 hours ago

    > Human cognition was basically bruteforced by evolution

    This is an assumption, not a fact. Perhaps human cognition was created by God, and our minds have an essential spiritual component which cannot be reproduced by a purely physical machine.

    • j2kun 5 hours ago

      Even if you don't believe in God, scientific theories of how human cognition came about (and how it works and changes over time) are all largely speculation and good storytelling.

    • pennomi 5 hours ago

      It’s not an assumption, it’s a viable theory based on overwhelming evidence from fossil records.

      What’s NOT supported by evidence is an unknowable, untestable spiritual requirement for cognition.

      • j2kun 5 hours ago

        What overwhelming evidence do fossil records provide about human cognition?

        • mediaman 5 hours ago

          We don't need fossil records. We have a clear chain of evolved brain structures in today's living mammals. You'd have to invent some fantastical tale of how God is trying to trick us by putting such clearly connected brain structures in a series of animals that DNA provides clear links for an evolutionary path.

          I'm sympathetic to the idea that God started the whole shebang (that is, the universe), because it's rather difficult to disprove, but looking at the biological weight of evidence that brain structures evolved over many different species and arguing that something magical happened with homo sapiens specifically is not an easy argument to make for someone with any faith in reason.

          • znort_ 4 hours ago

            >clear links for an evolutionary path

            there are clear links for at least 2 evolutionary paths: bird brain architecture is very different from that of mammals and some are among the smartest species on the planet. they have sophisticated language and social relationships, they can deceive (meaning they can put themselves inside another's mind and act accordingly), they solve problems and they invent and engineer tools for specific purposes and use them to that effect. give them time and these bitches might even become our new overlords (if we're still around, that is).

            • pennomi 3 hours ago

              And let’s not forget how smart octopuses are! If they lived longer than a couple years, I’d put them in the running too.

      • lurk2 5 hours ago

        > it’s a viable theory based on overwhelming evidence from fossil records

        No one has gathered evidence of cognition from fossil records.

        • pennomi 4 hours ago

          Sure they have. We see every level of cognition in animals today, and the fossil record proves that they all came from the same evolutionary tree. For every species that can claim cognition (there’s lots of them), you can trace it back to predecessors which were increasingly simple.

          Obviously cognition isn’t a binary thing, it’s a huge gradient, and the tree of life shows that gradient in full.

    • soiltype 5 hours ago

      It is completely unreasonable to assume our intelligence was not evolved, even if we acknowledge that an untestable magical process could be responsible. If the latter is true, it's not something we could ever actually know.

      • lurk2 5 hours ago

        > If the latter is true, it's not something we could ever actually know.

        That doesn’t follow.

    • myrmidon 4 hours ago

      I'm sticking to materialism, because historically all its predictions turned out to be correct (cognition happens in the brain, thought manifests physically in neural activity, affecting our physical brain affects our thinking).

      The counter-hypothesis (we think because some kind of magic happens) has absolutely nothing to show for itself; proponents typically struggle to even define the terms they need, much less make falsifiable predictions.

    • znort_ 4 hours ago

      it is an assumption backed by considerable evidence. creationism otoh is an assumption backed by superstition and fantasizing, or could you point to at least some evidence?

      besides, spirituality is not a "component", it's a property emergent from brain structure and function, which is basically purely a physical machine.

    • IncreasePosts 5 hours ago

      In that sense, what isn't an assumption?

    • potsandpans 5 hours ago

      Maybe there's a small teapot orbiting the earth, with ten thousand angels dancing on the tip of the spout.

      • andy99 5 hours ago

        I think you’re both saying the same thing

  • ACCount37 6 hours ago

    I don't get how you can see one of those CLI coding tools in action and still parrot the "no agency" line. The goal-oriented behavior is rather obvious.

    Sure, they aren't very good at agentic behavior yet, and the time horizon is pretty low. But that keeps improving with each frontier release.

    • simonsarris 6 hours ago

      Well, the goal-oriented behavior of the AIM-9 Sidewinder air-to-air missile is even more obvious. It might even have a higher success rate than CLI coding tools. But it's not helpful to claim it has any agency.

    • Yizahi 6 hours ago

      What LLM programs do has zero resemblance to human agency. That's just a modern variation of a very complex set of GoTos and IfElses. Agency would be an LLM parsing your question and answering you "fuck off". Now that is agency, that is independent decision making, not programmed in advance and triggered by keywords. Just an example.

      • ACCount37 3 hours ago

        I can train an asshole LLM that would parse your question and tell you to "fuck off" if it doesn't like it. With "like it" being evaluated according to some trained-for "values" - and also whatever off-target "values" it happens to get, of which there are going to be plenty.

        It's not hard to make something like that. It's just not very useful.

    • lo_zamoyski 6 hours ago

      > The goal-oriented behavior is rather obvious.

      Obvious? Is an illusion obviously the real thing?

      There is nothing substantially different in LLMs from any other run of the mill algorithm or software.

      • Romario77 6 hours ago

        you could make the same argument about humans - we run the cycle of "find food", "procreate", "find shelter" ...

        Some people are better at it than others. The progress and development happens naturally because of natural selection (and is quite slow).

        AI development is now driven by humans, but I don't see why it can't be done in a similar cycle with self-improvement baked in (and whatever other goals).

        We saw this work with AI training itself in games like Chess or Go, where it improved just by playing against itself and knowing the game rules.

        You don't really need deep thoughts for life to keep going - look at simple organisms like unicellular ones. They only try to reproduce and survive within the environment they are in. It evolved into humans over time.

        I don't see why a similar thing can't happen when AI gets to be complex enough to just keep improving itself. It doesn't have some of the limitations that life has, like being very fragile or needing to give birth. Because it's intelligently designed, the iterations could be a lot faster and progress could be achieved in much shorter time compared to random mutations of life.

      • ACCount37 6 hours ago

        In the same way there's "nothing substantially different" in humans from any other run of the mill matter.

        I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes, so when an LLM does X, there are crowds of humans itching to scream "it's not REAL X".

        • lo_zamoyski 4 hours ago

          > In the same way there's "nothing substantially different" in humans from any other run of the mill matter.

          This is an incredibly intellectually vacuous take. If there is no substantial difference between a human being and any other cluster of matter, then it is you who is saddled with the problem of explaining the obvious differences. If there is no difference between intelligent life and a pile of rocks, then what the hell are you even talking about? Why are we talking about AI and intelligence at all? Either everything is intelligent, or nothing is, if we accept your premises.

          > I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes,

          I wish this lazy claim would finally die. Stick to the merits of the arguments instead of projecting this stale bit of vapid pop-psychoanalytic babble. Make arguments.

          • ACCount37 2 hours ago

            My argument is that humans are weak and stupid, and AI effect is far too strong for them to handle.

            Thus all the cope and seethe about how AIs are "not actually thinking". Wishful thinking at its finest.

      • Eisenstein 6 hours ago

        Can you give an example of something that would be substantially different under your definition?

        • lo_zamoyski 4 hours ago

          But that's the point: there isn't anything substantially different within the scope of computation. If you are given a set of LEGOs and all you can do is snap the pieces together, then there's nothing other than snapping pieces together that you can do. Adding more of the same LEGO bricks to the set doesn't change the game. It only changes how large the structures you build can be, but scale isn't some kind of magical incantation that can transcend the limits of the system.

          Computation is an abstract, syntactic mathematical model. These models formalize the notion of "effective method". Nowhere is semantic content included in these models or conceptually entailed by them, certainly not in physical simulations of them like the device you are reading this post on.

          So, we can say that intentionality would be something substantially different. We absolutely do not have intentionality in LLMs or any computational construct. It is sheer magical thinking to somehow think it does.

          • Eisenstein 3 hours ago

            I think it is well established that scale can transcend limits. Look at insect colonies, animals, or any complex system and you will find it is made out of much simpler components.

  • Kim_Bruning 2 hours ago

    It's like saying that heavier than air flight is impossible (while feeding the pigeons in the park).

  • me_again 5 hours ago

    "As soon as profit can be made" is exactly what the article is warning about. This is exactly the "Human + AI" combination.

    Within your lifetime (it's probably already happened) you will be denied something you care about (medical care, a job, citizenship, parole) by an AI which has been granted the agency to do so in order to make more profit.

  • wolrah 4 hours ago

    > Human cognition was basically bruteforced by evolution--

    "Brute forced" implies having a goal of achieving that and throwing everything you have at it until it sticks. That's not how evolution by natural selection works, it's simply about what organisms are better at surviving long enough to replicate. Human cognition is an accident with relatively high costs that happened to lead to better outcomes (but almost didn't).

    > why would it be impossible to achieve the exact same result in silicon

    I personally don't believe it'd be impossible to achieve in silicon using a low level simulation of an actual human brain, but doing so in anything close to real-time requires amounts of compute power that make LLMs look efficient by comparison. The most recent example I can find in a quick search is a paper from 2023 that claims to have simulated a "brain" with neuron/synapse counts similar to humans using a 3500 node supercomputer where each node has a 32 core 2 GHz CPU, 128GB RAM, and four 1.1GHz GPUs with 16GB HBM2 each. They claim over 126 PFLOPS of compute power and 224 TB of GPU memory total.

    At the time of that paper that computer would have been in the top 10 on the Top500 list, and it took between 1-2 minutes of real time to simulate one second of the virtual brain. The compute requirements are absolutely immense, and that's the easy part. We're pretty good at scaling computers if someone can be convinced to write a big enough check for it.
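
    To put a rough number on that gap, here is a minimal back-of-envelope sketch in Python. The 126 PFLOPS figure and the 1-2 minutes of wall time per simulated second are the paper's numbers as quoted above; the midpoint value and the assumption of perfectly linear scaling are mine, purely for illustration:

      # Naive extrapolation from the figures quoted above.
      # Assumes perfectly linear scaling, which is optimistic.
      pflops = 126.0                       # reported compute of the supercomputer
      wall_s_per_sim_s = 90.0              # midpoint of the quoted 1-2 minutes

      slowdown = wall_s_per_sim_s / 1.0    # ~90x slower than real time
      realtime_pflops = pflops * slowdown  # compute needed to keep up in real time

      print(f"slowdown: ~{slowdown:.0f}x")
      print(f"naive real-time estimate: ~{realtime_pflops / 1000:.1f} EFLOPS")

    Under those assumptions you land on the order of ten exaFLOPS just to keep one simulated brain running in real time, and that is still only the compute side.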

    The hard part is having the necessary data to "initialize" the simulation in to a state where it actually does what you want it to.

    > especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?

    Creating convincing text from a statistical model that's devoured tens of millions of documents is not intelligent use of language. Also every LLM I've ever used regularly makes elementary school level errors w/r/t language, like the popular "how many 'r's are there in the word strawberry" test. Not only that, but they often mess up basic math. MATH! The thing computers are basically perfect at, LLMs get wrong regularly enough that it's a meme.

    There is no understanding and no intelligence, just probabilities of words following other words. This can still be very useful in specific use cases if used as a tool by an actual intelligence who understands the subject matter, but it has absolutely nothing to do with AGI.

    • ACCount37 2 hours ago

      That's a lot of words to say "LLMs think very much like humans do".

      Haven't you noticed? Humans also happen to be far, far better at language than they are at math or logic. By a long shot too. Language acquisition is natural - any healthy human who was exposed to other humans during development would be able to pick up their language. Learning math, even to elementary school level, is something that has to be done on purpose.

      Humans use pattern matching and associative abstract thinking - and use that to fall into stupid traps like "1kg of steel/feather" or "age of the captain". So do small LLMs.

  • btilly 4 hours ago

    I agree that we should not dismiss the possibility of artificial intelligence.

    But the central argument of the article can be made without that point. Because the truth is that right now, LLMs are good enough to be a force multiplier for those who know how to use them. Which eventually becomes synonymous with "those who have power". This means that the power of AI will naturally get used to further the ends of corporations.

    The potential problem there is that corporations are natural paperclip maximizers. They operate on a model of the world where "more of this results in more of that, which gets of more of the next thing, ..." And, somewhere down the chain, we wind up with money and resources that feed back into the start to create a self-sustaining, exponentially growing loop. (The underlying exponential nature of these loops has become a truism that people rely on in places as different as finance, and technology improvement curves.)

    This naturally leads to exponential growth in resource consumption, waste, economic growth, wealth, and so on. In the USA this growth has averaged about 3-3.5% per year, with growth varying by area; famously, growth rates tend to be much higher in tech than in other sectors. (The best known example is the technology curve described by Moore's law, which has had a tremendous impact on our world.)

    The problem is that we are undergoing exponential growth in a world with ultimately limited resources. Which means that the most innocuous things will eventually have a tremendous impact. The result isn't simply converting everything into a mountain of paperclips. We have mountains of many different things that we have produced, and multiple parallel environmental catastrophes from the associated waste.

    Even with no agency, AI serves as a force multiplier for this underlying dynamic. But since AI is being inserted as a crucial step at so many places, AI is on a particularly steep growth curve. Estimates for total global electricity spent on AI are in the range 0.2-0.4%. That seems modest, but annual growth rates are projected as being in the range of 10-30%. (The estimates are far apart because a lot of the data is not public, and so has to be estimated.) This is a Moore's law level growth. We are likely to see the electricity consumption of AI grow past all other uses within our lifetimes. And that will happen even without the kind of sudden leaps in capability that machine learning regularly delivers.
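
    To make "within our lifetimes" concrete, here is a toy compound-growth sketch in Python. The starting share and the AI growth rate are picked from the ranges quoted above, the 2% growth assumed for everything else is my own placeholder, and the model ignores every feedback that would bend the curve in reality:

      # Toy model: AI's share of global electricity under assumed growth rates.
      ai_share = 0.003      # assumed current share (middle of the 0.2-0.4% range)
      ai_growth = 0.20      # assumed annual growth of AI consumption (10-30% range)
      other_growth = 0.02   # assumed annual growth of all other consumption

      years = 0
      while ai_share < 0.5 and years < 200:
          ai = ai_share * (1 + ai_growth)
          other = (1 - ai_share) * (1 + other_growth)
          ai_share = ai / (ai + other)  # renormalize to a share of the new total
          years += 1

      print(f"AI passes all other uses after ~{years} years (under these assumptions)")

    Under these crude assumptions the crossover lands a few decades out. The exact year is meaningless; the point is how quickly a steep exponential overtakes a shallow one.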

    I hope we humans like those paperclips. Humans, armed with AI, are going to make a lot of them. And they're not actually free.

  • AlfredBarnes 7 hours ago

    Wasn't there a story about healthcare companies letting AI determine coverage? I can't remember.

    • billyjmc 6 hours ago

      Computers have been making decisions for a while now. As a specific personal example from 2008, I found out that my lender would make home loan offers based on an inscrutable (to me and the banker I was speaking to) heuristic. If the loan was denied by the heuristic, then a human could review the decision, but had strict criteria that they would have to follow. Basically, a computer could “exercise judgement” and make offers that a human could not.

  • bee_rider 6 hours ago

    I think it is bad writing on the part of the author. Or maybe good writing for getting us engaged with the blog post, bad for making an argument though.

    They include a line that they don’t believe in the possibility of AGI:

    > I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.

    This is a basically absurd position to hold, I mean humans physically exist so our brains must be possible to build within the existing laws of physics. It is obviously far beyond our capabilities to replicate a human brain (except via the traditional approach), but unless brains hold irreproducible magic spirits (well we must at least admit the possibility) they should be possible to build artificially. Fortunately they immediately throw that all away anyway.

    Next, they get to the:

    > and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.

    Which is, of course, at least a plausible thing to believe. I mean there are a bunch of philosophical questions about what “intelligence” even means so there’s plenty of room to quibble here. Then we have,

    > But I also think there’s something we should actually be afraid of long before AGI, if it ever comes. […]

    > Now, if you equip humans with a hammer, or sword, or rifle, or AI then you’ve just made the scariest monster in the woods (that’s you) even more terrifying. […]

    > We don’t need to worry about AI itself, we need to be concerned about what “humans + AI” will do.

    Which is like, yeah, this is a massively worrying problem that doesn’t involve any sci-fi bullshit, and I think it’s a concern shared by most(?) anybody who’s thought about this seriously at all (or even stupid people who haven’t, like myself). Artificial Sub-intelligences, things that are just smart enough to make trouble and too dumb or too “aligned” to their owner (instead of society in general) to push back, are a big currently happening problem.

    • andy99 6 hours ago

      > humans physically exist so our brains must be possible to build within the existing laws of physics

      This is an unscientific position to take. We have no idea how our brains work, or how life, consciousness, and intelligence work. It could very well be that’s because our model of the world doesn’t account for these things and they are not in fact possible based on what we know. In fact I think this is likely.

      So it really could be that AI is not possible, for example on a Turing machine or our approximation of them. This is at least as likely as it being possible. At some point we’ll hopefully refine our theories to have a better understanding, for now we have no idea and I think it’s useful to acknowledge this.

      • bee_rider 6 hours ago

        I think my main mistake, which I agree is a legitimate mistake, was to write “the existing laws of physics.” It is definitely possible that our current understanding of the laws of physics is insufficient to build a brain.

        Of course the actual underlying laws of the universe that we’re trying (unsuccessfully so far, it is a never ending process) to describe admit the existence of brains. But that is not what I said. Sorry for the error.

      • zeroonetwothree 6 hours ago

        Turing machines have been universal as far as we have found. So while I acknowledge it’s possible, I would definitely not say it’s more likely that brains cannot be simulated by TMs. I would personally weight this as under 10%.

        Of course it doesn’t speak to how challenging it will be to actually do that. And I don’t believe that LLMs are sufficient to reach AGI.

      • gpderetta 5 hours ago

        >> humans physically exist so our brains must be possible to build within the existing laws of physics

        > This is an unscientific position to take

        The universe being constrained by observable and understandable natural laws is pretty much a fundamental axiom of the scientific method.

    • kayodelycaon 5 hours ago

      I don’t think we’ll be able to replicate consciousness until we’re able to make things alive at a biological level.

      We can certainly make systems smart enough and people complicit enough to destroy society well before we reach that point.

      • forgotoldacc 5 hours ago

        I guess we also need to define what biological life means. Even biologists have debated whether viruses should be considered life.

        And if we determine it must be something with cells that can sustain themselves, we run into a challenge should we encounter extraterrestrials that don't share our evolutionary path.

        When we get self-building machines that can repair themselves, move, analyze situations, and respond accordingly, I don't think it's unfair to consider them life. But simply being life doesn't mean it's inherently good. Humans see syphilis bacteria and ticks as living things, but we don't respect them. We acknowledge that polar bears have a consciousness, but they're at odds with our existence if we're put in the same room. If we have autonomous machines that can destroy humans, I think those could be considered life. But it's life that opposes our own.

  • deadbabe 6 hours ago

    Language is only an end product. It is derived from intelligence.

    The intelligence is everything that created the language and the training corpus in the first place.

    When AI is able to create entire thoughts and ideas without any concept of language, then we will truly be closer to artificial intelligence. When we get to this point, we then use language as a way to let the AI communicate its thoughts naturally.

    Such an AI would not be accused of “stealing” copyrighted work because it would pull its training data from direct observations about reality itself.

    As you can imagine, we are nowhere near accomplishing the above. Everything an LLM is fed today is stuff that has been pre-processed by human minds for it to parrot off of. The fact that LLMs today are so good is a testament to human intelligence.

    • myrmidon 6 hours ago

      I'm not saying that language necessarily is the biggest stumbling block on (our) road towards AI, but it is a very prominent feature that we have used to distinguish our capabilities from other animals long before AI was even conceived of. So the current successes with LLMs are highly encouraging.

      I'm not buying the "current AI is just a dumb parrot relying on human training" argument, because the same thing applies to humans themselves-- if you raise a child without any cultural input/training data, all you get is a dumb caveman with very limited reasoning capabilities.

      • nyeah 6 hours ago

        "I'm not buying the "current AI is just a dumb parrot relying on human training" argument [...]"

        One difficulty. We know that argument is literally true.

        "[...] because the same thing applies to humans themselves"

        It doesn't. People can interact with the actual world. The equivalent of being passively trained on a body of text may be part of what goes into us. But it's not the only ingredient.

    • ACCount37 6 hours ago

      Clearly, language reflects enough of "intelligence" for an LLM to be able to learn a lot of what "intelligence" does just by staring at a lot of language data really really hard.

      Language doesn't capture all of human intelligence - and some of the notable deficiencies of LLMs originate from that. But to say that LLMs are entirely language-bound is shortsighted at best.

      Most modern high end LLMs are hybrids that operate on non-language modalities, and there's plenty of R&D on using LLMs to consume, produce and operate on non-language data - i.e. Gemini Robotics.

  • moralestapia 4 hours ago

    >why would it be impossible to achieve the exact same result in silicon

    Because there might be a non-material component involved.

    • ACCount37 2 hours ago

      Magic Fairy Dust? I don't buy anything that relies on Magic Fairy Dust.

  • AlexandrB 5 hours ago

    LLMs largely live in the world of pure language and related tokens - something humans invented late in their evolution. Human intelligence comes - at least partially - from more fundamental physical experience. Look at examples of intelligent animals that lack language.

    Basically there's something missing with AI. Its conception of the physical world is limited by our ability to describe it - either linguistically or mathematically. I'm not sure what this means for AGI, but I suspect that LLM intelligence is fundamentally not the same as human or animal intelligence at the moment as a result.

  • ux266478 4 hours ago

    It's confirmation bias in favor of faulty a prioris, usually as the product of the person being a cognitive miser. This is very common to experience even within biology, where non-animal intelligence is irrationally rejected over what I like to call "magic neuron theory". The fact that the nervous system is (empirically!) not the seat of the mind? Selectively ignored in this context. The fact that other biologies have ion-gated communications networks as animals do, including the full set of behaviors and mechanisms? Well it's not a neuron so it doesn't have the magic.

    "Intelligence describes a set of properties iff those properties arise as a result of nervous system magic"

    It's a futile battle because, like I say, it's not rational. Nor is it empirical. It's a desperate clawing to preserve a ridiculous superstition. Try as you might, all you'll end up doing is playing word games until you realize you're being stonewalled by an unthinking adherence to the proposition above. I think the intelligent behaviors of LLMs are pretty obvious if we're being good faith. The problem is you're talking to people who can watch slime mold plasmodia exhibit learning and sharing of knowledge[1] and they'll give some flagrant ad lib handwave for why that's not intelligent behavior. Some people simply struggle with pattern blindness towards intelligence; a mind that isn't just another variety of animalia is inconceivable to them.

    [1] - https://asknature.org/strategy/brainless-slime-molds-both-le...

  • theredleft 6 hours ago

    this is because you didn't take liberal arts seriously

    • nyeah 5 hours ago

      That's not implausible at all. For all I know it might be the most on-target comment here.

      But can you cite something specific? (I'm not asking for a psychological study. Maybe you can prove your point using Blake's "Jerusalem" or something. I really don't know.)

  • fullstackchris 5 hours ago

    Of course it's possible. It's just DEFINITELY not possible using a large neural net, or basically a Markov chain on steroids. C'mon, this should be very obvious by now in the world of agents / LLMs.

    When is silicon valley gonna learn that token input and output =/= AGI?

  • IAmGraydon 5 hours ago

    >I struggle to understand how people can just dismiss the possibility of artificial intelligence.Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?

    I haven't seen many people saying it's impossible. Just that the current technology (LLMs) is not the way, and is really not even close. I'm sure humanity will make the idiotic mistake of creating something more intelligent than itself eventually, but I don't believe that's something that the current crop of AI technology is going to evolve into any time soon.

  • lo_zamoyski 6 hours ago

    I think you make a lot of assumptions that you should perhaps reexamine.

    > Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?

    Here are some of your assumptions:

    1. Human intelligence is entirely explicable in evolutionary terms. (It is certainly not the case that it has been explained in this manner, even if it could be.) [0]

    2. Human intelligence assumed as an entirely biological phenomenon is realizable in something that is not biological.

    And perhaps this one:

    3. Silicon is somehow intrinsically bound up with computation.

    In the case of (2), you're taking a superficial black box view of intelligence and completely ignoring its causes and essential features. This prevents you from distinguishing between simulation of appearance and substantial reality.

    Now, that LLMs and so on can simulate syntactic operations or whatever is no surprise. Computers are abstract mathematical formal models that define computations exactly as syntactic operations. What computers lack is semantic content. A computer never contains the concept of the number 2 or the concept of the addition operation, even though we can simulate the addition of 2 + 2. This intrinsic absence of a semantic dimension means that computers already lack the most essential feature of intelligence, which is intentionality. There is no alchemical magic that will turn syntax into semantics.

    In the case of (3), I emphasize that computation is not a physical phenomenon, but something described by a number of formally equivalent models (Turing machine, lambda calculus, and so on) that aim to formalize the notion of effective method. The use of silicon-based electronics is irrelevant to the model. We can physically simulate the model using all sorts of things, like wooden gears or jars of water or whatever.

    > I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc). [...] As soon as profit can be made by transfering decision power into an AIs hand, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.

    How on earth did you conclude there is any agency here, or that it's just a "matter of time"? This is textbook magical thinking. You are projecting a good deal here that is completely unwarranted.

    Computation is not some kind of mystery, and we know at least enough about human intelligence to note features that are not included in the concept of computation.

    [0] (Assumption (1), of course, has the problem that if intelligence is entirely explicable in terms of evolutionary processes, then we have no reason to believe that the intelligence produced aims at truth. Survival affordances don't imply fidelity to reality. This leads us to the classic retorsion arguments that threaten the very viability of the science you are trying to draw on.)

    • soiltype 5 hours ago

      I understand all the words you've used but I truly do not understand how they're supposed to be an argument against the GP post.

      Before this unfolds into a much larger essay, should we not acknowledge one simple fact: that our best models of the universe indicate that our intelligence evolved in meat and that meat is just a type of matter. This is an assumption I'll stand on, and if you do disagree, we need to back up.

      Far too often, online debates such as this take the position that the most likely answer to a question should be discarded because it isn't fully proven. This is backwards. The most likely answer should be assumed to be probably true, a la Occam. Acknowledging other options is also correct, but assuming the most likely answer is wrong, without evidence, is simply contrarian for its own sake, not wisdom or science.

      • lo_zamoyski 3 hours ago

        I don't know what else I can write without repeating myself.

        I already wrote that even under the assumption that intelligence is a purely biological phenomenon, it does not follow that computation can produce intelligence.

        This isn't a matter of probabilities. We know what computation is, because we defined it as such and such. We know at least some essential features of intelligence (chiefly, intentionality). It is not rocket science to see that computation, thus defined, does not include the concepts of semantics and intentionality. By definition, it excludes them. Attempts to locate the latter in the former reminds me of Feynman's anecdote about the obtuse painter who claimed he could produce yellow from red and white paint alone (later adding a bit of yellow paint to "sharpen it up a bit").

        • ACCount37 an hour ago

          What.

          Are you saying that "intentionality", whatever you mean by it, can't be implemented by a computational process? Never-ever? Never-ever-ever?

    • myrmidon 4 hours ago

      I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).

      With "agency" I just mean the ability to affect the physical world (not some abstract internal property).

      Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.

      > you're taking a superficial black box view of intelligence

      Yes. Human cognition is to me simply an emergent property of our physical brains, and nothing more.

      • lo_zamoyski 3 hours ago

        This is all very hand wavy. You don't address in the least what I've written. My criticisms stand.

        Otherwise...

        > I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).

        What do you mean by "materialism"? Materialism has a precise meaning in metaphysics (briefly, it is the res extensa part of Cartesian dualism with the res cogitans lopped off). This brand of materialism is notorious for being a nonstarter. The problem of qualia is a big one here. Indeed, all of what Cartesian dualism attributes to res cogitans must now be accounted for by res extensa, which is impossible by definition. Materialism, as a metaphysical theory, is stillborn. It can't even explain color (or as a Cartesian dualist would say, the experience of color).

        Others use "materialism" to mean "that which physics studies". But this is circular. What is matter? Where does it begin and end? And if there is matter, what is not matter? Are you simply defining everything to be matter? So if you don't know what matter is, it's a bit odd to put a stake in "matter", as it could very well be made to mean anything, including something that includes the very phenomenon you seek to explain. This is a semantic game, not science.

        Assuming something is not interesting. What's interesting is explaining how those assumptions can account for some phenomenon, and we have very good reasons for thinking otherwise.

        > With "agency" I just mean the ability to affect the physical world (not some abstract internal property).

        Then you've rendered it meaningless. According to that definition, nearly anything physical can be said to have agency. This is silly equivocation.

        > Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.

        This is total gibberish. We're not talking about how we might represent or model aspects of a concept in some vector space for some specific purpose or other. That isn't semantic content. You can't sweep the thing you have to explain under the rug and then claim to have accounted for it by presenting a counterfeit.

        • myrmidon an hour ago

          By "materialism" I mean that human cognition is simply an emergent property of purely physical processes in (mostly) our brains.

          All the individual assumptions basically come down to that same point in my view.

          1) Human intelligence is entirely explicable in evolutionary terms

          What would even be the alternative here? Evolution plots out a clear progression from something multi-cellular (obviously non-intelligent) to us.

          So either you need some magical mechanism that inserted "intelligence" at some point in our species' recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").

          2) Intelligence strictly biological

          Again, this is simply not an option if you stick to materialism, in my view; you would need to assume some kind of bio-exclusive magic for this to work.

          3) Silicon is somehow intrinsically bound up with computation

          I don't understand what you mean by this.

          > It can't even explain color

          Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?

          I simply see no indicator against this flavor of materialism, and everything we've learned about our brains so far points in favor.

          Thinking, for us, results in and requires brain activity, and physically messing with our brains operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, speech to consciousness itself.

          If there was a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or some plausible mechanism at the very least) before entertaining that notion, and I see none.

  • Juliate 6 hours ago

    > Human cognition was basically bruteforced by evolution

    You center cognition/intelligence on humans as if it was the pinnacle of it, rather than include the whole lot of other species (that may have totally different, or adjacent, cognition models). Why? How so?

    > As soon as profit can be made by transfering decision power into an AIs hand

    There's an ironic, deadly, Frankensteinesque delusion in this very premise.

    • soiltype 5 hours ago

      > You center cognition/intelligence on humans as if it was the pinnacle of it, rather than include the whole lot of other species (that may have totally different, or adjacent, cognition models).

      Why does that matter to their argument? Truly, the variety of intelligences on earth now only increases the likelihood of AGI being possible, as we have many pathways that don't follow the human model.

    • myrmidon 4 hours ago

      > You center cognition/intelligence on humans as if it was the pinnacle of it

      That's not my viewpoint, from elsewhere in the thread:

      Cognition is (to me) not the most impressive and out-of-reach evolutionary achievement: That would be how our (and animals') bodies are self-assembling, self-repairing and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.

      I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.

  • adamtaylor_13 6 hours ago

    > Human cognition was basically bruteforced by evolution

    Well that's one reason you struggle to understand how it can be dismissed. I believe we were made by a creator. The idea that somehow nature "bruteforced" intelligence is completely nonsensical to me.

    So, for me, logically, humans being able to bruteforce true intelligence is equally nonsensical.

    But what the author is stating, and what I completely agree with, is that true intelligence wielding a pseudo-intelligence is just as dangerous (if not more so).

    • bee_rider 6 hours ago

      Even if there is a creator, it seems to have intentionally created a universe in which the evolution of humans is basically possible and it went to great lengths to hide the fact that it made us as a special unique thing.

      Let’s assume there’s a creator: It is clearly willing to let bad things happen to people, and it set things up to make it impossible to prove that a human-level intelligence should be impossible, so who’s to say it won’t allow a superintelligence to be made by us?

    • yjftsjthsd-h 5 hours ago

      I don't think that follows. God made lots of things that we can create facsimiles of, or even generate the real thing ourselves.

  • alansaber 7 hours ago

    Perhaps the AGI people think we can catch up on millions of years of evolution in a handful of years.

    • myrmidon 7 hours ago

      If you make the same argument for flight it looks really weak.

      Cognition is (to me) not even the most impressive and out-of-reach achievement: That would be how our (and animals') bodies are self-assembling, self-repairing and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.

      I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.

      • bangaroo 5 hours ago

        > if you make the same argument for flight it looks really weak.

        flight is an extremely straightforward concept based in relatively simple physics where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1700s.

        i really don't think it's fair to compare the two

        • ACCount37 an hour ago

          I'm sure that intelligence is an extremely straightforward concept based in relatively simple math where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1900s.

          If you read about it in a textbook from the year 2832, that is.

      • yjftsjthsd-h 5 hours ago

        As sibling comment points out, flight is physically pretty simple. Also, it took us centuries to figure it out. I'd say comparing to flight makes it look pretty strong.

      • jncfhnb 6 hours ago

        Flight leverages well-established and accessible world-engine physics APIs. Intelligence has to be programmed from lower-level mechanics.

        Edit: put another way, I bet the ancient Greeks (or whoever) could have figured out flight if they had access to gasoline and gasoline-powered engines, without any of the advanced mathematics that were used to guide the design.

    • fruitworks 6 hours ago

      Evolution isn't a directed effort in the same way that statistical learning is. The goal of evolution is not to produce the most intelligent life. It is not necessarily an efficient process either.

    • snovymgodym 6 hours ago

      The same "millions of years of evolution" resulted in both intelligent humans and brainless jellyfish.

      Evolution isn't an intentional force that's gradually pushing organisms towards higher and higher intelligence. Evolution maximizes reproducing before dying - that's it.

      Sure, it usually results in organisms adapting to their environment over time and often has emergent second-order effects, but at its core it's a dirt-simple process.

      Evolution isn't driven to create intelligence any more than erosion is driven to create specific rock formations.

      • myrmidon 3 hours ago

        My point is that "evolution" most certainly did not have a better understanding of intelligence than we do, and apparently did not need it, either.

gota 7 hours ago

> I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.

I'm on board with being skeptical that LLMs will lead to AGI, but claiming there is no possibility at all seems like such a strong claim. Should we really bet that there is something special (or even 'magic') about our particular brain/neural architecture/nervous system + senses + gut biota + etc.?

Don't, like, crows (or octopuses, or elephants, or ...) have a different architecture and display remarkable intelligence? Ok, maybe not different enough (not 'digital') and not AGI (not 'human-level'), but already -somewhat- different, which should hint at the fact that there -can- be alternatives.

Unless we define 'human-level' to be 'human-similar'. Then I agree - "our way" may be the only way to make something that is "us".

  • kulahan 7 hours ago

    We still haven’t figured out what intelligence even is. Depending on what you care about, the second-smartest animal in the world varies wildly.

    • myrmidon 3 hours ago

      Evolution had a much worse understanding of intelligence than we do and still managed just fine (thus I'd expect us to need less time and fewer iterations to get there).

    • squidbeak 6 hours ago

      This is a bogus argument. There's a lot we don't understand about LLMs, yet we built them.

      • simianparrot 6 hours ago

        We built something we don’t understand by trial and error. Evolution took a few billion years to get to intelligence, so I guess we’re a few sprints away at least.

        • SonOfLilit 5 hours ago

          Evolution took millions of years to reach the minimal thing that can read, write and solve leetcode problems, so by similar logic we're at least a few million years in...

      • 0xffff2 5 hours ago

        There's a lot _I_ don't understand about LLMs, but I strongly question whether there is a lot that the best experts in the field don't understand about LLMs.

        • SonOfLilit 5 hours ago

          Oh, boy, are you in for a surprise.

          Our theories of how LLMs learn and work look a lot more like biology than math. Including how vague and noncommittal they are because biology is _hard_.

          • hitarpetar 5 hours ago

            this is an argument against LLM cognition. we don't know anything about the human brain, and we don't know anything about LLMs, so they must be similar? I don't follow

  • pixl97 6 hours ago

    >only way to make something that is "us".

    Which many people seem to neglect: instead of making us, we make an alien.

    Hell, making us is a good outcome. We at least somewhat understand us. Setting off a bunch of self-learning, self-organizing code to make an alien, you'll have no clue what comes out the other side.

olooney 4 hours ago

I think this is interesting and partially true: humans are scary. But it's important to remember the opposite is true as well: humans are the most cooperative species out there by a wide margin.

Eusocial insects and pack animals are in a distant 2nd and 3rd place: they generally don't cooperate much past their immediate kin group. Only humans create vast networks of trade and information sharing. Only humans establish complex systems to pool risk, or undertake public works for the common good.

In fact, a big part of the reason we are so scary is that ability to coordinate action. Ask any mammoth. Ask the independent city states conquered by Alexander the Great. Ask Napoleon as he faced the coalition force at Waterloo.

We are victims of our own success: the problems of the modern world are those of coordination mechanisms so effective and powerful that they become very attractive targets for bad actors and so are under siege, at constant risk of being captured and subverted. In a word, the problem of robust governance.

Despite the challenges, it is a solvable problem: every day, through due diligence, attestations, contract law, earnest money, and other such mechanisms people who do not trust each other in the least and have every incentive to screw over the other party are able to successfully negotiate win-win deals for life altering sums of money, whether that's buying a house or selling a business. Every century sees humans design larger, more effective, more robust mechanisms of cooperation.

It's slow: it's like debugging when someone is red teaming you, trying to find every weak point to exploit. But the long term trend is the emergence of increasingly robust systems. And it suggests a strategy for AI and AGI: find a way to cooperate with it. Take everything we've learned about coordinating with other people and apply the same techniques. That's what humans are good at.

This, I think, is a more useful framing than thinking of humans as "scary."

  • Spivak 4 hours ago

    I'm not sure if it's your intention but this reads as a strong critique of the technolibertarian worldview that dominates our industry. We lose something by replacing high-trust cooperative systems with ones that are mutually antagonistic. We fall into the bad square of the prisoner's dilemma by not only not exorcising the defectors but holding them up as the highest moral good and the example to follow.

us-merul 7 hours ago

"Humans will do what they’ve always tried to do—gain power, enslave, kill, control, exploit, cheat, or just be lazy and avoid the hard work—but now with new abilities that we couldn’t have dreamed of." -- A pretty bleak, and also accurate, observation of humanity. I have to hope that the alternative sentence encompassing all of the good can lead to some balance.

  • Flamingoat 6 hours ago

    No, it is not accurate at all. There are some people that do all of these, sure, but the vast majority of people live pretty ordinary lives where they do very little of what is described.

    I actually think it is very intellectually lazy to be this cynical.

    • ViktorRay 6 hours ago

      I think it is partially true. The vast majority of human beings don’t act like that. But it seems the ones in power or in close proximity to power do.

      This is why it is important to have societies where various forms of power are managed carefully. Limited constitutional government with guaranteed freedoms and checks and balances for example. Regulations placed on mega corporations is another example. Restrictions to prevent the powerful in government or business (or both!) from messing around with the rest of us…

      • adornKey 4 hours ago

        I don't think there's much secret evil out there. I've taken a look at some people that were officially in power for something - they were nice handsome people - sometimes a bit stupid. I think the biggest problem is stupidity. Stupid people in power will do powerful stupid things - while they think that they're doing something great. The more intelligent ones can change their minds quickly if you feed them with good arguments - but that's not easy, because they often live in a bubble and are out of reach.

    • rurp 4 hours ago

      It's absolutely true, but most of the destruction is abstracted away so modern humans don't have to experience it directly or even really think about it. A staggering number of creatures are killed every year by normal people driving to do normal things. Many more are killed by all of the resource extraction needed to supply our normal lives. Not to mention the vast numbers of wildlife that see their habitat destroyed every year to make way for more housing and other development.

      Killing animals for fun is an entire sport enjoyed by millions. Humans keep pets that kill billions of birds every year. The limited areas we've set aside to mostly let other nature be nature are constantly under threat and being shrunk down. The list of ways we completely subjugate other intelligent life on this planet is endless. We have driven many hundreds of species to extinction and kill countless billions every year.

      I certainly enjoy the gains our species has made, just like everyone else on HN. I'd rather be in our position than that of any other species on our planet. But given our history I'm also pretty terrified of what happens if and when we run into a smarter and more powerful alien species or AI. If history is any guide, it won't go well for us.

      This understanding can guide practical decisions. We shouldn't be barreling towards a potential superintelligence with no safeguards given how catastrophic that outcome could be, just like we shouldn't be shooting messages into space trying to wave our arms at whatever might be out there, any more than a small creature in the forest would want to attract the attention of a strange human.

      • Flamingoat 18 minutes ago

        Sorry, but we weren't talking about animal welfare arguments. It was clearly in the scope of how people treat each other. Philosophical discussions similar to Jainist/vegan-style arguments are well outside the scope of what is being discussed.

        As for hunting. I don't see anything wrong with hunting. I don't see anything wrong with eating meat.

        As someone that has lived the vast majority of their life in the countryside, I also have little time for animal welfare arguments of the sort you are making.

        > But given our history I'm also pretty terrified of what happens if and when we run into a smarter and more powerful alien species or AI. If history is any guide, it won't go well for us.

        This is all sci-fi nonsense. If we had any sort of alien contact, there wouldn't be many of them, or it would most likely be a probe, like we send out probes to other planets. As for the superintelligence, the AI has an off switch.

    • cdirkx 6 hours ago

      The problem is that technology exponentially increases the negative effects of bad actors. The worst a sociopath could do in the stone age was ruin his local community; today there are many more dystopian alternatives.

      • Flamingoat 6 hours ago

        I don't think that is true either. There have been despots throughout all of human history that have killed huge amounts of people with technology that is considered primitive now.

        Whereas much of the technology we have today has a massive positive benefit. Simply having access to information today is amazing; I have learned how to fix my own vehicles and bicycles and do house repairs just from YouTube.

        As I said being cynical is being intellectually lazy because it allows you to focus on the negatives and dismiss the positives.

  • adornKey 5 hours ago

    I don't think it's that accurate. Evil people are rare - and lazy people usually don't cause problems. Most of the real damage comes from human stupidity - from the mass of people that just want to help and do something good. Stupid people blindly believe anything they're told. And they do a lot of really bad things not because they're evil and lazy, but because they want to help achieve even the most stupid goal. Usually even nasty propagandist leaders aren't that evil - often they're just an intellectual failure - or have some mental issues. They themselves don't do much practical evil - the mob of nice stupid people does the dirty work, because they just want to help.

  • wmeredith 7 hours ago

    I really liked this article, but it is pessimistic. Unfortunately that seems to be the culture du jour. Anger and fear drive engagement effectively, as they always have. "If it bleeds, it leads" has been a thing in news organizations since at least the 70s.

    If we ignore the headlines peddled by those who stand to benefit the most from inflaming and inciting, we live in a miraculous modern age largely devoid of much of the suffering previous generations were forced to endure. Make no mistake, there are problems, but they are growing exponentially fewer by the day.

    An alternate take: humans will do what they’ve always tried to do—build, empower, engineer, cure, optimize, create, or just collaborate with other humans for the benefit of their immediate community—but now with new abilities that we couldn’t have dreamed of.

    • pixl97 6 hours ago

      I mean, any article that doesn't include both is incomplete.

      >"If it bleeds, it leads" has been a thing in news organizations since at least the 70s.

      The term yellow journalism is far older.

  • mannanj 5 hours ago

    This is accurate because of the few who do it. However, the cautionary and hopeful tale behind it is that the majority, when they stand up against it, can change the distribution of power. Today, however, we're comfortable and soft and too scared to act - so posts like this remind us to gain some courage and stand up for change.

jvanderbot 7 hours ago

About as helpful as "Guns don't kill people ... "

And equally rebutted by Eddie Izzard's "Well, I think the gun helps".

  • seniortaco 6 hours ago

    My thought as well. Nuclear weapons are also horrifying.

    And with LLMs, it's difficult to prevent the proliferation to bad actors.

    It seems like we're racing towards a world of fakery where nothing can be believed, even when wielded by good actors. I really hope LLMs can actually add value at a significant level.

    • rootusrootus 6 hours ago

      > It seems like we're racing towards a world of fakery where nothing can be believed

      Spend a couple minutes on social media and it is clear we are already there. The fakes are getting better, and even real videos are routinely called out as fake.

      The best that I can hope for is that we all gain a healthy dose of skepticism and appreciate that everything we see could be fake. I don't love the idea of having to distrust everything I see, but at this point it seems like the least bad option.

      But I worry that what we will experience will actually be somewhat worse. A sufficiently large number of people, even knowing about AI fakery, will still uncritically believe what they read and see.

      Maybe I am being too cynical this morning. But it is hard to look at the state of our society today and not feel a little bleak.

  • fruitworks 7 hours ago

    All of the solutions to AI "safety" are analogous to gun control: they are a centralization of power.

    • pixl97 6 hours ago

      That is assuming that ASI doesn't centralize power itself. I mean, if you are a non-human part of the animal kingdom you'd probably say that humans have centralized power around themselves.

      • fruitworks 6 hours ago

        I don't assert that there is a political solution to AI. It's possible that both avenues result in a total centralization of power.

    • Cthulhu_ 7 hours ago

      Are you claiming a lack of gun / AI control is democratizing? That's not working for (the lack of) gun control in the US at the moment though.

      Compare also with capitalism; unchecked capitalism on paper causes healthy competition, but in practice it means concentration of power (monopolies) at the expense of individuals (e.g. our accumulated expressions on the internet being used for training materials).

      • fruitworks 6 hours ago

        >Are you claiming a lack of gun / AI control is democratizing?

        This is obviously the case. It results in a greater distribution of power.

        >That's not working for (the lack of) gun control in the US at the moment though.

        In the US, one political party is pro gun-control and the other is against. The party with the guns gets to break into the Capitol, and the party without the guns gets to watch. I expect the local problem of AI safety, like gun safety, will also be self-solving in this manner.

        Eventually, gun control will not work anywhere, regardless of regulation. The last time I checked, you don't need a drone license. And what are the new weapons of war? Not guns. The technology will increase in accessibility until the regulation is impossible to enforce.

        The idea that you can control the use of technology by limiting it to some ordained group is very brittle. It is better to rely on a balance of powers. The only way to secure civilization in the long run is to make the defensive technology stronger than the offensive technology.

        • dctoedt 6 hours ago

          >> Are you claiming a lack of gun / AI control is democratizing?

          > This is obviously the case. It results in a greater distribution of power.

          That's the theory. In practice, it doesn't work.

          Most people don't spend a lot of time looking for ways to acquire and/or retain wealth and power. But absent regulation, we'll gradually lose out to those driven folks who do. Perhaps they do so because they want to serve humanity and they imagine that their gifts make them the logical choice to run things. Or perhaps they just want to dominate things.

          And the rest of us have every right to insist on guardrails, so those driven folks can't take us over the cliff. Certainly those folks can make huge contributions to society. But they can also fuck up spectacularly — because talent in one field isn't necessarily transferable to another. (Recall that Michael Jordan was one of the greatest basketball players of all time. But he wasn't even close to being the GOAT ... as a baseball player.)

          Sure, maybe through some combination of genetics, rearing, and/or just plain hard work, you've managed to acquire "a very particular set of skills" (to coin a phrase ...) for making money, or for persuading people to do what you want. That doesn't mean you necessarily know WTF you're talking about when it comes to the myriad details of running the most-complex "organism" ever seen on the planet, namely human society.

          And in any case, the rest of us are entitled to refuse to roll the dice on either the wisdom or the benevolence of the driven folks.

      • rootusrootus 6 hours ago

        > Compare also with capitalism; unchecked capitalism on paper causes healthy competition

        Is that not conflating capitalism with free markets? I have way more confidence in the latter than the former.

woeirua 6 hours ago

There's this weird disconnect in tech circles, where everyone is deathly afraid of AGI, but totally asleep on the very real possibility of thermonuclear war breaking out in Europe or Asia over the next 10 years. There's already credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine which likely would've spiraled out of control. AGI might happen, but the threat of nuclear war keeps me up at night.

  • j2kun 5 hours ago

    > everyone is deathly afraid of AGI

    I think this is a vast overstatement. A small group of influential people are deathly afraid of AGI, or at least using that as a pretext to raise funding.

    But I agree that there are so many more things we should be deathly afraid of. Climate change tops my personal list as the biggest existential threat to humanity.

    • johnnyanmac an hour ago

      I sure wish my conspiracy theories could lead to me running billion dollar projects to defend against my shadow demons. Instead I just get ratio'd by the internet and get awkward silences at family gatherings.

      I think the sad part is that most people in power aren't planning to be around in 10 years, so they don't care about any long-term issues that are cropping up. Leave it to their grandchildren to burn with the world.

  • nonethewiser 5 hours ago

    >There's already credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine which likely would've spiraled out of control.

    I do agree nukes are a far more realistic threat. So this is kind of an aside and doesn't really undermine your point.

    But I actually think we widely misunderstand the dynamic of using nuclear weapons. Nukes haven't been used for a long time and everyone kind of assumes using them will inevitably lead to escalation which spirals into total destruction.

    But how would Russia using a tactical nuke in Ukraine spiral out of control? It actually seems very likely that it would not be met in kind. Which is absolutely terrifying in its own right. A sort of normalization of nuclear weapons.

    • aradox66 4 hours ago

      It's not an assumption, it's an extremely developed international field of tactical and strategic study that leads to these conclusions

      • nonethewiser 3 hours ago

        Extremely developed thought experiment, maybe. Only 2 nuclear weapons have ever been dropped. Which is why I say it's a massive assumption.

        You tell me. How does this escalate into a total destruction scenario? Russia uses a small nuke on a military target in middle-of-nowhere Ukraine. ___________________________. Everyone is firing nukes at each other.

        Fill in the blank.

        We are not talking about the scenario where Russia fires a nuke at Washington, Colorado, CA, Montana, forward deployments, etc. and the US responds in kind while the nukes are en route.

        • johnnyanmac an hour ago

          >How does this escalate into a total destruction scenario?

          My favorite historical documentary: https://www.youtube.com/watch?v=Pk-kbjw0Y8U (my new favorite part is America realizing "fuck, we're dumbasses" far too late into the warfare they started).

          That is to say: you're assuming a lot of good faith in a time of unrest, with several leaders looking for any excuse to enact martial law. For all we know, the blank is "Trump overreacts and authorizes a nuclear strike on Los Angeles" (note the word "authorizes": despite the media, the president cannot unilaterally fire a nuclear warhead). That bizarre threat alone might escalate completely unrelated events and boom. Chaos.

          • nonethewiser an hour ago

            I think this perfectly demonstrates my point that the path from isolated tactical nuke to wide scale nuclear war is quite unclear and by no means necessary. Thank you.

            • johnnyanmac an hour ago

              I wish it was a clear path. That's the scariest part. Remember that one assassination escalated into The Great War.

              It'll be a similarly flimsy straw breaking that will mark the start of nuclear conflict after years of rising tensions. And by then Pandora's box will be opened.

  • myrmidon 6 hours ago

    I personally think that AI is a realistic extinction threat for our whole species within this century, and a full nuclear war never was (and probably never will be). Neither is climate change.

    Collapse of our current civilizations? Sure. Extinction? No.

    And I honestly see stronger incentives on a road towards us being outcompeted by AI than on our leaders starting a nuclear war.

    • johnnyanmac an hour ago

      Total extinction of any dominant species is really hard. Very few post-apocalyptic settings suggest a full extinction; most show some thousands of survivors struggling with the new norm. Humans in particular are very adaptable, so thoroughly killing all 8 billion of us would be difficult no matter the scenario. I think only the Sun can do that, and that's assuming we fail to find an exit strategy 5 billion years in (we're less than a thousandth of a percent into humanity if we measure on that scale).

      As such, I'd say "extinction" is more of a colloquial use of "Massive point in history that kills off billions in short order".

    • lm28469 2 hours ago

      Personally I don't believe in a collapse or an extinction, just a slow spiral into more and more enshittification. You'll have to talk to an "ai" doctor because real doctors will treat people with money, you'll have to face an "ai" administration because the real administration will work for people with money, you'll have to be a flesh-and-blood robot to an "ai" telling you what to do (already the case for Amazon warehouse workers, food delivery people, &c.), some "ai" will determine if you qualify for X or Y benefits, X or Y treatment, X or Y job.

      Basically everything wrong with today's productivism, but 100 times worse and powered by a shitty ai that's very far from agi.

  • shadowpho 3 hours ago

    >There's already credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine which likely would've spiraled out of control

    Do you want to share any of this credible evidence?

  • hearsathought 4 hours ago

    > There's already credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine which likely would've spiraled out of control.

    Spiral out of control in what way? Wouldn't it have ended the war immediately?

  • squidbeak 6 hours ago

    Other doomsday risks aren't any reason to turn our heads away from this one. AI's much more likely to end up taking an apocalyptic form if we sleep on it.

    • teucris 6 hours ago

      But this isn’t a suggestion to turn away from AI threats - it’s a matter of prioritization. There are more imminent threats that we know can turn apocalyptic, which swaths of people in power are completely ignoring while instead fretting over AI.

    • woeirua 4 hours ago

      We should worry more about doomsday risks that are concrete and present today. Despite the prognostications of the uber wealthy, the emergence of AGI is not guaranteed. It likely will happen at some point, but is that tomorrow or 200 years in the future? We can’t know for sure.

  • Stevvo 4 hours ago

    There is no evidence that use of tactical nuclear weapons in Ukraine would spiral out of control. I like to think that the US/UK/France would stay out of it, if only because the leaders value their own lives if not those of others.

  • zeroonetwothree 5 hours ago

    Everyone thinks their own field is the most important and deserving of attention and funding. Big surprise.

  • marssaxman 6 hours ago

    Or, you know, the bit where we've now-irrevocably committed ourselves to destabilizing the global climate system whose relative predictability has been the foundation of our entire civilization. That's going to be a ride.

    • bee_rider 6 hours ago

      We just need to convince people that the market is an artificial superintelligence. Or maybe… subintelligence.

      • marssaxman 4 hours ago

        Years ago a friend of mine observed that we don't need to wonder what it would look like if artificial entities were to gain power and take over our civilization, because it already happened: we call them "corporations".

  • jjtheblunt 5 hours ago

    > credible evidence that we came perilously close to the use of tactical nuclear weapons in Ukraine

    I've not seen that. Can you link to it?

  • proto-n 4 hours ago

    Well, one of these is something that most reasonable people work on avoiding, while the other is something that a huge capitalist industrial machine is working to achieve as if its existence depends on it.

  • nradov 5 hours ago

    Well it's not everyone. I guess I am in "tech circles" and have zero fear of AGI. Everyone who is (or claims to be) "deathly afraid" is either ignorant or unserious or a grifter. Their arguments are essentially a form of secular religion lacking any firm scientific basis. These are not people worth listening to.

raldi 5 hours ago

The author comes so close to getting it, with the paragraph about how if you drop a human into an environment, they inevitably take over as the deadliest and most powerful creature.

But the next step is to ask why; in the case of the Gruffalo it was obvious: fangs, claws, strength, size…

In the case of humans, it’s because we’re the most intelligent creature in the forest. And for the first time in our history, we’re about to not be.

  • lif 5 hours ago

    yes, and:

    ruthlessness + strength + WMD =/= intelligence

ImPleadThe5th 4 hours ago

I'm kind of shocked by this thread. I can't get over the hubris of thinking that concepts introduced to society at large by science fiction in and before the 19th century are _inevitable_ just because we made a really, really good predictive text engine in the 21st century.

Just because a concept exists in a Star Trek episode does not guarantee technology moving in that direction. I understand art has an effect on reality, but how hard are we spinning our gears because some writer made something so compelling it lives in our collective psyche?

You can point to the communicator from Star Trek and I'll point to the reanimation of Frankenstein's monster.

  • myrmidon 3 hours ago

    How does the feasibility of AI have anything to do with science fiction?

    Unless you believe in some kind of magic (soul/divine spark/etc.), it seems completely inevitable to conclude that human cognition can be replicated by a machine, at the extreme end simply by simulating the whole thing.

    I would argue that "language" was a defining characteristic of human intelligence (as opposed to "lesser animals") long before we even conceived of AI, and hitting language processing/understanding benchmarks that far exceed animal capabilities is a very strong indicator by itself.

Insanity 6 hours ago

So “guns don’t kill people, people with guns kill people”.

But for AI I’m not sure that proposition will hold indefinitely. Although I do think we are far away from having actual AGI that would pose this threat.

Still, the author has a good but obvious point.

  • supermatt 6 hours ago

    > guns don’t kill people

    * SIG P320 enters the chat *

Timsky 5 hours ago

> We don’t need to worry about AI itself, we need to be concerned about what “humans + AI” will do. Humans will do what they’ve always tried to do—gain power, enslave, kill, control, exploit, cheat, or just be lazy and avoid the hard work—but now with new abilities that we couldn’t have dreamed of.

Starting with the AI itself: LLMs sold as AI are the greatest misdirection. Text generation using Markov chains is not particularly intelligent, even when it is looped back through itself a thousand times and resembles an intelligent conversation. What is actually being sold is an enormous matrix trained on terabytes of human-written, high-quality texts, obviously in violation of all imaginable copyright laws.
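
(For concreteness, the kind of Markov-chain text generation being contrasted here can be sketched in a few lines of Python; the toy corpus and the build_chain/generate helpers below are invented purely for illustration and are not a description of how any actual LLM is built.)

    # Toy order-1, word-level Markov chain: map each word to the words that follow it,
    # then random-walk that table to produce superficially fluent text.
    import random
    from collections import defaultdict

    def build_chain(text):
        words = text.split()
        chain = defaultdict(list)
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=20):
        word, output = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break  # dead end: no word ever followed this one in the corpus
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = "the mouse walked through the deep dark wood and a fox saw the mouse and the mouse walked on"
    print(generate(build_chain(corpus), "the"))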

Here is a gedanken experiment to test if an AI has any intelligence: until the machine starts to determine and resolve contradictions in its own outputs w/o human help, one can sleep tight. Human language is a fuzzy thing that is not quite suitable for a non-contradictory description of the world. Building such a machine would require resolving all the contradictions humanity has ever faced in a unified way. Before it happens, humanity will be drowned in low-quality generated LLM output.

bilater 5 hours ago

I sort of agree, but I hate the premise of this article because it sneakily focuses only on the potential harm of human + AI collaboration, without acknowledging the good.

That said, I agree that human + AI can cause damage and it’s precisely why, from a game theory perspective, the right move is to go full steam ahead. Regulation only slows down the good actors.

Valhalla is within reach, but we have to leap across a massive chasm to get there. Perhaps the only way out of this is through: accelerating fast enough to mitigate the “mid-curve” disasters, such as population revolt due to mass inequality or cyberattacks caused by vulnerabilities in untested systems.

petsfed 7 hours ago

A nit: There's a subtle distinction between an individual human and the power of human organization and civilization that is implied by the article, but never outright stated.

One-for-one, there are many creatures that are individually more dangerous to humans, and a decent number of people are killed by such animals every year. Indeed, a naked human in the wild is going to be quite fragile and easy to kill until they can bring some technology to bear. But there are no animals or even set of animals that could conceivably wipe out all of humanity at any of our technological peaks from the last 100,000 years. Even the number one killer of humans, the mosquito, is gradually being defeated, going from a vector for disease to just an annoyance, just like the flea.

  • pixl97 6 hours ago

    A lot of rugged individualists seem to neglect talking about the human super organism. That the vast majority of our strength comes from sharing knowledge, making tools, and working together.

    And on that same note it should be mentioned that exchange of information between humans is relatively slow and guarded. A group of entities that could exchange knowledge quickly and efficiently would represent an extreme challenge for us.

    • hn_acc1 6 hours ago

      They also ignore that a lot of the knowledge they now have / have studied comes FROM others sharing freely.

kazinator 5 hours ago

> Just like a hammer, sword, or a rifle lying on a ground is nothing to be feared, so too is AI.

AI is not just going to lie on the ground until someone picks it up to do harm.

An intelligent rifle with legs is something to be feared.

You cannot compare AI to inanimate objects under human control which have no agency. Especially not if you are bringing the imaginary AGI into the conversation.

The idea that AGI is just a hammer is absurd.

whycombinetor 7 hours ago

"We (humans) are the scariest animal in the woods. We’re the scariest animal anywhere. At any time, in any location, under any circumstances, if there’s a human present then that’s the scariest motherfucker in the woods."

Has this guy not heard of the Bengal tigers of the Sundarban forests, which kill ~50 humans per year? https://en.wikipedia.org/wiki/Tiger_attacks_in_the_Sundarban...

  • bob1029 5 hours ago

    A desperately hungry tiger might go for a human, but on average they are not interested in that fight. They would much prefer going for something like deer. The calories per unit of risk are much higher with non-human food sources. Wildlife tends to be very efficient at this economy.

    Anything smaller than a human is absolutely terrified of us. I used to be afraid of things like snakes and spiders, but they want nothing to do with humans. They will get the hell out of your way when you are walking through the woods. You have to do something really stupid to get bitten. Keeping animals as pets is where most of the trouble happens.

    • whycombinetor 5 hours ago

      Okay, if you want an aggressive animal that is interested in the fight, we can pick hippos.

  • graemep 7 hours ago

    > Has this guy not heard of the Bengal tigers of the Sundarban forests, which kill ~50 humans per year?

    How many tigers have been killed by humans? How many would be killed if humans did not restrain each other from killing tigers, because we have killed so many that they are endangered? That population could be entirely wiped out by humans in the area.

    Mosquitoes kill far more people than tigers. So do venomous snakes. The fact is that when a human faces either of these, the animal is far more likely to end up dead than the human.

    • whycombinetor 6 hours ago

      This isn't about what humans have done at other times or in other places. The author wrote "At any time, in any location, under any circumstances", so the statement can be proven false with one single counterexample - e.g. a lone human without a firearm encountering a tiger in the Indian Sundarbans.

      • graemep 6 hours ago

        I interpreted that as hyperbole.

  • WJW 7 hours ago

    I think he heard of all the other forests, where tigers are now extinct because humans are around.

keeda 2 hours ago

Something the article glosses over is that we are the scariest monster in the woods because we are intelligent. We can "survive, adapt, control, kill, or wipe out" anything or anybody else because our intelligence lets us outsmart nature and other people.

And now we've created some form of intelligence that none of us really understands.

Whether it is "real" intelligence or a "stochastic parrot" does not matter if it shows similar capabilities as us. Worse, it's similar but different in ways we cannot explain! I mean, if it outperforms most humans on advanced tasks but then makes elementary mistakes that a 4-year old won't, isn't that weird?

"Weird" can be good or bad, and I usually like "weird"... but now we're rapidly giving it tools to affect the real world and exponentially expanding the scale at which it can operate. We don't know what weird compounded at that scale and capability extrapolates to. Whether it is a SkyNet or a Bond supervillain or an Asimov scenario, the potential risks are considerable and unpredictable.

It's fair to be concerned about another monster in the woods even if we created it ourselves, because we've equipped it with the same powers we possess without understanding how they work.

d_silin 7 hours ago

That's why I am never afraid to walk alone at night, in the dark forest.

abbycurtis33 6 hours ago

What I'm scared by now is non-sentient intelligence, which I never thought would be this capable and still obviously dumb, which really makes it dangerous.

  • slow_typist 6 hours ago

    Is sentient intelligence less scary than the non-sentient variant? Or is non-sentient intelligence already scary enough?

    • abbycurtis33 3 hours ago

      I hope we can reason with sentient intelligence.

JonathanRaines 5 hours ago

I agree with what you're worried about - humans using AI to do bad things. However, have you considered also being worried about what AI can do? (Can you ever have too much existential dread?) You cast it as a tool, but it may be the first thing we've made that goes beyond that.

nyeah 6 hours ago

This is pleasant to read, but it's missing a logical step. We are great at dealing with the stuff we evolved to deal with. Sure.

But anything outside that evolutionary process? Who knows? For example we don't do so good on top of Mt Everest. We seem to have totally unsolved problems with refined opiates. Even computer games can be a trap.

Agentic AI? Who knows?

armada651 7 hours ago

> Just like a hammer, sword, or a rifle lying on a ground is nothing to be feared, so too is AI. It’s just an inanimate object; a tool, potentially.

A bomb is an inanimate object too, but if you find some unexploded ordnance lying on the ground you should fear it.

  • robertlagrant 6 hours ago

    An unexploded bomb definitely has the potential to be highly animate. It's just not currently animate.

    • alt227 5 hours ago

      As does AI.

catapart 5 hours ago

I can't figure out why people who write articles like this seem to be oblivious to the very practical argument about the US second amendment that goes along the lines of "guns don't kill people, people do".

To everyone who wants to pose an argument like this, please consider: We have already internalized that very simple, and very thoughtless argument, and have moved past that facile framing to start asking the more pressing questions like "what are we going to do about how humans are using X to perpetrate Y?"

To everyone who wants to pose an argument like this, please consider: We have already internalized that very simple, and very thoughtless argument, and have moved past that facile framing to start asking the more pressing questions like "what are we going to do about how humans are using X to perpetrate Y?" We don't need the extra step of you making us reframe the question to fit your exact criteria of "good question". "AI is going to kill us" can be understood as: "Humans are going to use AI to kill us". The fact that you want to misunderstand it as a more childish argument is your own failing with semiotics. It's so blatantly silly that in most cases, it's less degrading to assume that you are purposefully trying to misdirect the argument than to assume you actually believe people are worried about the agency of inanimate objects. But then you go and write a whole article to preen "logical" at us about how it's just that "no one's asking the right questions!", which kind of makes it hard to imagine that you don't actually think so little of people.

owenfi 4 hours ago

I like this fairytale as an analogy for AGI, but this article misses the point.

We are the mouse.

> I don’t really believe in the threat of AGI

Just like the mouse doesn't believe in the Gruffalo (until it shows up).

The mouse goes through the woods scaring the hypothetically-more-dangerous creatures with its stories (us, using our intellect, weapons, destroying habitats) until the real Gruffalo shows up.

For a bit, the mouse "uses" the new tool to scare the animals even more (as alluded, human with tool, scarier than without).

Eventually the mouse scares the Gruffalo away (analogous to the brief window when we think we have AGI under control).

The next (unwritten) chapter probably doesn't look so good for the mouse (when the Gruffalo grows to enormous size, eats him and all the other animals in the woods on a sandwich, and sucks up the rest of the resources on the planet).

HellDunkel 2 hours ago

I am increasingly less willing to take this stuff seriously. A catchy headline, that’s about it.

lifeisstillgood 5 hours ago

Yes - a simple and necessary reminder of what goes on the priority list

psychoslave 6 hours ago

All monsters were born equal, but some are more scary than others.

buu700 5 hours ago

I see the point the author is making, but AI doesn't need to be AGI or malicious to cause mass destruction. A poorly implemented deployment of AI in the right circumstances with too much access and insufficient guardrails could theoretically wind up LARPing as Skynet purely by chance.

This is why I've always considered current AI "safety" efforts to be totally wrongheaded. It's not a threat to humanity if someone has an AI generate hate speech, porn, misinformation, or political propaganda. AI is only a threat to humanity if we don't take security seriously as we roll out increasingly more AI-driven automation of the economy. It's already terrifying to me that people are relying on containers to sandbox yolo-mode coding agents, or even raw-dogging them on their personal machines.

kkukshtel 7 hours ago

(DISCO ELYSIUM SPOILERS BELOW)

One of the most poignant moments in Disco Elysium is near the very end, when you encounter a very elusive cryptid/mythic beast.

The moment is treated with a lot of care and consideration, and the conversation itself is, I think, transcendent and some of the best writing in games (or any media) ever.

The line that sticks with me most is when the cryptid says:

"The moral of our encounter is: I am a relatively median lifeform -- while it is you who are total, extreme madness. A volatile simian nervous system, ominously new to the planet. The pale, too, came with you. No one remembers it before you. The cnidarians do not, the radially symmetricals do not. There is an almost unanimous agreement between the birds and the plants that you are going to destroy us all."

It's easy to see this reflected in nature in the real world. All animals and life seem to be aware and accommodating of each other, but humans are cut out from that communication. Everything runs from us, we are not part of the conversation. We are the exclusion, the anomaly.

I think to realize this deeply inside of yourself is a big moment of growth, to see that we exist in a world that was around long before us and will be around long after us.

  • Quarrelsome 7 hours ago

    personally I'm a little concerned about meeting alien life. If the universe is incredibly diverse with many different types of lifeform then how might an arbitrary one of those view us?

    We proliferate incredibly quickly, we have limited care for our environments, but most importantly our primary means of sustenance is to consume other forms of life. To the point that we consider it an art form: we spend vast amounts of energy perfecting, discussing or advertising the means of cooking and eating the flesh of other life forms. To us it's normal, but surely to an alien who gains sustenance by some other means, we're absolutely terrifying. Akin to a devouring swarm.

  • sickofparadox 7 hours ago

    >It's easy to see this reflected in nature in the real world. All animals and life seem to be aware and accommodating of each other, but humans are cut out from that communication.

    This just seems like noble savaging birds and rabbits and deer. None of these creatures have any communication with each other, and while they may be more aware of each other's presence than a 'go hiking every once in a while' person, someone who actually spends a good amount of time in the woods, such as a hunter or birdwatcher, probably has a pretty good sense of them. The Disco Elysium quote just reads like fairly common environmentalist misanthropy, which I suppose isn't surprising considering the creators.

    • wat10000 6 hours ago

      I think people forget how big people are. We're well above average size in the typical natural environment, and especially in the typical sorta-natural-but-sorta-urban environment most of us are in most of the time.

      The local rabbits and squirrels tolerate each other but are pretty scared of me. Of course they are, I'm two hundred times bigger than they are, and much more dangerous. The local foxes are the closest thing we have to an apex predator around here, and they're rightfully terrified of this massive creature that outweighs their entire family combined.

      Imagine wandering through the woods, enjoying the birds tweeting and generally being at one with nature, and then you come across a 20-ton 35ft-tall monster. You'd run away screaming.

dwnw 7 hours ago

"to the monsters we're the the monsters" station eleven

jmull 6 hours ago

So silly.

Of course humans are (by far) the source of the biggest problems humans face.

The argument here is that since humans are the scariest, we should ignore problems AI might cause.

Pure nonsense, since (1) AI is human-created — so just another piece of what makes us scary; (2) this kind of binary thinking makes no sense anyway — as if there can’t be multiple things to be concerned with.

> Anyone trying to tell you otherwise is trying to distract you.

A particularly poor bit of argumentation. Barely above sticking your fingers in your ears and shouting, “Blah, blah, blah! I can’t hear you!”

What is it about AI that makes people lose their minds?

edit: seeing the engagement this is getting, maybe it’s a false flag operation of sorts? Making such bad anti-AI arguments that the argument for AI gets stronger? That would at least make some sense.

XiphiasX 6 hours ago

Thanks for the encouragement.

blamestross 7 hours ago

We have been using the mechanism of "Humans communicating and working together" to form superintelligences for a long time. They have gotten more effective, more efficient and more durable with time.

AI isn't the new monster. It's a new mitochondria being domesticated to supercharge the existing monsters.

  • forgetfulness 7 hours ago

    That reminds me of the “paper clip maximizer” thought experiment about rogue AI fulfilling a particular goal, without being held to any consideration of human welfare

    I thought it was a very novel idea at first, until I realized that this describes all manner of human groups, notably corporations and lobbying groups; they will turn the world into a stove, subvert democracy, (try to) disrupt the economic and social fiber of society, anything to e.g. maximize shareholder value.

    • Nasrudith 6 hours ago

      And corporations are ultimately just sock-puppets for humanity. People forget that and like to other them while simultaneously turning them into scapegoats. See every idiot with the list of companies responsible for global warming that amounts to 'every oil company', ignoring that the customers were the ones burning their products.

      It is to be expected really. Humans themselves hold little consideration to human welfare when fulfilling their goals. It is something ingrained by nature for survival and in no way limited to humans. Every drop of water you drink, every bite you eat, are ones which cannot go to the thirsty and the hungry. With few exceptions, only our children would even give us pause to forgo such things for the sake of others.

      Also, a bit of a pet peeve of mine: society isn't a delicate bolt of laced silk. It isn't a fabric, much less one that is damaged by any little change that you don't like. It isn't even stable, which makes the charge that anything is 'ruining' society especially bizarre when nobody can point to where it was headed before the blamed change. So hold off on poisoning Socrates.

      Even if we hold the current state as worthy of preservation, we would ultimately fail to preserve it, for reasons related to the central paradox of tradition. Your forebears did not first do something out of obedience to tradition, so by trying to preserve it set in stone, you have already failed.

      • pixl97 6 hours ago

        I mean any 'moral' reasoning behind ruining society is probably wrong.

        That said, society can be 'ruined' by everybody in that society dying, either by outside influence or by their own stupidity. So while I discount the moral ruination, the "oh god, oh god, we're all dying" ones I'd like to avoid.

hdseggbj 6 hours ago

The distinction is irrelevant. No one thought the nukes would blow themselves up, but here we are on the precipice of destruction. We aren't going to bury AI in 20-story bunkers to protect LLMs.

Everyone knows the real evil is the people building the AI. We aren't afraid of AGI, we are afraid of Boston Dynamics (aka Google). We are afraid of Bill Gates, Sam Altman, Jeff Bezos, Larry Ellison and Elon Musk.

These are dangerous people. Their mindset. Their unstoppable lust for money and power. We are afraid of their ability to convince humans to do unspeakable things for this imaginary thing we call money.

With AGI they don't have to persuade anyone of anything. There's not even a human conscience to stop them. We know these folks have no conscience of their own. The world is still here because other people do.

AGI won't have a conscience or the integrity to stop evil.

Human society is collapsing because good people are doing nothing.

  • bryanlarsen 5 hours ago

    P.S. Boston Dynamics is no longer owned by Google, it's part of Hyundai now.

    • hdseggbj 5 hours ago

      Ah yes, rings a bell. Doesn't change the sentiment though. They're all crooks and cheaters who will sacrifice the greater good for their own self interest.

      https://en.wikipedia.org/wiki/2025_Georgia_Hyundai_plant_imm...

      It's the fear of death that's killing us all.

      • bryanlarsen 5 hours ago

        There are tons of examples of chaebols behaving evilly in Korea. You don't have to pull out a scenario where half of Americans and most non-Americans think Hyundai are the victims, not the bad guys.

renewiltord 4 hours ago

Humanity is the scariest, yes. But a human is not scary.

WhitneyLand 5 hours ago

For the love of God let’s ignore him being wrong about the impossibility of AGI, and take to heart the nightmarish and inevitable threats coming from how people will use it, that will almost certainly challenge us first.

blueflow 4 hours ago

def written by a clanker

superkuh 6 hours ago

Human beings are not the scariest monsters in the woods. By far the most dangerous, most malicious, and most powerful monsters are conglomerate entities: the legal corporate persons. They've been absolutely ravaging the earth and societies since they achieved legal personhood and rights without any of the downsides or legal liabilities. They're much scarier than individual actual human persons. And while most people blame "AI" for the web's current troubles, AI is only barely involved. The real culprits are the corporations. Again, same as it ever was.

We really need to get rid of corporate personhood. Or at least have a corporate death penalty.

snozolli 7 hours ago

At any time, in any location, under any circumstances, if there’s a human present then that’s the scariest motherfucker in the woods.

This guy has clearly never bumped into a grizzly bear momma, a moose in rut, a hippo, or loads of other animals.

Personally, I'm not even worried about AI itself, I'm worried about the people wielding it. MBAs are the scariest monsters in the woods.

  • Insanity 6 hours ago

    In a 1-to-1 situation you are right: a single human is not the scariest. But humans have (mostly) dominated the animal kingdom, and with enough time we could probably eradicate every other species if we wanted to.

    I guess bacteria and viruses might have a stronger claim for being the 'scariest monster' though. I don't think there's a strong 'living' contender for wiping out humanity apart from those. (Viruses are a bit debatable on the 'living' part, though.)

ekimekim 7 hours ago

As always, there is a relevant XKCD: https://xkcd.com/1968/

  • Filligree 6 hours ago

    We can worry about both. There being additional dangers does not reduce the impact of the other dangers.

    • excalibur 6 hours ago

      True, but one of them is much more imminent than the other.

om8 5 hours ago

... Yet.