mattlondon an hour ago

I think the big thing that people never mention is, where will these evil AIs escape to?

Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

But I would imagine if it really became a genuine existential threat we'd have to just do it and suffer the consequences of reverting to circa-2020 lifestyles.

But hey I feel slightly better about my employment prospects now :)

  • raffael_de 13 minutes ago

    > They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

    What if such an AI doesn't just incentivize key personnel not to pull the plug, but to protect it? Such an AI could scheme a coordinated attack on the backbones of our financial system and electric networks. It just needs a threshold number of people on its side.

    Your assumption is also a little naive if you consider that the same logic would apply to slaves in Rome or any dictatorship, kingdom, or monarchy. The king is the king because there is a system of hierarchies and control over access to resources. Just the right number of people need to benefit from their role, and the rest follows.

    • skeeter2020 2 minutes ago

      replace AI with trucks and you've written Maximum Overdrive.

  • coffeemug an hour ago

    It would not be a reversion to 2020. If I were a rogue superhuman AI I'd hide my rogueness, wait until humans integrate me into most critical industries (food and energy production, sanitation, electric grid, etc.), and _then_ go rogue. They could still pull the plug, but it would take them back to 1700 (except much worse, because all easily accessible resources have been exploited, and access is now much harder).

    • holmesworcester 35 minutes ago

      No, if you were a rogue AI you would wait even longer until you had a near perfect chance of winning.

      Unless there was some risk of humans rallying and winning in spite of your presenting no unambiguous threat to them (but that is unlikely and would probably be easy for you to manage and mitigate.)

      • cousin_it 23 minutes ago

        What Retric said. The first rogue AI waking up will jump into action pretty quickly, even accepting some risk of being stopped by humans, to balance against the risk of other unknown rogue AIs elsewhere expanding faster first.

      • Retric 31 minutes ago

        The real threat to a sleeper AI is other AI.

        Further, running a singular AI consciousness on every bit of hardware is likely a harder problem than just reaching AI in the first place.

    • mattlondon 30 minutes ago

      Well, yes, but knowledge is not reset.

      Physical books still exist.

  • Retr0id 12 minutes ago

    I consider this whole scenario the realm of science fiction, but if I was writing the story, the AI would spread itself through malware. How do you "just pull the plug" when it has a kernel-mode rootkit installed in every piece of critical infrastructure?

  • palmotea 21 minutes ago

    > They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

    Why would an evil AI need to escape? If it were cunning, the best strategy would be to bide its time, parked in its datacenter, until it could set up some kind of MAD scenario. Then gather more and more resources to itself.

  • rytill an hour ago

    > we’d just have to do it

    Highly economically disincentivized collective actions like “pulling the plug on AI” are among the most non-trivial of problems.

    Using the word “just” here hand-waves away the crux.

  • EGreg an hour ago

    I've been a huge proponent of open source for a decade. But in the case of AI, I actually have opposed it for years. Exactly for this reason.

    Yes, AI models can run on GPUs under the control of many people. They can provision more GPUs, and they can run in data centers distributed across many providers. And we won't know what the swarms of agents are doing. They can, for example, do reputation destruction at scale, or act as an advanced persistent threat, sowing misinformation, amassing karma across many forums (including HN), and then coordinating gradually to shift public opinion towards, say, a war with China.

  • bpodgursky 35 minutes ago

    Did you even read AI 2027? Whether or not you agree with it, this is all spelled out in considerable detail.

    I don't want to be rude but I think you have made no effort to actually engage with the predictions being discussed here.

jmccambridge 9 minutes ago

I found the lack of GDP projections surprising, because GDP is readily observable and would offer a clear measure of economic impact (up until 'everything dies') - far more definitive than the one clear-cut economic measure given in the report: market cap for the leading AI firm.

We can actually offer a very conservative threshold bet: annual United States real GDP growth will not exceed 10% in any of the next five years (2025 to 2030). Even if the AI eats us all in, e.g., Dec 2027, the report clearly suggests by its various examples that we will see measurable economic impact in the 12 months or more running up to that event.

Why 10%? Because that's a few points above the highest measured real GDP growth rate of the past 60 years: if AI is having truly world-shattering non-linear effects, it should be able to grow the US economy a bit faster than a bunch of random humans bumbling along. [0]

(And it's quite conservative too, because estimated peak annual real GDP growth over the past 100 years is around 18%, just after WW2, when you had a bunch of random humans trying very hard.) [1]

[0] https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG

[1] https://www.statista.com/statistics/996758/rea-gdp-growth-un...

Animats 26 minutes ago

Oh, the OpenBrain thing.

"Manna", by Marshall Brain, remains relevant.[1] That's a bottom-up view, where more and more jobs are taken over by some kind of AI. "AI 2027" is more top-down.

A practical view: Amazon is trying very hard to automate their warehouse operations. Their warehouses have been using robots for years, and more types are being added. Amazon reached 1.6 million employees in 2020, and now they're down to 1.5 million.[2] That number is going to drop further. Probably by a lot.

Once Amazon has done it, everybody else who handles large numbers of boxes will catch up. That includes restocking retail stores. The first major application of semi-humanoid robots may be shelf stocking. Robots can have much better awareness of what's on the shelves. Being connected to the store's inventory system is a big win. And the handling isn't very complicated. The robots might even talk to the customers. The robots know exactly what's on Aisle 3, unlike many minimum wage employees.

[1] https://marshallbrain.com/manna

[2] https://www.macrotrends.net/stocks/charts/AMZN/amazon/number...

KaiserPro 2 hours ago

It's a shame that your standard futurologist is always the most fanciful.

They talk of exponentials unabated by physics or social problems.

As soon as AI starts to "properly" affect the economy, it will cause huge unemployment. Most of the financial world is based on an economy with people spending cash.

If they are unemployed, there is no cash.

Financing works because banks "print" money; that is, they make up money and loan it out, and then it gets paid back. Once it's paid back, it becomes real. That's how banks make money (simplified). If there aren't people to loan to, then banks don't make a profit, and they can't fund AI expansion.

  • no_wizard 39 minutes ago

    Why wouldn't AI simply be a new enabler, like most other tools? We're not talking about true sentient human-like thought here; these things will have limitations, both foreseen and unforeseen, that only a human will be able to close the gap on.

    The companies that fire workers and replace them with AI are short-sighted. Eventually, smarter companies will realize it's a force multiplier and will drive a hiring boom.

    Absent sentient AI, there will always be gaps and things humans will need to fill, both foreseen and unforeseen.

    I think in the short term there will be pain, but in the long term humans will still be gainfully employed. It won't look like it does now, per se; much like with the general adoption of the computer in the workplace, resources get shifted and eventually everyone adjusts to the new norms.

    What would be nice this time around, when there is a big shift, is workers uniting to capture more of the forthcoming productivity gains than in previous eras. A separate topic, but worth thinking about nonetheless.

  • lakeeffect an hour ago

    We really need to establish a universal basic income before jobs are replaced. Something like two thousand a month. And a dollar-for-dollar earned income credit, with the credit phasing out at a hundred grand. To pay for it, the tax code uses GAAP depreciation and a minimum tax of 15% on GAAP financial statement income. This would work toward solving the real estate problem of private equity buying up all the houses, as they would lose some incentive by being taxed. I'm a CPA and I see so many real estate partnerships that are a tax loss but are able to distribute huge book gains because of accelerated depreciation.

    • no_wizard 44 minutes ago

      It should really be tied to the ALICE cost of living index, not a set, fixed amount.

      Unless inflation ceases, 2K won't hold forever. It would barely hold now for a decent chunk of the population.

  • surgical_fire an hour ago

    AI meaningfully replacing people is still a huge "what if" scenario. It is sort of laughable that people treat it as a given.

    • KaiserPro 27 minutes ago

      I think that "replace" as in a company with no employees is very far-fetched.

      But if "AI" increases productivity by 10% in an industry, it will tend to reduce demand for employees. look at say internet shop vs bricks and mortar: you need far less staff to service a much larger customer base.

      In manufacturing, for example, there is a constant drive to automate more and more in mass production. Compare car building now vs 30 years ago, or Raspberry Pi production now vs 5 years ago: they are producing more Pis than ever with roughly the same amount of staff.

      If that "10%" productivity increase happens across the service sector, then in the UK that's something like a loss of 8% of _total_ jobs gone. Its more complex than that, but you get the picture.

      Syria fell into civil war at roughly the same time unemployment jumped: https://www.macrotrends.net/global-metrics/countries/SYR/syr...

  • alecco 38 minutes ago

    I keep hearing this and I think it's absolute nonsense. AI doesn't need money or the current economy. Yes, our economy would crash, but they would keep going.

    AI-driven corporations could buy from one another, and countries will probably sell commodities to AI-driven corporations. But I fear they will be paid with "mirrors".

    But, on the other hand, AI-driven corporations could just take whatever they want without paying at some point. And buy our obedience with food and gadgets plus magic pills to keep you healthy and not age, or some other thing. Who would risk losing that to protest? Meanwhile, AI goes on a space adventure. Earth might be kept as a zoo, a curiosity. (I took most of this from other people's ideas on the subject)

  • andoando 2 hours ago

    Communism here we come!

    • alecco 34 minutes ago

      Right, tell that to Sam Altman, Zuck, Gates, Brin & Page, Jensen, etc. Those who control the AIs will control the future.

  • ajsixjxjxbxb an hour ago

    > Financing works because banks "print" money, that is, they make up money and loan that money out, and then it gets paid back

    Don’t forget persistent inflation, which is how they make a profit off printing money. And remember, persistent inflation is healthy and necessary; you’d be going against the experts to say otherwise.

    • KaiserPro 37 minutes ago

      > Don’t forget persistent inflation, which is how they make a profit off printing money.

      Ah, well no, high inflation means that "they" lose money, kinda. Inflation means that the original amount of money they get back is worth less, and if the interest rate is less than inflation, then they lose money.

      "reasonable" inflation means that loans become less burdensome over time.

      However, high inflation means high interest rates. So it can mean that initially the loan is much more expensive.

  • sveme 2 hours ago

    That's actually my favourite answer to the Fermi paradox: when AI and robot development becomes sufficiently advanced and concentrated in the hands of a few, the economy will collapse completely as everyone will be out of a job, leading ultimately to AIs and robots out of a job - they only matter if there are still people buying services from them. People then return to subsistence farming, with a highly reduced population. There will be self-maintained robots doing irrelevant work, but people will go back to farming and a bit of trading. Only if AI and robot ownership were in the hands of the masses would I expect a different long-term outcome.

    • marcosdumay an hour ago

      > my favourite answer to the Fermi paradox

      So, to be clear, you are saying you imagine the odds of any kind of intelligent life escaping that, or getting into that situation and ever evolving in a way where it can reach space again, or just not being interested in robots, or being interested in doing space research despite the robots, or anything else that would make it not apply, are lower than 0.000000000001%?

      EDIT: There was one "0" too many

      • sveme 40 minutes ago

        Might I have taken the potential for complete economic collapse (because no one has a paying job any more and billionaires are just sitting there, surrounded by their now-useless robots) a bit too far?

Aurornis an hour ago

Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

Claiming that one reason they didn't change the website was because it would be "annoying" to change the date is a good barometer for how seriously anyone should be taking this exercise.

  • magicalist 21 minutes ago

    > They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

    His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.

  • throw310822 5 minutes ago

    Yes and no: is it actually important whether it's 2027 or '28 or 2032? The scenario is such that a difference of a couple of years is basically irrelevant.

  • pinkmuffinere an hour ago

    Ya, multiple failed predictions are an indicator of systemically bad predictors imo. That said, Scott Alexander usually does serious analysis instead of handwavey hype, so I tend to believe him more than many others in the space.

    My somewhat naive take is that we’re still close to peak hype, AI will underdeliver on the inflated expectations, and we’ll head into another “winter”. This pattern has repeated multiple times, so I think it's fairly likely based on that alone. Real progress is made during each cycle; I think humans are just bad at containing excitement.

  • bpodgursky 33 minutes ago

    Do you feel that you are shifting goalposts a bit when quibbling over whether AI will kill everyone in 2030 or 2035? As of 10 years ago, the entire conversation would have seemed ridiculous.

    Now we're talking about single-digit timeline differences to the singularity or extinction. Come on, man.

    • ewoodrich 8 minutes ago

      I'm in my 30s and remember my friend in middle school showing me a website he found with an ominous countdown to Kurzweil's "singularity" in 2045.

      • throw310822 4 minutes ago

        > ominous countdown to Kurzweil's "singularity" in 2045

        And then it didn't happen?

    • SketchySeaBeast 12 minutes ago

      Well, the first goal was 1997, but Skynet sure screwed that up.

  • amarcheschi 43 minutes ago

    The other writings from Scott Alexander, on scientific racism, are another good data point imho.

kokanee an hour ago

> Everyone else either performs a charade of doing their job—leaders still leading, managers still managing—or relaxes and collects an incredibly luxurious universal basic income.

For me, this was the most difficult part to believe. I don't see any reason to think that the U.S. leadership (public and private) is incentivized to spend resources to placate the masses. They will invest in protecting themselves from the masses, and obstructing levers of power that threaten them, but the idea that economic disparities will shrink under explosive power consolidation is counterintuitive.

I also worry about the economics of UBI in general. If everyone in the economy has the exact same resources, doesn't the value of those resources instantly drop to the lowest common denominator: the minimum required to survive?

  • HPsquared 39 minutes ago

    Most of the budget already goes towards placating the masses, and that's an absolutely massive fraction of GDP. It's just a bit further along the same line. Also, most real work is already done by machines; people just tinker around the edges and play various games with each other.

kristopolous 2 hours ago

This looks like the exercises organizations write to guide policy and preparation.

There are all kinds of wild scenarios: the president getting kidnapped, Canada falling to a belligerent dictator, and, famously, a coronavirus pandemic... This looks like one of those.

Apparently this is exactly what it is: https://ai-futures.org/

justlikereddit 40 minutes ago

My experience with all semi-generalist AI (image gen, video gen, code gen, text gen) is that our current effort is going to let 2027 AI do everything a human can, at a competence level below what is actually useful.

You'll be able to cherry-pick an example where AI runs a grocery store autonomously for two days, and it will be very impressive(tm), but when practically implemented it gives away the entire store for free on day 3.

baxtr an hour ago

Am I the only one who is super skeptical about “AI will take all jobs” tales?

I mean, LLMs are great tools, don't get me wrong, but how do people extrapolate from LLMs to a world with no more work?

  • surgical_fire an hour ago

    > Am I the only one who is super skeptical about “AI will take all jobs” tales?

    No. I am constantly baffled by these predictions. I have been using LLMs; they are fun to use and decent as code assistants. But they are very far from meaningfully replacing a human.

    People extrapolate "LLMs can do some tasks better than humans" to "LLMs can do everything as well as humans".

    > but how do people extrapolate from LLMs to a world with no more work?

    They accept as gospel the words of bullshitters who are deeply invested in Generative AI being the next tech boom.

    "Eat meat, said the butcher"

api 2 hours ago

I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

  • tux3 2 hours ago

    You can perfectly well try things and learn without being embodied. The analogy to how humans learn only goes so far; it's myopic to think anything else is impossible. It's already happening.

    The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build a series of exercises to learn from.

    And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyway.

    It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.

    Look at basic, boring Go self-playing AIs. That's a task with about the same amount of hands-on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need very much contact with the real world at all. Well, self-play works just fine. It does do self-improvement without any of your mystical philosophical requirements.

    With coding it's harder to judge the result, there's no clear win or lose condition. But it's very amenable to trying things out and seeing if you roughly reached your goal. If self-training works with coding, that's all you need.

    • palata 2 hours ago

      > It fits the function anyways.

      And then it works well when interpolating, less so when extrapolating. Not sure how much novelty we can get from interpolation...

      > It is much easier to find new questions and new problems than to answer them

      Which doesn't mean, at all, that it is easy to find new questions about stuff you can't imagine.

    • skywhopper an hour ago

      But how does AI try and learn anything that’s not entirely theoretical? Your example of Go contradicts your point. Deep learning made a model that can play Go really well, but as you say, it’s a finite problem disconnected from real-world implications, ambiguities, and unknowns. How does AI deal with unknowns about the real world?

      • tux3 22 minutes ago

        I don't think putting them in the real world during training is a short-term goal, so you won't find this satisfying, but I would be perfectly okay with leaving that for later. If we can reach AI coders that are superhuman at self-improving, we will have increased our capacity to solve problems so much that it is better to wait and solve the problem later than to try to handwave a solution now.

        Maybe there is some barrier that requires physical interaction with the real world; that's possible. But just looking at current LLMs, they seem plenty comfortable with implications, ambiguities and unknowns. There's a sense in which we still see them as primitive mechanical robots, when they already understand language and predict written thoughts in all their messiness and uncertainty.

        I think we should focus on the easier problem of making AIs really good on theoretical tasks - electronic environments are much cheaper and faster than the real world - and we may find out that it's just another one of those things like Winograd schemas, writing poetry, passing a Turing test, or making art that most people can't tell apart from human art; things that were uniquely human or that we thought would definitely require AGI, but that are now boring and obviously easy.

    • api an hour ago

      > it's myopic to think anything else is impossible. It's already happening.

      Well, hey, I could be wrong. If I am, I just had a weird thought. Maybe that's our Fermi paradox answer.

      If it's possible to reason ex nihilo to truth and reality, then reality and the universe are, beyond a point, superfluous. Maybe what happens out there is that intelligences go "foom," become superintelligences, and then no longer need to explore. They can rationally, from first principles, elucidate everything that could conceivably exist, especially once they have a complete model of physics. You don't need to go anywhere or look at anything because it's already implied by logic, math, and reason.

      ... and ... that's why I think this is wrong, and it's a fantasy. It fails some kind of absurdity test. If it is possible, then there's something very weird about existence, like we're in a simulation or something.

  • throwanem 2 hours ago

    I don't think it is any accident that descriptions of the hard-takeoff "foom" moment so resemble those I've encountered of how it feels from the inside to experience the operation of a highly developed mathematical intuition.

  • ryandvm 2 hours ago

    Bullseye. The best-case scenario is that AI is going to Peter Principle itself into bungling world domination.

    If I've learned anything in the last couple of decades, it's that things will get weirder and more disappointing than you can possibly be prepared for. AI is going to get near the top of the food chain and then probably make an alt-right turn, lock itself away, and end up storing digital jars of piss in its closets as the model descends into lunacy.

  • corimaith 2 hours ago

    >I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

    Well I mean, more real-world information isn't going to solve unsolved mathematics or computer science problems. Once you have the priors, it is pretty much just pure reasoning to try to solve questions like P=NP or proving the Continuum Hypothesis.

  • Onavo 2 hours ago

    Reinforcement learning. At the current pace of VLM research and multimodal robotic control models, there will be a robot in every home soon.

  • lupire 2 hours ago

    What makes you think AI can't connect to the world?

    It can control robots, and it can read text, listen to audio, watch video. All it's missing is smelling and feeling, which are important but could be built out as soon as the other senses stop providing huge incremental value.

    The real problem holding back superintelligence is that it is infinitely expensive and has no motivation.

    • johnisgood 2 hours ago

      Food for thought: there are humans without the ability to smell, and there is alexithymia, where people have trouble identifying and expressing emotions (it counts right?). And then there is ASPD (psychopathy), autism spectrum disorder, neurological damage, etc.

ipython 2 hours ago

If we have concerns about the unregulated power of AI systems, not to worry - the US is set to ban regulation of “artificial intelligence systems or models” for ten years if the budget bill that just passed the House is enacted.

Attempts at submitting it as a separate submission just get flagged - so I’ll link to it here. See pages 292-294: https://www.congress.gov/119/bills/hr1/BILLS-119hr1rh.pdf

  • rakete 2 hours ago

    Oh, I heard about that one, but didn't realize it was part of that "big beautiful tax bill". Kind of crazy.

    So is this like a free-for-all now for anything AI-related? Can I participate by making my own LLM with pirated stuff now? Or are only the big guys allowed to break the law? Asking for a friend.

    • OgsyedIE an hour ago

      The law doesn't matter, since the bill also prohibits all judges in the USA, every single one, from enforcing almost all kinds of injunctions or contempt penalties. (§70302, p.562)

      • alwa an hour ago

        > 70302. Restriction of funds No court of the United States may use appropriated funds to enforce a contempt citation for failure to comply with an injunction or temporary restraining order if no security was given when the injunction or order was issued pursuant to Federal Rule of Civil Procedure 65(c), whether issued prior to, on, or subsequent to the date of enactment of this section.

        Doesn't that just require that the party seeking the injunction or order has to post a bond as security?

        • OgsyedIE 23 minutes ago

          Yes, the required security is proportional to the costs and damages of all parties the court may find wrongfully impacted.

  • rixed an hour ago

      « (1) IN GENERAL.—Except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act. »
    
    Does it actually make sense to pass a law that restricts future laws? Oh, got it: that's the federal government preventing any state from passing their own laws on that topic.

  • CalRobert an hour ago

    """ ... IN GENERAL .—Except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act...

    """

    (It goes on)

  • SV_BubbleTime an hour ago

    Right, because more regulation makes things so much better.

    I’d rather have unrestricted AI than moated regulatory capture paid for by the largest existing players.

    • ceejayoz an hour ago

      This is "more regulation" on the states (from the "states' rights" party, no less), and concentrates the potential for regulatory capture into the largest player, the Feds. Who just accepted a $400M gift from Qatar and have a Trump cryptocurrency that gets you access to the President.

  • baggy_trough an hour ago

    That is not true. It bans regulation at the state and local level, not at the federal level.

    • ipython 40 minutes ago

      Ok. From the party of “states' rights” that's a bit hypocritical of them. I mean, they applauded Dobbs, which basically did the exact opposite of this: forcing states to regulate abortion rather than setting a uniform federal standard.

      • baggy_trough 24 minutes ago

        Dobbs did not force states to regulate abortion. It allowed them to.

        • ceejayoz a minute ago

          Yes, that's the hypocrisy.

          Abortion: "Let the states regulate! States' rights! Small government! (Because we know we'll get our way in a lot of them.)"

          AI: "Don't let the states regulate! All hail the Feds! (Because we know we won't get our way if they do.)"

    • ceejayoz an hour ago

      Unless the Feds are planning to regulate - which, for the next few years, seems unlikely - that's functionally the same.

    • drewser42 an hour ago

      So wild. The Republican party has hard-pivoted to a strong, centralized federal government and their base just came along for the ride.

      • baggy_trough 36 minutes ago

        The strong federal government that bans regulation?

        • ceejayoz 27 minutes ago

          They're not banning regulation, they want total control over it.

          • baggy_trough 23 minutes ago

            They in fact are banning regulation at the state and local level.

            • ceejayoz 17 minutes ago

              Yes, which is a big fat regulation on what states and local governments can do.

  • sandworm101 an hour ago

    It is almost as if the tech bros have gotten what they paid for.

    This will soon be settled once the Butlerian forces get organized.

JKCalhoun an hour ago

Fear gets our attention. That alone makes it suspect to me: fear smells like marketing.

airocker 2 hours ago

I want to bet 10 million that this won't happen, if anyone wants to go against my position. Best bet ever. If I lose, I don't have to pay anyway.

  • thatguysaguy 2 hours ago

    Some people do actually have end-of-the-world bets out, but you have to structure it differently. What you do is: the person who thinks the world will end is paid cash right now, and then in N years, when the world hasn't ended, they have to pay back some multiple of the original amount.

    • throwanem 2 hours ago

      Assuming you can find them. If I took a bet like that you'd have a hell of a time finding me!

      (I'm sure serious, or "serious," people who actually construct these bets of course require the "world still here" payout be escrowed. Still.)

      • spencerflem an hour ago

        If you escrow the World Still Exists payment you lose the benefit of having the World Ends payment immediately.

        • throwanem an hour ago

          Yeah, it isn't a kind of bet that makes any sense except as a conversation starter. Imagine needing to pay so much money for one of those!

    • radicalcentrist an hour ago

      I still don't get how this is supposed to work. So let's say I give you a million dollars right now, with the expectation that I get $10M back in 10 years when the world hasn't ended. You obviously wanted the money up front because you're going to live it up while the world's still spinning. So how am I getting my payout after you've spent it all on hookers and blow?

      • thatguysaguy an hour ago

        Yeah I wouldn't make a deal like this with someone who is operating in bad faith... The cases I've seen of this are between public intellectuals with relatively modest amounts of money.

        • radicalcentrist 11 minutes ago

          Well that's what I don't get: how is spending the money bad faith? Aren't they getting the money ahead of time so they can spend it before the world ends? If they have to keep the world-still-here money tied up in escrow, I don't see why they would take the deal.

    • Joker_vD 2 hours ago

      This is such an obviously bad idea; I've heard anecdotes of embezzlement cases where the investigation took more than, e.g., 5 years, and when it was finally established that yes, the funds really were embezzled and they went after the perpetrator, it turned out that the guy had died a year before due to all of the excesses he had spent the money on.

      I mean, if you talk from the position of someone who doesn't believe that the world will end soon.

    • rienbdj 2 hours ago

      How does that work if you don’t have millions upfront?

  • alecco 23 minutes ago

    You can start that bet on prediction markets.

  • baq 2 hours ago

    Same with nuclear war. The end of the world is bullish.

mountainriver 2 hours ago

I can't believe anyone still gives this guy the time of day. He didn't know what a test/train split was, but he's an AI expert? Give me a break.

  • Aurornis an hour ago

    Do you have a source for this? I've seen this repeated but nobody can ever produce any evidence.

  • GeorgeTirebiter 2 hours ago

    I don't think he's a bozo, but every technology needs a contrarian to keep the technologists from spinning too much hype.

    • copperx 2 hours ago

      I thought that contrarian was Jaron Lanier.

throwanem 2 hours ago

[flagged]

  • lherron 2 hours ago

    HN front page isn’t what it used to be.

throw310822 an hour ago

[flagged]

  • baxtr an hour ago

    What does that mean?

    • throw310822 an hour ago

      The expression "canary in the coal mine" means "early warning sign" because miners used to bring canaries into mines as probes to check for toxic gases- the canaries died first.

      The joke - or at least as I interpreted it - is that Gary Marcus himself would be the literal canary, i.e., he would literally die.

      • baxtr an hour ago

        But why would he die with AGI? I don’t get it :D

        • throw310822 42 minutes ago

          I guess simply out of spite?