TheAceOfHearts 11 hours ago

> At the core of most definitions you’ll find the idea of a machine that can match humans on a wide range of cognitive tasks.

I expect this definition will be proven incorrect eventually. It would be better described as "human-level AGI" rather than AGI. AGI is a system that matches a core set of properties, but it's not necessarily tied to capabilities. Theoretically, one could create a very small, resource-limited AGI. The amount of computational resources available to the AGI will probably be one of the factors that determines whether it's, e.g., cat level vs. human level.

  • dr_dshiv 10 hours ago

    That’s like Peter Norvig’s definition of AGI [1], which is defined with respect to general-purpose digital computers. The “general intelligence” refers to the foundation model that can be repurposed to many different contexts. I like that definition because it is clear.

    Currently, AGI is defined in a way where it is truly indistinguishable from superintelligence. I don’t find that helpful.

    [1] https://www.noemamag.com/artificial-general-intelligence-is-...

    • mlyle 7 hours ago

      I think "being able to do as well as a 50th percentile human who's had a little practice," on a wide range of tasks, is a pretty decent measure.

      Yes, that's more versatile than most of us, because most of us are not at or above the median practiced person in a wide range of tasks. But it's not what I think of when I hear "superintelligence," because its performance on any given task is likely still inferior to the best humans.

      • dr_dshiv 3 hours ago

        Remember that AI is a “jagged frontier.”

        AI is already better than a 50th percentile human on many/most intellectual tasks. Chess, writing business plans, literature reviews, emails, motion graphics, coding…

        So, if we say “AI is not AGI” because 1. it can’t do physical tasks, or 2. it can’t yet replace intellectual human labor in most domains (for various reasons), or 3. <insert reason for not being AGI>, then it stands to reason that by the time we reach AGI, it will already be superintelligent (smarter than humans in most domains)

        • mlyle 2 hours ago

          > then it stands to reason that by the time we reach AGI, it will already be superintelligent (smarter than humans in most domains)

          > > Yes, that's more versatile than most of us, because most of us are not at or above the median practiced person in a wide range of tasks. But it's not what I think of when I hear "superintelligence," because its performance on any given task is likely still inferior to the best humans.

          > AI is already better than a 50th percentile human on many/most intellectual tasks. Chess, writing business plans, literature reviews, emails, motion graphics, coding…

          Note the caveat above of "with some practice." That's much less clear to me.

      • fragmede 7 hours ago

        That seems like a personal definition for super intelligence. I don't think I'm alone in assuming super intelligence needs to be greater than all humans for it to be considered super vs "pretty good intelligence".

        • mlyle 5 hours ago

          > > I think "being able to do as well as a 50th percentile human who's had a little practice," on a wide range of tasks, is a pretty decent measure.

          > That seems like a personal definition for super intelligence.

          I was giving a definition for artificial general intelligence as distinguished from super-intelligence, since the poster above said that most definitions of AGI were indistinguishable from super-intelligence.

          To me, a computer doing as well as a practiced human at a wide swath of things is AGI. It's artificial, it's intelligence, and it's at least somewhat general.

          • fragmede 5 hours ago

            Ah. I misread you then. Apologies.

  • pants2 6 hours ago

    While we're posting our favorite definitions of AGI, I like what Francois Chollet once said:

    "AGI is reached when it’s no longer easy to come up with problems that regular people can solve … and AIs can’t."

  • Dylan16807 8 hours ago

    That definition gives me a headache. If it's not up to the level of a human, then it's not "general". If you cut down the resources so much that it drops to cat level, then it's a cut-down model related to an AGI model, and no more.

  • turtletontine 10 hours ago

    What does this even mean? How can we say a definition of “AGI” is “correct” or “incorrect” when the only thing people can agree on is that we don’t have AGI yet?

labrador 10 hours ago

Ray Kurzweil and his "Age of Spiritual Machines", which I read in 1999, are much more to blame than others like Goertzel who came after him, but Kurzweil doesn't get a mention. Kurzweil is also an MIT grad, closely associated with MIT and possibly the MIT Technology Review.

  • chubot 9 hours ago

    Yeah, totally: not a single mention of Kurzweil in this article. I also read “The Age of Spiritual Machines” in 1999 (in college) and skimmed most of his subsequent books.

    Then Kurzweil became my manager’s peer at Google in 2014 or so (actually 2 managers). I remember he was mocked by a few coworkers (and maybe deservedly so, because they had some mildly funny stories)

    So I have been wondering with all the AGI talk why Kurzweil isn’t talked about more. Was he vindicated in some sense?

    I did get a partial answer - one reason is that doomer AGI prophecies are better marketing than Kurzweil’s brand of AGI, which is about merging with machines

    And of course both kinds of AGI prophecies are good distractions from AI ethics, which is more likely to slow investment than to grow it

    • labrador 8 hours ago

      > Was he vindicated in some sense?

      No. He's still saying AGI will demand political rights in 2029. Like Geoffrey Hinton, Kurzweil gets a pass because he's brilliant and accomplished. But also like Hinton, he's wrong about this one issue. With Hinton, it appears to be fear driving his fantasies. With Kurzweil, it's probably over-confidence.

      • Terr_ 8 hours ago

        > > Kurzweil’s brand of AGI, which is about merging with machines

        > With Kurzweil, it's probably over-confidence.

        It's his fear of mortality, which also helps explain the "merging" emphasis.

        Every new Kurzweil "prediction" involves technologies that are just amazing-enough and the timeline just aggressive-enough that they converge into a future where a guy of Ray Kurzweil's age just manages to hop onto the first departure of the train to immortality.

        If y'all have seen any exception to that pattern, please let me know, I'm genuinely curious.

        • labrador 7 hours ago

          Yes, I think you're right. I forgot he was a leading transhumanist.

  • mateo411 9 hours ago

    Yes, the Singularity came from Kurzweil.

    • tim333 9 hours ago

      Well, it predated Kurzweil; the term goes back to John von Neumann in the '50s. But Kurzweil has promoted it.

Terr_ 11 hours ago

Very little I disagree with there, so just nibbling at the edges.

> a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world.

Sometimes 90% of the "hidden truths" are things already "known" by the believers, an elite knowledge that sets them apart from the sheeple. The remaining 10% is acquiring some MacGuffin that finally proves they were Right-All-Along, so that they can take a victory lap.

> Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.

In turn, AGI was the hot new flavor—AI but better!—that companies pivoted to as consumers started getting disappointed and jaded with "AI" that wasn't going to give them robot butlers.

> When those people are not shilling for utopia, they’re saving us from hell.

Yeah, much like how hatred is not really the opposite of love, the "AI doom" folks are really just a side-sect of the "AI awesome" folks.

> But what if there are, in fact, shadowy puppet masters here—and they’re the very people who have pushed the AGI conspiracy hardest all along? The kings of Silicon Valley are throwing everything they can get at building AGI for profit. The myth of AGI serves their interests more than anybody else’s.

Yes, the economic engine behind all this, the potential to make money, is what really supercharges everything and lifts it out of niche communities.

everdrive 10 hours ago

People are interested in consciousness much the same way that we see faces in the clouds. We just think we're going to find it everywhere: in weather patterns, mountains, computers, robots, outer space, etc.

If we were dogs, we'd invent a basic computer and start writing scifi films about whether the computers could secretly smell things. We'd ask "what does the sun smell like?"

ethin 10 hours ago

I always find the claims that we'll have AGI impossible to believe, on the basis that nobody even knows what AGI is. The definition is so vague and hand-wavy that it might as well have never been defined in the first place. As in: I seriously can't think of a definition that would actually work. I'll explain my thought process, because I may be over-analyzing things.

If we define it as "a machine that can match humans on a wide range of cognitive tasks," that raises the questions: which humans? Which range? Which cognitive tasks? I honestly think there is no answer you could give to these three alone that wouldn't cause everything to break down again:

For the first question, if you say "all humans," how do you measure that?

Do we use IQ? If so, then you have just created an AI which is able to match the average IQ of whatever "all" happens to be. I'm pretty sure (though I have no data to prove it) that the vast super-majority of people don't take IQ tests, if they've ever even heard of them. So that limits your set to "all the IQ scores we have". But again... who is "we"? Which test organization? There are quite a few IQ testing centers/orgs, and they all have variations in their metrics, scoring, weights, etc.

If you measure it by some other thing, what's the measurement? What's the thing? And does that risk us spiraling into an infinite debate on what intelligence is? Because if so, the likelihood of us ever getting an AGI is nil. We've been trying to define intelligence for literally thousands of years, and we still can't find a definition that is even halfway universal.

If you say anything other than all, like "the smartest humans" or "the humans we tested it against," well... Do I really need to explain how that breaks?

For the second and third questions, I honestly don't even know what you'd answer. Is there even one? Even if we collapse the second and third questions into "what wide range of cognitive tasks?", who creates the range of tasks? Are these tasks ones any human from, let's say, age 5 onward would be capable of doing? (Even if you answer yes here, what about those with learning disabilities or similar, who may not be able to do whatever tasks you set at that age because it takes them longer to learn?) Or are they tasks a PhD student would be able to do? (If you do this, then you've just broken the definition again.)

Even if we rewrite the definition to be narrower and less hand-wavy (say, an AI which matches some core properties, as was suggested elsewhere in these comments), who defines the properties? How do we measure them? How do we prove that comparing the AI against these properties doesn't cause us to optimize for the lowest common denominator?

Krasnol 11 hours ago

> Ilya Sutskever, cofounder and former chief scientist at OpenAI, is said to have led chants of “Feel the AGI!” at team meetings.

There is...chanting in team meetings in the US?

Has this been going on for long, or is this some new trend picked up from Asia or something like that?

  • teeray 10 hours ago

    It is said chanting pleases the LLM spirits and may bring forth the promised AGI god.

  • fabian2k 11 hours ago

    I don't think that is new. Back when Walmart tried to expand to Germany, it was reported that they had employees do some Walmart chant. As you can guess, this didn't go over well with German employees.

    • aomix 11 hours ago

      I was coming in with the Walmart example too. At the onboarding meeting they told us he'd overheard it at a Korean manufacturer and liked it.

    • Krasnol 10 hours ago

      Yeah, I heard that too, but I assumed this was just a thing in that sector, not something highly paid employees actually have to participate in.

  • uvaursi 11 hours ago

    FEEL THE AGI.

    This is a meme that will keep on giving.

tim333 9 hours ago

The article is kind of silly, like someone in the 1800s saying flying machines are a conspiracy theory. I mean, obviously not: they were a proposed technology that hadn't been built yet but would be in a while. I don't really get the point. I guess it's a discussion point of a sort. Maybe it's like those Weekly World News 'Gay Aliens Found' type headlines that get views for imaginative silliness?

  • xanderlewis 9 hours ago

    But flying machines are well defined, or at least it's easy to come up with a good definition: 'a machine capable of transporting a person from A to B without touching the ground at any point in between', or whatever.

    For AGI, that's very far from being true.

    • tim333 8 hours ago

      Well, there are paper darts and weather balloons, but most people were interested in a powered machine to transport people. Likewise with AGI: I'm guessing most people are thinking of something that can do what people do?

    • nitwit005 8 hours ago

      People did genuinely struggle to define "useful flying machine", which is why the description of the Wright Brothers' flight comes with so much detail: "first controlled, sustained flight of a powered airplane".

JohnMakin 9 hours ago

I largely agree with much of this, but would like to pose the following hypothetical:

Let's say the AGI true believers, the really big players out there, hold two beliefs simultaneously that seem incompatible:

1) AGI will revolutionize humanity and propel us to unimaginable progress

2) AGI will destroy humanity

What if they actually believe 2) is a precursor to 1)? A lot of them seem to be building bunker fortresses on private islands right now. It seems like there is a certain class of very rich person who thinks that yes, this current iteration of civilization is doomed (whether you believe the cause to be nuclear war, climate change, AGI, whatever), but that, by virtue of being Very Smart and Wealthy Human Beings, they can ride it out and create a new civilization built on what was left behind.

I've not taken a deep dive into the writings of these guys, but this kind of attitude seems to ooze from them. They're not concerned, because they think they're the ones who won't be affected. I think history says otherwise: when civilizations undergo catastrophic events, it tends to be the minority ruling classes that get eaten alive, not the other way around. I guess we'll see either way, right or wrong.

jongjong 10 hours ago

Based on my personal experience, I feel like we've already had AGI for some time, just based on how centralized society has become. It feels like the system is not working for the vast majority of people, yet somehow it's still holding together in spite of enormous complexity... It FEELS like there is some advanced intelligence holding things together. Some aspects of the system's functioning seem too clever to be the result of human intelligence.

Also, in retrospect, something doesn't quite add up about the 'AI winter' narrative. It's hard to believe that so many people were studying and working on AI and that it took so long, given that, ultimately, attention is all you need(ed).

I studied AI at university in Australia over a decade ago. The introductory course was great: we learned about decision trees, Bayesian probability, and machine learning, and we wrote our own ANNs from scratch. Then I took the advanced course, expecting to be blown away by the material, but the whole course was about mathematics, with no AI theory; even back then there was a lot of advanced material they could have covered (e.g. evolutionary computation) but didn't... I dropped out after a week or two because of how boring it was.

In retrospect, I feel like the course was made boring and irrelevant on purpose. I remember I even heard someone in my circle mention that the AI winter wasn't real... while we were supposedly in the middle of it.

Also, I remember thinking at the time that evolutionary computation combined with ANNs was going to be the future... so I was kind of surprised that evolutionary computation seemingly disappeared from view... In retrospect, though, I think to myself: progress in that area could potentially lead to unpredictable and dangerous outcomes, so it may not be discussed openly.

Now I think: take an evolutionary algorithm, combine it with modern neural nets with attention mechanisms, and you'd surely get some impressive results. Something like the sketch below.
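
To gesture at what I mean concretely, here is a minimal neuroevolution sketch in plain NumPy: evolve a network's weights by mutation and selection instead of backprop. The toy 2-4-1 MLP and XOR task stand in for a real attention model, and every name and hyperparameter here is illustrative, not a recipe.

    # Minimal neuroevolution sketch: evolve the weights of a tiny
    # 2-4-1 feedforward net by mutation + selection (no backprop).
    # Toy task: fit XOR. All hyperparameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    def init():
        # all weights and biases flattened into one genome vector
        return rng.normal(0, 1, size=2 * 4 + 4 + 4 * 1 + 1)

    def forward(genome, x):
        W1, b1 = genome[:8].reshape(2, 4), genome[8:12]
        W2, b2 = genome[12:16].reshape(4, 1), genome[16]
        return (np.tanh(x @ W1 + b1) @ W2).ravel() + b2

    def fitness(genome):
        return -np.mean((forward(genome, X) - y) ** 2)  # higher is better

    pop = [init() for _ in range(50)]
    for gen in range(300):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]
        # offspring are mutated copies of the fittest individuals
        pop = elite + [e + rng.normal(0, 0.1, size=e.shape)
                       for e in elite for _ in range(4)]

    best = max(pop, key=fitness)
    print(forward(best, X).round(2))  # should approach [0, 1, 1, 0]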

  • mateo411 9 hours ago

    I think the AI winter was over by 2007. There was a lot of hype about machine learning and big data. The Netflix Prize for building a Recommender model launched in 2006. There was research on neural networks and deep belief networks, but they weren't as popular as they are today.

    • uvaursi 4 hours ago

      And then Netflix decided to churn out garbage that no AGI recommendation engine can redeem.

retube 11 hours ago

I had to click through 5 (yes, 5, I counted) pop-up overlays to get to the article (including 2 cookie ones, because I guess the usual one is not infuriating enough)

  • hgomersall 11 hours ago

    I just used reader mode in Firefox. Worked perfectly.

nitwit005 10 hours ago

Imagine someone in the year 1900 started talking about moon landings, and risk of extinction from atomic weapons. A clearly unhinged individual.

That the claims appear extreme and apocalyptic doesn't tell us anything about correctness.

Yes, there are tons of people saying nonsense, but look back at events. For a while it seemed as though AI was improving extremely quickly. People extrapolated from that. I wouldn't call that extrapolation irrational or conspiratorial, even if it proves incorrect.

  • teamonkey 9 hours ago

    If a person in 1900 wrote a novel about landing on the moon, they would be a sci-fi author.

    If they discussed what a future moon landing might be like or how it could work, they would be a futurist.

    If they were raising funds for a moon landing they claimed to be currently working on, with success surely imminent, despite not having any evidence that they could achieve it or that they had beaten the technical hurdles necessary to do so, then they would be seen as a fraud.

    It doesn’t really matter that at some point in the future the moon landings happened.

    • nitwit005 9 hours ago

      The reason for bringing up that idea is clarified by the sentence right after. What is the point of ignoring everything past the first sentence?

scarmig 10 hours ago

> It’s this myth that’s the root of the AGI conspiracy. A smarter-than-human machine that can do it all is not a technology. It’s a dream, unmoored from reality.

So, if you assume that AGI is fake and impossible, it's... A conspiracy. Sure.

Though, if you just finished quoting Turing (and folks like von Neumann), who thought it was possible, it would be good form to offer some reasoning that it's impossible, without alluding to the ineffable human soul or things like that.

  • Libidinalecon 8 hours ago

    The best things I have ever read are von Neumann's ridiculous ideas and predictions about the weather.

    It is the ultimate example of always having to be on guard against argumentum ad verecundiam.

  • Terr_ 10 hours ago

    > if you assume that AGI is fake and impossible

    That seems like a bad straw-man for "AI boosterism has the following hallmarks of conspiratorial thinking".

    > offer some reasoning that it's impossible

    Further on, the author has anticipated your objection:

    > And there it is: You can’t prove it’s not true. [...] Conspiracy thinking looms again. Predictions about when AGI will arrive are made with the precision of numerologists counting down to the end of days. With no real stakes in the game, deadlines come and go with a shrug. Excuses are made and timelines are adjusted yet again.

    • scarmig 10 hours ago

      If it makes you angry that people want to work to build AGI--people who have thought about it a lot more than you--you can't convince them to stop by repeatedly yelling "I don't think it's possible, you're a fool!"

      No more than yelling "electricity is conspiracy thinking/Satan's plaything!" repeatedly would have stopped engineers in the 19th century from studying and building with it.

      • Terr_ 9 hours ago

        > If it makes you angry that people want to work to build AGI

        What's this, a second straw-man? So quickly after the first?

        TFA never condemned invention or hard work, nor does it agree with the "doomers" who consider the target-invention to be fundamentally bad. At most it's a critique of a set of beliefs/rationalizations plus choices made by investors.

        > makes you angry [...] people who have thought about it a lot more than you [...] repeatedly yelling [...] "you're a fool!"

        Who's angry? Who's making it personal?

        I think reading the article made you angry... and you're projecting it onto everybody else.

      • delusional 10 hours ago

        Yet yelling "Why would you have to die from blood loss? We can transfuse some right here" at Jehovah's Witnesses actually does help some.

        We don't have to save everybody, but only by trying do we save some.

AlexandrB 10 hours ago

One thing that struck me recently is that LLMs are necessarily limited by what's expressible in existing language. How can this ever result in AGI? A lot of human progress required inventing new language to represent new ideas and concepts. An LLM's only experience of the world is what can be expressed in words. Meanwhile, even a squirrel has an intuitive understanding of the laws of gravity that is beyond the ability of an LLM to ever experience, because it's stuck in a purely conceptual silicon prison.

  • XenophileJKO 8 hours ago

    I don't know why you would think the model can't create new language. That is a trivial activity. For example, I asked GPT-5 to read the news and make a new word.

    Wattlash /ˈwɒt-læʃ/

    n. The fast, localized backlash that erupts when AI-era data centers spike electricity demand—triggering grid constraints, siting moratoriums, bill-shock fears, and, paradoxically, a rush into fixes like demand-response deals, waste-heat reuse, and nuclear/fusion PPAs.

  • SpicyLemonZest 9 hours ago

    > Meanwhile, even a squirrel has an intuitive understanding of the laws of gravity that are beyond the ability of an LLM to ever experience because it's stuck in a purely conceptual, silicon prison.

    I just don't think that's true. People used to say this kind of thing about computer vision - a computer can't really see things, only compute formulas on pixels, and "does this picture contain a dog" obviously isn't a mathematical formula. Turns out it is!
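
    To be fair to the point, in the most literal sense a modern classifier really is just a formula from pixels to a probability. A toy sketch (a single linear layer with random weights standing in for a real trained network, purely for illustration):

        # "Does this picture contain a dog?" as a literal formula:
        # probability = sigmoid(w . pixels + b). A real model composes
        # millions of such operations; random weights are a stand-in.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        rng = np.random.default_rng(0)
        pixels = rng.random(224 * 224 * 3)          # flattened image
        w = rng.normal(0, 0.01, size=pixels.shape)  # "learned" weights
        b = 0.0

        p_dog = sigmoid(w @ pixels + b)
        print(f"P(dog) = {p_dog:.3f}")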

  • thrance 10 hours ago

    They experience the world through tokens, which can carry more information than just words. Images can be tokenized, and so can sounds, pressure-sensor readings, etc.
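
    For the concrete flavor of it, here is a minimal ViT-style sketch of image tokenization: carve an image into patches, flatten each patch, and project it into the model's token space. The shapes and names are illustrative, not any specific model's; audio and other sensor streams can be chunked and projected the same way.

        # ViT-style image tokenization sketch: an image becomes a
        # sequence of "tokens", just as text does. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.random((224, 224, 3))   # H x W x C stand-in image
        P, d_model = 16, 768              # patch size, embedding width

        # carve into (224/16)^2 = 196 non-overlapping 16x16 patches
        patches = (img.reshape(14, P, 14, P, 3)
                      .transpose(0, 2, 1, 3, 4)
                      .reshape(-1, P * P * 3))   # 196 x 768 raw vectors

        W_embed = rng.normal(0, 0.02, (P * P * 3, d_model))
        tokens = patches @ W_embed        # 196 image "tokens"
        print(tokens.shape)               # (196, 768)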

Copenjin 11 hours ago

Like conspiracies, it works only on the most fragile and on people who already hold an adjacent set of beliefs. AGI/ASI is all bullshit narratives, but yeah, we have useful artifacts that will get better even if they never become A*I.

FridayoLeary 11 hours ago

> And there it is: You can’t prove it’s not true. “The idea that AGI is coming and that it’s right around the corner and that it’s inevitable has licensed a great many departures from reality,” says the University of Edinburgh’s Vallor. “But we really don’t have any evidence for it.”

That's the most important paragraph in the article. All the self-serving, excessive exaggerations of Sam Altman and his ilk, predicting things and throwing out figures they cannot possibly know: "AI will cure cancer, and dementia! And reverse global warming! Just give more money to my company, which is a non-profit and is working for the good of humanity. What's that? Do you mean to say you don't care about the good of humanity?" What is the word for such behaviour? It's not hubris; it's a combination of wild prophecy and severe main-character syndrome.

I heard once, though I have no idea if it's true, that he claims he carries a remote control around with him to nuke his data centres if they ever start trying to kill everyone. Which is obviously nonsense, but is exactly the kind of thing he might say.

In the meantime they're making loads of money by claiming expertise in a field which doesn't even exist and, in my opinion, never will. And that's the main thing, I suppose.

  • Krasnol 11 hours ago

    > I heard once, though I have no idea if it's true, that he claims he carries a remote control around with him to nuke his data centres if they ever start trying to kill everyone.

    That would be quite useless even if it exists, since now that you've said it, the AGI/ASI/AI-something will surely know about it and take appropriate measures!

    • FridayoLeary 11 hours ago

      Oh no! Someone better phone up Sam Altman and warn him of my terrible blunder. I would hate to be the one responsible for the destruction of the entire universe.

dang 10 hours ago

[stub for offtopicness]

  • 7734128 11 hours ago

    Who tolerates a website which immediately pushes three overlapping pop-ups in a free user's face?

    Why would anyone subject themselves to so much hatred? Have some standards.

    • ceejayoz 11 hours ago

      Who raw-dogs the internet without an adblocker? Have some standards.

      • mort96 11 hours ago

        I use uBlock Origin (the full fat version in Firefox, not the lite version). It doesn't help, because the pop-ups aren't ads. There's one asking me if I wanna be spied on, one asking me to subscribe or sign in, and one huge one telling me that there's currently a discount on subscriptions.

        • ceejayoz 10 hours ago

          I've got uBlock Origin on Firefox desktop too, and none of those show. Turn on more of the filter lists in the settings - especially the stuff in the "Cookie notices" and "Annoyances" sections.

      • Workaccount2 10 hours ago

        The people making it still worthwhile to post content online.

      • erikerikson 11 hours ago

        Rumor has it that some people saw the "ads pay for it all" business model and accepted the deal because they wanted the Internet to be sustainable.

        • ceejayoz 10 hours ago

          I mean, that's a two-sided deal. "You watch ads, you read content". But that deal has been more and more broken by the ad networks and websites; a lot of sites are unnavigable without an adblocker now.

          The days of plain text Google AdWords are long, long gone.

        • XorNot 10 hours ago

          If you don't actually buy anything then you're not sponsoring anything.

          In fact generating ad views and not purchasing things from them reduces the value of ads to the website.

          • slg 10 hours ago

            Your logic is built on the common misconception that the only ad that has any value is the last ad you see before purchase.

    • krupan 10 hours ago

      You should probably ask an AI to read it and summarize it for you.

      • lproven 10 hours ago

        I see what you did there, and I approve.

    • hagbard_c 11 hours ago

      Only those who made the mistake of not using a content filter like uBlock Origin or something equally effective. I just visited the site and got neither pop-ups nor ads.

  • oldestofsports 10 hours ago

    To those who say “just use an adblocker”: if your local cafe had a group of waiters beating puppies right inside the entrance, would you just wear earplugs and close your eyes? Oh, how low humanity has sunk when we accept such garbage.

  • oldestofsports 10 hours ago

    Not one but TWO cookie consent banners, one with a huge list of radio buttons to disable, then two more banners, and 3 seconds into reading, once again an ad blocks the screen. Who the heck tolerates this?

  • cocomutator 11 hours ago

    Today is the day I stopped reading opinion pieces from the Technology Review: not for presenting an opinion I don’t agree with, but for mistaking word soup for an argument.

    • boothby 11 hours ago

      There are two competing figures in motion: human intelligence and computer intelligence. Either can win by sufficient reduction of the other.

jahewson 11 hours ago

I very much dislike the way this article blurs religious and doomsday thinking with conspiracy theory thinking. There’s nobody conspiring on the other side of AGI. Other than that it makes many good observations.

SpicyLemonZest 10 hours ago

> Maybe some of you think I’m an idiot: You don’t get it at all lol. But that’s kind of my point. There are insiders and outsiders. When I talk to researchers or engineers who are happy to drop AGI into the conversation as a given, it’s like they know something I don’t. But nobody’s ever been able to tell me what that something is.

They have, including multiple times in this very article, but the author's not willing to listen. As he says later:

> But set aside the technical objections—what if it doesn't continue to get better?—and you’re left with the claim that intelligence is a commodity you can get more of if you have the right data or compute or neural network. And it’s not.

Modern AI researchers have proven that this is not true. They routinely increase the intelligence of systems by training on different data, using different compute, or applying different network architectures. But the author is absolutely convinced that this can't be so, so when researchers straightforwardly explain that they have done this, he's stuck trying to puzzle out what they could possibly mean. He references "Situational Awareness", an essay that includes detailed analyses of how researchers do this and why we should expect similar progress to continue, but he interprets it as a claim that "you don’t need cold, hard facts" because he presumes that the facts it presents can't possibly be true.
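
For what it's worth, the empirical relationship researchers keep pointing to is a scaling law. A commonly cited form is the Chinchilla-style loss curve from Hoffmann et al. (2022); the constants below are roughly their published fits, and the whole thing should be read as a sketch, not gospel:

    # Chinchilla-style scaling law: predicted loss as a function of
    # parameter count N and training tokens D. Constants are roughly
    # the Hoffmann et al. (2022) fits; treat them as illustrative.
    def loss(N, D, E=1.69, A=406.4, alpha=0.34, B=410.7, beta=0.28):
        return E + A / N**alpha + B / D**beta

    # More params or more data -> lower predicted loss, with diminishing returns:
    print(loss(70e9, 1.4e12))  # roughly a Chinchilla-scale training run
    print(loss(7e9, 1.4e12))   # 10x fewer params, same data -> higher loss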

foxfired 10 hours ago

I remember when ChatGPT 3.5 was going to be AGI, then 4, then 4o, etc. It's kinda like doomsday predictions: even if they fail, it's OK, because the next one, though, oh, that's the real doomsday. I, for one, am waiting for a true AI Scotsman [0].

[0]: https://idiallo.com/byte-size/ai-scotsman

  • solumunus 10 hours ago

    I honestly don’t remember many, if any, people saying that at all; those would have been extremely fringe positions.

    • jvelo 10 hours ago

      Robert Scoble was saying something to that effect about the upcoming ChatGPT 4, if I remember correctly.

    • HarHarVeryFunny 8 hours ago

      It started with GPT-2. Not sure if OpenAI really believed it or were just hyping it up, but they initially withheld the public release of GPT-2 because it was "too powerful and dangerous"...

      • solumunus 3 hours ago

        Obvious marketing if you ask me.

    • delusional 10 hours ago

      From Sam's own blog: "We are now confident we know how to build AGI as we have traditionally understood it."

      Another quote: "Trying GPT-4.5 has been much more of a 'feel the AGI' moment among high-taste testers than I expected!"

      • solumunus 3 hours ago

        So, not quite. We’re all aware that some players engage in Musk-style marketing.