Came across this fuckin disaster on Ye Olde LinkedIn by ‘Caroline Jeanmaire at AI Governance at The Future Society’

"I’ve just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027. Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it’s a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.

What makes this forecast exceptionally credible:

  1. One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed

  2. The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio

  3. It makes concrete, testable predictions rather than vague statements that cannot be evaluated

The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.

As the authors state: “It would be a grave mistake to dismiss this as mere hype.”

For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."

Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let’s at least take a look inside for some of their deep quantitative reasoning…

…hmmmm…

O_O

The answer may surprise you!

  • scruiser@awful.systems · 10 hours ago

    Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?

    Committing to a hard timeline at least means that, in two years, making fun of them and explaining to laymen how stupid they are will be a lot easier. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced ~~grifters~~ forecasters know to keep things vaguer so they can retroactively reinterpret their predictions as correct.

    • -dsr-@awful.systems · 9 hours ago

      Every competent apocalyptic cult leader knows that committing to hard dates is wrong because if the grift survives that long, you’ll need to come up with a new story.

      Luckily, these folks have spicy autocomplete to do their thinking!

      I was going to make a comparison to Elron, but… oh, too late.

      • scruiser@awful.systems · 9 hours ago

        I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term “0–2 paradigm shifts,” so he can claim prediction success for stuff LLMs do, and “paradigm shift” is vague enough that he could still claim success if it’s been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).

        • istewart@awful.systems · 7 hours ago

          Huh, 2 paradigm shifts is about what it takes to get my old Beetle up to freeway speed. Maybe big Yud is onto something.