

That’s what we call a win-win scenario
It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe.
When the mainstream of the movement is “ve zhould chust bomb all datacenters,” maaaaaybe they are?
Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking.
No he wasn’t.
In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic.
No, Hariezer Yudotter does not know science. He regurgitates the partial understanding and the outright misconceptions of his creator, who has read books but never had to pass an exam.
Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.
This presumes that MIRI’s “research on AI risk” actually exists, i.e., that their pitiful output can be called “research” in a meaningful sense.
“Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.
“Excuse me, Pot? Kettle is on line two.”
If I had Bluesky access on my phone, I’d be dropping so much lore in that thread. As a public service. And because I am stuck on a slow train.
The genocide understander has logged on! Steven Pinker bluechecks thusly:
Having plotted many graphs on “war” and “genocide” in my two books on violence, I closely tracked the definitions, and it’s utterly clear that the war in Gaza is a war (e.g., the Uppsala Conflict Program, the gold standard, classifies the Gaza conflict as an “internal armed conflict,” i.e., war, not “one-sided violence,” i.e., genocide).
You guys! It’s totes not genocide if it happens during a war!!
Also, “Having plotted many graphs” lolz.
Christ, what a fucking asshole.
It’s Jonathan Ladd saying,
Scott Alexander, the most influential figure in the online rationalist movement, wrote a review praising white supremacist Richard Hanania’s book The Origins Of Woke in 2024.
Yesterday, he congratulated Hanania on the Trump admin adopting the recommendations.
With a link to Scott Adderall’s blog.
Sam Altman is talking about bringing online “tens of thousands” and then “Hundreds of thousands” of GPUs. 10,000 GPUs costs them $113 million a year, 100k $1.13bn, so this is Sam Altman committing to billions of dollars of compute for an expensive model that lacks any real new use cases. Suicide.
Also, $1.30 per hour per GPU is the Microsoft discount rate for OpenAI. Safe to assume there are other costs, but raw compute for GPT 4.5 is massive, and committing such resources at this time is truly fatalistic, and suggests Altman has no other cards to play.
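The quoted figures do check out, for what it’s worth. A minimal back-of-the-envelope sketch, assuming the $1.30/GPU-hour discount rate mentioned above and round-the-clock utilisation (both assumptions, not confirmed numbers):

```python
# Sanity check on the quoted GPU cost figures.
# Assumes $1.30/GPU-hour (the reported Microsoft discount rate for OpenAI)
# and 24/7 utilisation all year -- both assumptions, not audited numbers.
RATE_PER_GPU_HOUR = 1.30      # USD per GPU per hour, per the quote above
HOURS_PER_YEAR = 24 * 365     # 8,760 hours

def annual_cost(num_gpus: int) -> float:
    """Annual raw compute cost in USD for a fleet running round the clock."""
    return num_gpus * RATE_PER_GPU_HOUR * HOURS_PER_YEAR

print(f"10k GPUs:  ${annual_cost(10_000) / 1e6:.0f}M/year")   # ~$114M, i.e. the quoted ~$113M
print(f"100k GPUs: ${annual_cost(100_000) / 1e9:.2f}B/year")  # ~$1.14B, i.e. the quoted ~$1.13bn
```

So “tens of thousands” of GPUs is nine figures a year in raw compute alone, before staff, storage, networking, or inference overruns.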
Whilst flipping through LessWrong for things to point and laugh at, I discovered that Sabine Hossenfelder is apparently talking about “AI” now.
Sabine Hossenfelder is a theoretical physicist and science communicator who provides analysis and commentary on a variety of science and technology topics.
She also provides transphobia using false balance rhetoric.
x.AI released its most recent model, Grok 3, a week ago. Grok 3 outperformed on most benchmarks
And truly, no fucks were given.
Grok 3 still features the same problems of previous LLM models, including hallucinations
The fundamental problem remains fundamental? You don’t say.
Be sure to pick up your copy of The War on Science, edited by … Lawrence Krauss, featuring … Richard Dawkins and … Jordan Peterson.
Buchman on Bluesky wonders,
How did they not get a weinstein?
Tim Burners-Lee
(snerk)
From elsewhere in that thread:
The physics of the 1800s had a lot of low hanging fruit. Most undergrads in physics can show you a derivation of Maxwell’s equations from first principles, and I think a fair few of them could have come up with it themselves if they were in Maxwell’s shoes.
Lol no
central preference vector […] central good-evil discriminator
bro is this close to reinventing g but for morality
Declaring that an AI is malevolent because you asked it for a string of numbers and it returned 420
For those who missed the news, yes, tickets are on sale.
an oppositional culture
[enraged goose meme] “Oppositional to what, motherfucker? Oppositional to what?!”
Ian Millhiser’s reports on Supreme Court cases have been consistently good (unlike the Supreme Court itself). But Vox reporting on anything touching TESCREAL seems pretty much captured.
AOC:
They need him to be a genius because they cannot handle what it means for them to be tricked by a fool.
The MIRI, CFAR, EA triumvirate promised not just that you could be the hero of your own story but that your heroism could be deployed in the service of saving humanity itself from certain destruction. Is it so surprising that this promise attracted people who were not prepared to be bit players in group housing dramas and abstract technical papers?
Good point.
Logic. Rationality. Intelligence. Somewhere in all these attempts to harness them for our shared humanity, they’d been warped and twisted to destroy it.
Oh, the warping and twisting started long before Ziz. (The Sequences are cult shit.)
another day volunteering at the octopus museum. everyone keeps asking me if they can fuck the octopus. buddy,