Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Tennessee(!) leads the way with a bill to make training chatbots a Class A felony.
Hope they get the full-throated support of LW
Reddit /r/artificial freaks out (no clue what alignment that subreddit has): https://old.reddit.com/r/artificial/comments/1slu23a/red_alert_tennessee_is_about_to_make_building/
via HN: https://news.ycombinator.com/item?id=47784650
edit: aww, the coward lawmakers have backed down https://www.wjhl.com/news/tennessee-backs-off-sweeping-artificial-intelligence-limits-opts-for-study-instead/
https://www.cnbc.com/2026/04/15/allbirds-bird-stock-shoes-ai.html
Struggling shoe retailer Allbirds makes bizarre pivot from shoes to AI, stock explodes more than 400%
I had such a hard time coming up with an original joke for this, until I realized the reason why is that Allbirds is stealing jokes from the dotcom bubble in the first place.
The company, valued around $4 billion at its peak, sold its intellectual property and other assets two weeks ago for $39 million. The stock surged over 400%, from under $3 a share up to $13. The shoe company had a market cap of about $21 million Tuesday.
Oh. So, bit of a misleading headline there, CNBC. This wasn’t a real publicly traded company; it was a company on life support that got pivoted by a greedy founder looking to cash in. Cynical move or the delusions of a true believer? Does it matter?
Regardless, the stupidity is too much, the resemblance too striking. Good luck to Allbirds in the totally normal footwear-to-high-tech pivot that is happening in this totally normal economy.
as someone who’s disabled, the idea of “ethical eugenics” pisses me off to no end. There is no ethical eugenics! You’re systematically destroying classes of people because they don’t fit your standards, there is no way to make it ethical when the very core premise involves taking away human rights
Fugly tech-bro shoe company pivots to AI, juicing failing stock.
https://www.seattletimes.com/business/allbirds-soars-373-after-sneaker-firm-rebrands-as-ai-stock/
New Blood in the Machine, about the escalating violence against the slop-mongers.
To distract us from the ongoing cycle of violence and discourse about violence that neither cracks down on nor addresses its causes, may I offer the fruit of today’s YouTube rabbit hole:
Eliezer joins the trend of condemning “political” violence with confidence on the far end of the Dunning-Kruger curve: https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction
I’ve already mocked this attitude down thread and in the previous weekly thread, so I’ll try to keep my mockery to a few highlights…
He’s admitting “nuke the data centers” is in fact violence!
It would be beneath my dignity as a childhood reader of Heinlein and Orwell to pretend that this is not an invocation of force.
But then drawing a special case around it.
But it’s the sort of force that’s meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.
I don’t think Eliezer has checked the news if he thinks the US government carries out violence in predictable or fair or avoidable ways! Venezuela! (It wasn’t fair before Trump, or avoidable if you didn’t want to bend over for the interests of US capital, but it is blatantly obvious under Trump.) The entire lead-up to Iran consisted of ripping up Obama’s attempts at treaties and trying to obtain regime change through surprise assassination! Also, if the Stop AI doomers used some clever cryptography scheme to make their policy of property destruction (and assassination) sufficiently predictable and avoidable, would that count as “Lawful” in Eliezer’s book?
If he kept up with the DnD/Pathfinder source material, he would know Achaekek’s assassins are actually Lawful Evil.
The ASI problem is not like this. If you shut down 5% of AI research today, humanity does not experience 5% fewer casualties. We end up 100% dead after slightly more time.
His practical argument against non-state-sanctioned violence is that we need a total ban (and thus the authority of state driving it), because otherwise someone with 8 GPUs in a basement could invent strong AGI and doom us all. This is a dumb argument, because even most AI doomers acknowledge you need a lot of computational power to make the AGI God. And they think slowing down AGI (whether through violence or other means) might buy time for another sort of solution that is more permanent (like the idea of “solve alignment” Eliezer originally promised them). Lots of lesswrong posts regularly speculate on how to slow down the AI race and how to make use of the time they have, this isn’t even outside the normal window of lesswrong discourse!
Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals
Sources cited: 0
One of the comments also pisses me off:
Which reminds me about another point: I suspect that “bomb data centers” meme causal story was not somebody lying, but somebody recalling by memory without a thought that such serious allegation maybe is worthy to actually look up it and not rely on unreliable memory.
“Drone strike the data centers even if starts nuclear war” is the exact argument Eliezer made and that we mocked. It is the rationalists that have tried to soften it by eliding over the exact details.
Yud says so much, and it’s often so confusing, that I think a lot of his followers don’t know his main messages. It used to be orthodox that you cannot have a two-faced message any more without each audience learning what you say to the others, but that assumed you were a good communicator aiming at a mass audience.
Yud has strange views about legal responsibility:
Anthropic Claude Mythos is already a state-level actor in terms of how much harm it could theoretically have done – given its demonstrated and verified ability to find critical security vulnerabilities in every operating system and browser; and how fast Mythos could’ve exploited those vulnerabilities, with ten thousand parallel threads of intelligent attack. Mythos hypothetically rampant or misused could have taken down the US power grid, say… at the end of its work, after introducing hard-to-find errors into all the bureaucracies and paperwork and doctors’ notes connected to the Internet.
But if you release a virus and it infects people, we don’t hold the virus responsible, we hold you. If you build a car and it explodes when it gets rear-ended, we don’t blame the car, we blame you.
eliezer misses that (as used in the decolonization/civil rights era) nonviolence is effectively a sophisticated propaganda strategy that takes existing injustices and violence and uses them to bait the opponent into attacking you, all while your own people take photos and show the entire world carefully crafted messaging that appeals to the general public’s conscience. the messaging part is extremely important in this. there’s no fucking way this could work for him because his cause is comprehensible only to those who already buy his cult messaging as ground truth. he’s in it just for the moral superiority of being nonviolent. he’s never gonna get it because comprehending it requires touching grass
Yeah both non-violence and pure terrorism are communication forms at the root. I remember reading long ago that the Rote Armee Fraktion’s master plan was:
- commit horrific acts of violence against pillars of the community / rob banks to get money
- said acts would unleash a repressive wave of violence from the state
- the proletariat would see this repressive wave, wake up, and cause the revolution
It kinda stopped at stage 2, because the BRD’s security services were a bit less ex-Nazi than they expected, and also there was basically no proletariat.
Also the Southern police chief who correctly deduced that mass arrests were what the civil rights activists wanted, got the go-ahead from neighboring county jails, and then politely and non-violently arrested everyone protesting and spread them out over a wider area, thus preventing the media-friendly repression that was the goal.
Yeah there are only so many ways to get it going. You don’t hear about the ones that don’t figure it out, because cops bust them, making them look like clowns, and nobody wants to get associated with them afterwards
there is also a barrier between step 2 and 3, because sometimes news like that is suppressed. american school shootings get that treatment sometimes, not to mention all the info filtering at facebook and friends. this is why sympathetic media is an important bit to have in advance. there’s also this bit where any serious insurgency needs money, and it looks like what they got didn’t work out
that southern police chief was, per the blogpost, Laurie Pritchett, and this kind of thinking is also what makes COIN tick. worry not, Hegseth declared it all woke nonsense
But it’s the sort of force that’s meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.
Remember the cartoon of the bombs being dropped on people and the people going ‘I hear the next bombs will be sent by a woman’, this but ‘with lawful force’.
We end up 100% dead after slightly more time.
On a long enough timeframe…
Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals
This is always one of those things that baffles me, and makes it clear to me these people have never even been close to any real movement. All these movements have violent and non-violent parts. Hell, you see it even now with the far right: they have a violent and a non-violent part, and the non-violent part scores points by pointing to their violent friends and going ‘we are not with them’ while going to the same parties, sharing the same ideas, and all being friends with each other. Hell, look at the various LW people who went ‘wow, all these rightwingers in our midst are horrible’ and then not stopping being friends with them. I see now how Sam got the drop on all these naive people.
This feels somehow tied to the whole “agentic” thing I’ve ranted about previously. Like, individual acts of violence are strictly destructive because the people doing them aren’t sufficiently “agentic” to change things, even though American history is full of cases where (usually racist) vigilante violence had a huge impact on people’s decision-making. But when the government does it it’s different, because people in government got there by proving their agency and ability to actually impact the world. Like, it feels almost like he’s offended that the NPCs might try and do something as drastic as killing someone without GM permission.
Meanwhile in reality, people legitimately do feel like they don’t have a lot of options to protect themselves from the real harms this industry is doing, to say nothing of the people who buy his line about the oncoming class-K end-of-life scenario. Anger is an appropriate response to the circumstances we find ourselves in, and in a nation that has been quietly cultivating a culture of heroic violence for decades we shouldn’t be surprised to see people trying to inflict that fear and rage upon the outside world.
in a nation that has been quietly cultivating a culture of heroic violence for decades we shouldn’t be surprised to see people trying to inflict that fear and rage upon the outside world.
Nay, a culture where every citizen is entitled to one armed crashout, and threats of such have been an important lever used by the party that believes in that entitlement for decades.
new odium symposium episode is now available on all platforms. we look at Avgi Saketopoulou and Ann Pellegrini’s Gender Without Identity, a contemporary work of queer psychoanalytic theory. then we look at a case study in which it all goes wrong.
https://www.patreon.com/posts/14-just-call-me-155052365
also we’re starting a discord for the podcast https://discord.gg/7tEEE39Fx
also also we’re going to release our first subscriber episode next week, where we look at the pseudoscientists of paper repository viXra
Psychoanalysis really does seem to push the most obnoxious boundary in academic language. On one hand, it is legitimately valuable to create a specific framework to enable experts to talk about technical elements of the field. It reminds me of the old IT rant about users who think “turn on the computer” means “turn the screen on, no need to touch the actual computer part”. But at the extreme it creates opacity for its own sake and makes it hard for people who haven’t devoted their careers to the field to understand what’s being done. Particularly in a medical or psychiatric field, where the patient is by definition in a lower-information group than the person treating them, this amounts to making it hard for the patient to understand (and therefore consent to) what is being done to them. I am by no means immune to the simple pleasure of knowing something that other people don’t, especially when the outside world reaffirms the value of that knowledge, and there is definitely a place for the specificity that this kind of jargon enables, but psychoanalysis seems to consistently stretch it too far.
Maybe it’s just because I’m rolling back through Age of Mythology, but I died laughing at “it’s like the centaur, Helen”

There is just something so inherently smug and annoying about Mollick. He is one of those low information boosters whose posts sound intellectual until you really think about them.
Tell me more about how the pile of cursed spaghetti that is Claude code is now viable due to model breakthroughs. All I see are hype men saying “the new model is a team of PhDs in your pocket” and then releasing disappointing updates or saying “the new model is too dangerous” because they have some vaporware powered by human crowdsourcing.
Also coding is not like other areas - you can test for hallucinations by compiling and printing and running tests.
I guess my first mistake this morning was opening linkedin
I’ve never understood how these things are simultaneously gaining their abilities based on statistical analysis of all kinds of random writings online including social media, fanfic, reddit, etc. but also are simultaneously supposed to end up as experts rather than a much faster and more agreeable dumbass. Like, the training data may include all the great works of literature, all the scrapable scientific studies and textbooks they could steal, and so on. But it also included every moron who ever shared conspiracy theories on Twitter, every confident-sounding business idiot on LinkedIn, and every stupid word that Scott or Yud ever wrote. Surely the bullshit has to exceed the expertise by raw volume, and if they took the time and energy to curate it out the way they would need to to correct that they wouldn’t be left with a large enough sample to actually scale off of.
Basically, either I’m dramatically misunderstanding something or the best we can hope for is the Average Joe on Reddit, who may not be a complete dumbass but definitely isn’t a team of PhDs.
LLMs generate the next most probable token given the previous context of tokens they have (not an average of the entire internet). And post-training shifts the odds a bit further in a relatively useful direction. So given the right context the LLM will mostly consistently regurgitate content stolen from PhDs and academic papers, maybe even managing to shuffle it around in a novel way that is marginally useful.
Of course, that is only the general trend given the right™ prompt. Even with a prompt that looks mostly right, one seemingly innocuous word in the wrong place might nudge the odds and you get the answer of a moron from /r/hypotheticalphysics in response to a physics question. Or asking for a recipe gets you Elmer’s glue on your mozzarella pizza from a reddit joke answer.
if they took the time and energy to curate it out the way they would need to to correct that they wouldn’t be left with a large enough sample to actually scale off of
They do steps like train the model generally on the desired languages with all the random internet bullshit, and then fine-tuning it on the actually curated stuff. So that shifts the odds, but again, not enough to actually guarantee anything.
So, tl;dr: you’re right, but since it is possible to get somewhat better than average internet junk with curating and post-training and prompting, LLM boosters and labs have convinced themselves they are just a few more iterations of data curation and training approaches and prompting techniques away from entirely eliminating the problem, when the best they can do is make it less likely.
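For anyone who hasn’t seen it spelled out, here’s a toy sketch of that “next most probable token” step (the three-token vocabulary and the logit values are entirely made up; a real model scores tens of thousands of tokens per step, conditioned on the whole context):

```python
import math
import random

# A model emits one raw score (logit) per vocabulary token. These are invented
# numbers standing in for "the sensible answer" vs. the reddit joke answers.
logits = {"photosynthesis": 4.0, "glue": 1.5, "moon": 0.5}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Sampling means the junk tokens never disappear; they just become unlikely.
rng = random.Random(0)
tokens = list(probs)
weights = [probs[t] for t in tokens]
draws = [rng.choices(tokens, weights)[0] for _ in range(1000)]
glue_rate = draws.count("glue") / len(draws)
```

Post-training (fine-tuning, RLHF) moves the logits around so the sensible token usually wins, but as long as “glue” keeps nonzero probability mass, some fraction of answers will be the joke, which is the whole point above.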
“Cursed spaghetti”
🤌
LW stalwart discovers kids get sniffles from daycare, obviously this means women have to stay at home to take care of kids and not work:
https://www.lesswrong.com/posts/byiLDrbj8MNzoHZkL/daycare-illnesses
BTW almost every person born after 1970 in Sweden has been to daycare as a kid, if daycare illnesses had long-term consequences it would be showing up here
They do have consequences! Crowded schools and preschools and daycare spread all kinds of dangerous infectious diseases and there are consequences of getting them. With the arrival of COVID and the decline in vaccination risks are rising.
‘Getting sick builds resistance’ is another of the folk medicine beliefs which we in the infection control community have been fighting since 2020. Some diseases are milder the second or third time, but generally you want to get as few infectious diseases as possible.
Sweden is an interesting example because they pioneered the let-it-rip approach to COVID. That was less disastrous than it could have been but not great even in a country with a lot of detached housing and nuclear families. https://kevinmd.com/2025/01/swedens-controversial-covid-19-strategy-lessons-from-higher-mortality-rates.html I would not have recommended putting children in daycare without strict indoor-air-quality standards between 2020 and 2024.
Covid is an exception, and believe me, if the main victims of Covid had been kids instead of old people stuffed into elder-care facilities, forgotten by everyone, the dynamics around masking and vaccines and lockdowns would have been a lot different.
My point is that most kids in Sweden go to daycare, “daycare sickness” (where the whole family comes down with enteritis etc) is a common thing, and as far as I know the country doesn’t stand out in health stats.
You can argue that the loss of productivity from this is a factor, but as you mention in a parallel comment, the authorities can demand better hygiene and air quality in preschools and schools, and it would be cheaper than outfitting every single home.
In 2026 I would be most concerned about measles.
ok the takes on the attempted firebombing of sama’s mansion are coming in from the rats and those that watch them. Credit to letting stuff marinate, I guess, and/or not working on a weekend
no clue who this dude is, has a slobsuck with a .ai domain, but makes sense:
https://www.campbellramble.ai/p/the-rational-conclusion
Weird MtG scarecrow Zvi plays moral philosopher, invokes multiple authorities on Xhitter:
https://thezvi.substack.com/p/political-violence-is-never-acceptable
the worst kind of violence, the sort against people like me
all those other deaths? those aren’t violence
we also need to care more about property
Right? If it had been some poor schlub manning the security desk at a datacenter, it would have been a blip. But this is a VC we’re talking about!!
The Zvi post really pisses me off for continuing to normalize Eliezer’s comments (in a way that misrepresents the problems with them).
This happened quite a bit around Eliezer’s op-ed in Time in particular, usually in highly bad faith, and this continues even now, equating calls for government to enforce rules to threats of violence, and there are a number of other past cases with similar sets of facts.
Eliezer called for the government to drone strike data centers, even of foreign governments not signatories to international agreements, and even if doing so risked starting nuclear war.
Pacifism is at least a consistent position, but rationalists like Zvi instead want to simultaneously disown the radical actions and legitimize the US’s shit show of a foreign policy.
Another thing that pisses me off is the ahistorical claim by rationalists that such actions are ineffective and unlikely to succeed. Asymmetric warfare and terrorist tactics have obtained success many times in history! The KKK successfully used terrorism to repress a population for a century. The Black Panthers got gun control passed in California and put pressure on political leaders to accept the more peaceful branch of the civil rights movement. The IRA got the Good Friday Agreement. The US revolution! All the empires that have withdrawn from Afghanistan!
Overall though… I guess this is a case of two wrongs making a sorta right. They are dangerously wrong about AI doom, but at least they are also wrong about direct action and so usually won’t take the actions implied by their beliefs. (But they are still, completely predictably, inspiring stochastic terrorists).
Yeah, what the fuck is this passage
If you believe that If Anyone Builds It, Everyone Dies, then you should say that if anyone builds it, then everyone dies. Not moral blame. Cause and effect. Note that this is importantly different from ‘anyone who is trying to build it is a mass murderer.’
(note the rat-tic of wedging in “importantly” as a hedge-adverb)
This deftly evades the main question - how do we ensure that no-one builds it? There’s a host of options, and political violence is one of them. I guess categorically stating it’s off the table is a start, but Zvi has the moral gravitas of a dormouse. If I were of the political violence bent I’d probably commit some just to spite him.
It’s a willful refusal to actually consider the consequences of their beliefs, which is deeply ironic for a bunch that pride themselves on their hardcore consequentialism. Like, even if you just mean “if anyone builds it, everyone dies” as a simple cause and effect, that should imply some kind of action unless you don’t think everyone dying would be bad actually.
how do we ensure that no-one builds it?
Eliezer made a lesswrong post yesterday where he explains that since anyone could build it, lone acts of violence are obviously ineffective and the only solution is the right and proper (“Lawful”, as he calls it, because he has been stuck on DnD since writing Planecrash) state violence which can enforce a worldwide ban (the threshold for which you may recall Eliezer has put at the absurdly low eight 2024-era GPUs).
All their doom scenarios are made-up sci-fi bullshit, so of course they have free rein to pontificate about the right and wrong ways to prevent them. And because they are high on their own sci-fi, they downplay or neglect or misunderstand the real harms of the rising slop sea. Consequently, they fail to grasp the real social reaction to acts of violence.
they fail to grasp the real social reaction
side-note… I wonder what the overlap is between the rationalists who showed up to their stupid “march for billionaires” and AI doomers?
Aella showed up.
Excited for the labor and contract law disputes that this will spawn when the model makes promises that the person won’t keep https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone
But as expected, this is another zuck project that doesn’t have the leg(itimation)s
“All of those embodied agents are seat opportunities,” Jha said, envisioning organizations with more agents than humans — each effectively a user that must pay for a software license, or “seat” in industry lingo.
A company with 20 employees might buy 20 Microsoft 365 licenses today. If each employee gets five AI agents, and the workforce shrinks to 10 people, that could still mean 50 paid seats.
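Spelling out the seat arithmetic in that pitch (a trivial sketch; the function name is mine, the numbers are straight from the quote):

```python
# The pitch in the quote: headcount halves, but every remaining human
# "supervises" five paid agent seats, so the vendor bills more than before.
def billable_agent_seats(humans: int, agents_per_human: int) -> int:
    """Agent seats the vendor gets to charge for, per the quote's framing."""
    return humans * agents_per_human

seats_before = 20                          # 20 employees, 20 licenses
seats_after = billable_agent_seats(10, 5)  # workforce shrinks to 10, 5 agents each
```

So the vendor comes out ahead even after the layoffs, which is the whole pitch; whether anyone actually pays list price per hallucinating inbox is another question.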
Also, it’s apparently enough for an LLM endpoint to be paired with an email inbox to be considered an “embodied agent”, words mean nothing.
Also, it’s apparently enough for an LLM endpoint to be paired with an email inbox to be considered an “embodied agent”, words mean nothing.
This is a very interesting glimpse into the managerial class’ psyche. A person is their email address. Very simple. Why would you need more than that.
It’s ludicrous to pay taxes on the wealth your robots make, but it’s savvy business to charge each software-delimited robot as a separate being - just like charging per-cpu-core was!
Ah right, I need to get a 365 license for word, which comes with a free copilot agent, who needs a 365 license for its copy of word, which comes with a free copilot agent, who needs a …
Now that we’ve got the concept of recursive per-seat licensing established, allow me to invite you to contemplate the possibility of the “licensing macro”
JFC at least wait until you have a de-facto monopoly before musing about extracting the rents! This is capitalism 101.
Ahh sh*t, if all my rent-seeking employee-reducing dreams come true, I’ll lose money on my product subscription rents! Quick! I should come up with bullshit that will solve everything!
Do you get a refund when an “agent” inevitably blows out its context window and starts emitting deranged output, or does that automatically get rolled over into starting up the next “agent”
[ai booster voice] if you last tried a head of AI more than six months ago, you need to try the new model. https://old.reddit.com/r/apple/comments/1ska7kn/apples_ai_chief_john_giannandrea_departs_this_week/
Not really a sneer, just wondering what to make of it, if it doesn’t belong here please remove.
The Financial Times goes with a study which ostensibly demonstrates that ca. half a million potential coding jobs were directly eliminated by AI, not by any other factors or a general industry slowdown. The idea is it’s mainly junior positions being cut, since they aren’t tightly “bundled” with other domains or with the years of programming experience and intuition which are harder for AI to replace. So is AI really fully replacing juniors in the hundreds of thousands, or is there more going on?
or is there more going on?
One idea I’ve read about (heavily developed by Ed Zitron, but also a few other news sources and commentators have put it forward) is that SaaS (Software as a Service) businesses were heavily overinvested in, in expectation of basically infinite growth over the past decade. SaaS growth was “exponential” in its early days, but then the various needs of the market were basically saturated, so SaaS companies squeezed more growth out by cutting costs or upping how much they charged, and now it is finally catching up to them.
The AI hype means almost everyone tries to interpret everything along the lines of AI causing it. The recent price correction in many SaaS companies was (mis)interpreted as the threat of vibe-coded replacements forcing them to cut costs. The SaaS companies trying to cut costs and going through layoffs is being misinterpreted as AI successfully replacing junior devs.
that looks like a heaping pile of correlation, without mentioning the general downturn
Big “Don’t mention the war” energy
There is no cost-cutting in Ba Sing Se
So I don’t have time to read the full paper and I probably don’t have the background to make an informed critique of the methodology once I do (not that that’s gonna stop me). But I feel like the challenge here is in mapping the distinction between junior and senior coding roles. To what extent do the senior coders get treated like a distinct job as opposed to being junior-but-seasoned?
Based on a quick amateur read of the abstract it looks like they’re assuming the first option, that junior and senior developers are separate roles that can be largely disentangled. But if the other option is true, then in the event of a general industry downturn (say, after overhiring during recent periods of unsustainable growth) it might make sense to look at the cuts to junior roles as simply removing the less efficient and effective people from the development role, rather than specifically cutting the juniors because they’re uniquely exposed to AI replacement.
I don’t know which model is more accurate to how the industry treats these roles or whether it varies by organization or what, but that’s what seems like the most likely alternate explanation for the observed shift towards a very senior-heavy workforce.
And seniors obviously grow on senior trees (assuming that this take is actually true)