Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)
More grok shit: https://futurism.com/artificial-intelligence/grok-doxxing It, in contrast to most other models, is very good at doxing people.
Amazing how everything Musk makes is the worst in class (and somehow the Rationalists think he will be their saviour (that is because he is a eugenicist)).
the base use for LLMs is gonna be hypertargeted advertising, malware, political propaganda etc
well the base case for LLMs is that, right now
the privacy nerds won’t know what hit them
u wot m8???!!
Show HN: I analyzed 8k near-death experiences with AI and made them listenable
psychic damage warning, obviously
A second post on software project management in a week, this one from deadsimpletech: failed software projects are strategic failures.
A window into another IT disaster I wasn’t aware of, but clearly there is no shortage of those. An Australian one this time.
And of course, without having at least some of that expertise in-house, they found themselves completely unable to identify that Accenture was either incompetent, actively gouging them or both.
(spoiler alert, it was both)
Interesting mention of Clausewitz in the context of management, which gives me pause a bit because techbros famously love The Art of War, probably because Sun Tzu was patiently explaining obvious things to idiots and that works well on them. On War might be a better text, I guess.
A lobster wonders why the news that a centi-millionaire amateur jet pilot has decided to offload the cost of developing his pet terminal software onto peons begging for contributions has almost 100 upvotes, and is absolutely savaged for being rude to their betters
https://lobste.rs/s/dxqyh4/ghostty_is_now_non_profit#c_b0yttk
bring back rich people rolling their own submarines and getting crushed to death in the bathyal zone
enter hashimoto. cringe intensifies
(note: out-of-order to linked post for comment cohesion)
Terminals are an invisible technology to most
what a fucking sentence
…that are hyper present in the everyday life of many in the tech industry.
hyper? like this?
But the terminal itself is boring, the real impact of Ghostty is going to be in libghostty and making all of this completely available for many use cases. My hope is that through building a broadly adopted shared underlayer of terminals around the industry we can do some really interesting things.
oh good so the rentier bridgetroll wants to do just a monopoly play? that’s fine I’m sure. note: I don’t think there’s a more charitable reading of this. those shared underlayers already exist, in the form of decades of protocol and other development. many of them suck and I agree about trying to do better, but I (rather strongly) suspect hashi and I have very different ideas of what that looks like
I’ve already addressed the belittling of the project I really find useful and care about. So let’s just move on to the financial class.
Regardless of my financial ability to support this project, any project that financially survives (for or non-profit) at the whims of a single donor is an unhealthy project
“uwu, think of the poor projects. yes sure I could throw $20m at this in some kind of funny trust and have it live forever but that wouldn’t allow me to evade the point so much!”
I paid a 9-figure tax bill and also donated over 5% of my other stuff to charity this year
“I’m not as bad as the other billionaires I promise”
I’m too fucking old to care about hipster terminals, so I had no idea ghostty was started by a (former) billionaire. If forced to choose a new terminal I will certainly take this fact into consideration.
all things aside, is current ghostty any good, or still an audiophile consolephile-ware?

i’m generally reluctant to try something which reeks of intensive self-promotion, but a few months ago i decided to finally see what’s the hype about, and, well, it’s a terminal emulator.
wezterm does much more, and with a much cleaner ui, and it’s programmable, and the author doesn’t remind me that hashicorp is a thing that exists.
second person today I saw mentioning wezterm, guess I should look sometime for familiarity
I took psychic damage by scrolling up and seeing promptsimon posting a real doozie:
I have been enjoying hitting refresh on https://fuckthisurl/froztbyte-scrubbed-it-intentionally throughout today and watching the number grow - it’s nice to see a clear example of people donating to a new non-profit open source project.
“oooh! look at the vanity project go! weeeee, isn’t having a famous face attached to it fun?” with exactly no reflection on the fucking daunting state of open source funding in multiple other domains and projects
/r/SneerClub discusses MIRI financials and how Yud ended up getting paid $600K per year from their cache.
Malo Bourgon, MIRI CEO, makes a cameo in the comments to discuss Ziz’s claims about SA payoffs and how he thinks Yud’s salary (the equivalent of like 150,000 malaria vaccines) is defensible for reasons that definitely exist, but they live in Canada, you can’t see them.
Guy does a terrible job explaining literally anything. Why, when trying to explain all the SA based drama, does he choose to create an analogy where the former employee is heavily implied to have murdered his wife?
S/o to cinnaverses for mixing it up in there.
“Nah, salary stuff is private”, starting to think this sort of stuff is an idea introduced to protect capital and nobody else.
I was teasing this out in my head to try to come up with a good sneer. First thought: for an organisation that tries to appeal to EAs, you’d think that they would do a good job of being transparent about why so much money is being spent on someone with such low output. But immediate rebuttal: the whole point of the TESCREAL cult shit is that Yud gets free tuocs because he’s the chosen one to solve alignment.
Was thinking more about how the radical, “don’t fall to biases, think for yourself and come here to really learn to think” (so we can stop the paperclip machine and resurrect the dead) crowd defend a half-million-dollar salary with a “that’s private”.
But that is the same conclusion. The prophet must be protected.
there’s some more cursor fun too. no sneers yet, I’ve barely started reading
saw it via jonny who did do some notes
oh I just saw this is almost a month old! still funny tho
(and I’ve been busy af afkspace)
saw this elsewhere. the account itself appears to be a luckey stan account, but the next part:
There’s more crust than air or sea or land… so a vehicle that moves through the crust of the earth is going to be a huge deal
I have built working prototypes of this
so are we talking mining, or The Core (2003)? it feels like he’s trying to pitch it as though it’s a Tiberian Sun style subterranean APC, but I can’t be sure whether I’m reading into it
I’m thinking nydus worms from SC2 or the GLA tunnel system in C&C generals.
right? like, is felon finally getting competition for unhinged billionaire gamerposting?
announcing “leeroy jenkins” mode for grok where it just posts your tweet drafts and you can’t delete them
Major RAM/SSD manufacturer Micron just shut down its Crucial brand to sell shovels in the AI gold rush, worsening an already-serious RAM shortage for consumer parts.
Just another way people are paying more for less, thanks to AI.
This seems like a bit of a desperation pivot while the bubble money is still flowing. I’ve heard they struggled a bit with shipping PCIe CXL memory that’s capable of memory sharing between rackmount nodes, so they’re probably taking everything from the consumer channel and cramming it into the enterprise channel in a bid to be the low-cost/high-volume provider. I would expect them to eventually come limping back into the consumer market to much marketing fanfare, alongside trying to set a higher price floor there, similar to Taco Bell bringing back the Mexican pizza.
Bleugh, I’ve been using crucial ram and flash for a hell of a long time, and they’ve always been high quality and reasonably priced. I dislike having to find new manufacturers who don’t suck, especially as the answer seems to be increasingly “lol, there are no such companies”.
Thanks to the ongoing situation in the US, it doesn’t look like the AI bubble is going to pop soon, but I can definitely see it causing more damage like this before the event.
Nice rant in the entry on OSNews about this… love the phrase “MLMs for unimpressive white males”.
I can see it making sense, what with CPUs moving to integrated RAM, and probably CPU-integrated flash, to maximize speed. The business of RAM and flash drive upgrades will become a very large but shrinking retrocomputing niche probably served by small Chinese fabs.
what with CPUs moving to integrated RAM
Can I blame Apple for this
https://law.justia.com/cases/federal/district-courts/michigan/miedce/4:2025cv11168/384571/176/
Consistent with Magistrate Judge Patti’s warning that each AI citation might incur a cost of $200 per citation, the court adopts that amount and imposes a fine of $300 per Plaintiff (a total of $600) for three misrepresented, AI-generated citations.
lol
Dang that judge was angry.
Here’s docket #170: https://storage.courtlistener.com/recap/gov.uscourts.mied.384571/gov.uscourts.mied.384571.170.0.pdf – the complaining about not being allowed to use AI is on pages 14 and 16 (it’s pretty awful reading; I almost gave up before reaching that point)
a pro se litigant should not be threatened with per-citation fines before any violation.
lmao
…I will freely admit to not knowing the norms of courtroom conduct, but isn’t having preestablished penalties for specific infractions central to the whole concept of law itself?
Apparently we are part of the rising trend of AI denialism
Author Louis Rosenberg is “an engineer, researcher, inventor, and entrepreneur” according to his PR-stinking Wikipage: https://en.wikipedia.org/wiki/Louis_B._Rosenberg. I am sure he is utterly impartial and fair with regards to AI.
Computer scientist Louis Rosenberg argues that dismissing AI as a “bubble” or mere “slop” overlooks the tectonic technological shift that’s reshaping society.
“Please stop talking about the bubble bursting, I haven’t handed off my bag yet”
We are three paragraphs and one subheading down before we hit an Ayn Rand quote. This clearly bodes well.
A couple paragraphs later we’re ignoring both the obvious philosophical discussion about creativity and the more immediate argument about why this technology is being forced on us so aggressively. As much as I’d love to rant about this I got distracted by the next bit talking about how micro expressions will let LLMs decode emotions and whatever. I’d love to know this guy’s thoughts on that AI-powered phrenologist featured a couple weeks ago.
i hereby propose a new metric for a popular publication, the epstein number (Ē), denoting the number of authors who took flights to epstein’s rape island. generally, credible publications should have Ē=0. this one, after a very quick look, has Ē=2, and also hosts sabine hossenfelder.

Absolutely savage 10/10 no notes
it’s some copium of tremendous potency to misidentify public sentiment (https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/) for movement (ignore the “AI experts” these are people surveyed at a certain machine learning conference, really could be substituted by 1000 clones of Sutskever)
github produced their annual insights into the state of open source and public software projects barrel of marketing slop, and it’s as self-congratulatory as it is unreadable and completely opaque.

tl;dr: AI! Agents! AI! Agents! AI! Agents! AI…
Just one thing that caught my attention:
AI code review helps developers. We … found that 72.6% of developers who use Copilot code review said it improved their effectiveness.
Only 72.6%? So why the heck are the other almost 30% of devs using it? For funsies? They don’t say.
You’d think due to self selection effects most people who wouldn’t find using Copilot effective wouldn’t use it.
The only way that number makes sense to me is if people were forced to use Copilot and… no, wait, that checks out.
Etymology Nerd has a really good point about accelerationists, connects them to religion
I like this. Kinda wish it was either 10x longer and explained things a bit, or 10x shorter and was more shitposty. Still, good
New and lengthy sneer from Current Affairs just dropped: AI is Destroying the University and Learning Itself
article is informing me that it isn’t X - it’s Y
Another day, another instance of rationalists struggling to comprehend how they’ve been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy
A very long, detailed post, elaborating very extensively the many ways Anthropic has played the AI doomers, promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don’t really engage with the fact that Anthropic has lied and broken “AI safety commitments” to rationalists/lesswrongers/EAs shamelessly and repeatedly:
I feel confused about how to engage with this post. I agree that there’s a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is “spun” in uncharitable ways.
I think it’s sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.
I would find this all hilarious, except a lot of the regulation and some of the “AI safety commitments” would also address real ethical concerns.
This would be worrying if there were any risk at all that the stuff Anthropic is pumping out is an existential threat to humanity. There isn’t, so this is just rats learning how the world works outside the blog bubble.
I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren’t really so relevant anymore; they served their role in early incubation.
If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don’t think they understand that, given their penchant for 10k word blog posts.
One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don’t care to count the exact number by this point), and not a single one of them has delivered anything into orbit. Fans generally tend to cling on to a few parlor tricks like the “chopstick” stuff. They seem to have forgotten that their goal was to land people on the moon. This goal had already been accomplished over 50 years ago with the 11th flight of the Apollo program.
I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.
I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.
I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc.
I suspect that part of the problem is that there is a company in there that’s doing a pretty amazing job of reusable rocketry at lower prices than everyone else under the guidance of a skilled leader who is also technically competent, except that leader is Gwynne Shotwell, who is ultimately beholden to an idiot manchild who wants his flying cybertruck just the way he imagines it, and cannot be gainsaid.
This looks like it’s relevant to our interests
Hayek’s Bastards: Race, Gold, IQ, and the Capitalism of the Far Right by Quinn Slobodian
https://press.princeton.edu/books/hardcover/9781890951917/hayeks-bastards
Cory Doctorow has plugged this on his blog, which is usually a good signal for me.
He came by campus last spring and did a reading, very solid and surprisingly well-attended talk.
Always thought she should have stuck to acting.
(I know, Hayek just always reminds me of how people put his quotes over Salma Hayek’s image, and people then get really mad at her, and not at him. Always wonder if people would have been just as mad if it was Friedrich’s image and not Salma’s, due to the sexism aspect.)