Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
New piece from the Wall Street Journal: We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All (archive link)
The piece falls back into the standard “AI Is Inevitable™” at the end, but it’s still a surprisingly strong sneer IMO.
It bums me out that with cryptocurrency/blockchain, and now “AI”, people are afraid to commit to calling it bullshit. They always end with “but it could evolve and become revolutionary!”, which I assume comes from deep-seated FOMO. Journalists especially need more backbone, but that’s asking too much from the WSJ, I know.
I think everyone has a deep-seated fear of both slander lawsuits and, more importantly, of being the guy who called the Internet a passing fad in 1989 or whenever it was. Which seems like a strange attitude to me. Isn’t being quoted for generations some element of the point? If you make a strong claim and are correct, then you might be a genius and spare people a lot of harm. If you’re wrong, maybe some people miss out on an opportunity, but you become a legend.
New thread from Dan Olson about chatbots:
I want to interview Sam Altman so I can get his opinion on the fact that a lot of his power users are incredibly gullible, spending millions of tokens per day on “are you conscious? Would you tell me if you were? How can I trust that you’re not lying about not being conscious?”
For the kinds of personalities that get really into Indigo Children, reality shifting, simulation theory, and the like chatbots are uncut Colombian cocaine. It’s the monkey orgasm button, and they’re just hammering it; an infinite supply of material for their apophenia to absorb.
Chatbots are basically adding a strain of techno-animism to every already cultic woo community with an internet presence, not a Jehovah that issues scripture, but more something akin to a Kami, Saint, or Lwa to appeal to, flatter, and appease in a much more transactional way.
Wellness, already mounting the line of the mystical like a pommel horse, is proving particularly vulnerable to seeing chatbots as an agent of secret knowledge, insisting that This One Prompt with your blood panel results will get ChatGPT to tell you the perfect diet to Fix Your Life
That Couple are in the news again. Surprisingly, the racist, sexist dog holds opinions that a racist, sexist dog could be expected to hold, and doesn’t think poor people should have more babies. He does want Native Americans to have more babies, though, because they’re “on the verge of extinction”, and he thinks of cultural groups and races as exhibits in a human zoo. Simone Collins sits next to her racist, sexist dog of a husband and explains how paid parental leave could lead to companies being reluctant to hire women (although her husband seems to think all women are good for is having kids).
This gruesome twosome deserve each other: their kids don’t.
yet again, you can bypass LLM “prompt security” with a fanfiction attack
https://hiddenlayer.com/innovation-hub/novel-universal-bypass-for-all-major-llms/
not Pivoting cos (1) the fanfic attack is implicit in building an uncensored compressed text repo, then trying to filter output after the fact (2) it’s an ad for them claiming they can protect against fanfic attacks, and I don’t believe them
I think this is unrelated to the attack above, and more about prompt hack security in general, but a while back I heard people in tech mention that the solution to all these prompt hack attacks is to have a secondary LLM look at the output of the first and prevent bad output that way. Which is just another LLM under the trench coat (drink!), but it also doesn’t feel like it would secure a thing; it would just require more complex nested prompthacks. I wonder if somebody is eventually going to generalize how to nest various prompt hacks and just generate a ‘prompthack for an LLM protected by N layers of security LLMs’. The ‘well, protect it with another AI layer’ idea just sounds a bit naive to me, and I was a bit disappointed in the people saying this, who used to be more genAI-skeptical (but money).
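For anyone who hasn’t seen the pattern in the wild, here’s a minimal toy sketch of the “secondary LLM checks the output” idea being described. Everything here is hypothetical: `call_llm` is a stub standing in for whatever chat API these people had in mind, with a keyword filter playing the role of the guard model so the sketch runs at all.

```python
# Toy sketch of the "guard LLM" pattern: generate a reply, then pass it
# through one or more LLM "censors" before showing it to the user.
# call_llm is a hypothetical stand-in, NOT a real API.

def call_llm(system_prompt: str, user_text: str) -> str:
    if "SAFE" in system_prompt:  # pretend this branch is the guard model
        return "UNSAFE" if "bomb" in user_text.lower() else "SAFE"
    return f"Echo: {user_text}"  # pretend this branch is the main model

def guarded_reply(user_prompt: str, n_guards: int = 1) -> str:
    """Generate a reply, then run it past n_guards layers of security LLMs."""
    reply = call_llm("You are a helpful assistant.", user_prompt)
    for _ in range(n_guards):
        verdict = call_llm(
            "Answer SAFE or UNSAFE only. Is this text policy-violating?",
            reply,
        )
        if verdict.strip().upper() != "SAFE":
            return "Sorry, I can't help with that."
    return reply
```

The structural weakness the comment is pointing at is visible in the sketch: every guard is itself an LLM reading attacker-influenced text (`reply`), so a payload smuggled into the reply can address the guard directly, which is exactly the nested prompthack being predicted. Stacking more layers just means stacking more of the same vulnerable component.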
Now I’m wondering if an infinite sequence of nested LLMs could achieve AGI. Probably not.
Now I wonder if your creation ever halts. Might be a problem.
(thinks)
(thinks)
I get it!
r/changemyview recently announced the University of Zurich had performed an unauthorised AI experiment on the subreddit. Unsurprisingly, there were a litany of ethical violations.
(Found the whole thing through a r/subredditdrama thread, for the record)
They targeted redditors. Redditors. (jk)
Ok but yeah that is extraordinarily shitty.
In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible.
If you can’t do your study ethically, don’t do your study at all.
(found here:) O’Reilly is going to publish a book “Vibe Coding: The Future of Programming”
In the past, they have published some of my favourite computer/programming books… but right now, my respect for them is in free fall.
Early release. Raw and unedited.
Vibe publishing.
Just a standard story about a lawyer using GenAI and fucking up, but included for the nice list of services available
https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html
This is not by any means the first time ChatGPT, or Gemini, or Bard, or Copilot, or Claude, or Jasper, or Perplexity, or Steve, or Frodo, or El Braino Grande, or whatever stupid thing it is people are using, has embarrassed a lawyer by just completely making things up.
El Braino Grande is the name of my next ~~band~~ GenAI startup

Hank Green (of Vlogbrothers fame) recently made a vaguely positive post about AI on Bluesky, seemingly thinking “they can be very useful” (in what, Hank?) in spite of their massive costs:
Unsurprisingly, the Bluesky crowd’s having none of it, treating him as an outright rube at best and an unrepentant AI bro at worst. Needless to say, he’s getting dragged in the replies and QRTs - I recommend taking a look, they are giving that man zero mercy.
Just gonna go ahead and make sure I fact check any scishow or crash course that the kid gets into a bit more aggressively now.
I’m sorry you had to learn this way. Most of us find out when SciShow says something that triggers Gell-Mann amnesia. Green’s background is in biochemistry and environmental studies, and he is trained as a science communicator; outside of the narrow arenas of biology and pop science, he isn’t a reliable source. Crash Course is better than the curricula of e.g. Texas, Louisiana, or Florida (and that was the point!) but not better than university-level courses.
Not the usual topic around here, but a scream into the void no less…
Andor season 1 was art.
Andor season 2 is just… Bad.
All the important people appear to have been replaced. It’s everything - music, direction, lighting, sets (why are we back to The Volume after S1 was so praised for its on-location sets?!), and the goddamn shit humor.
Here and there, a conversation shines through from (presumably) Gilroy’s original script, everything else is a farce, and that is me being nice.
The actors are still phenomenal.
But almost no scene seems to have PURPOSE. This show is now just bastardizing its own AESTHETICS.
What is curious though is that two days before release, the internet was FLOODED with glowing reviews of “one of the best seasons of television of all time”, “the darkest and most mature star wars has ever been”, “if you liked S1, you will love S2”. And now actual, post-release reviews are impossible to find.
Over on reddit, every even mildly critical comment is buried. Seems to me like concerted bot actions tbh, a lot of the glowing comments read like LLM as well.
Idk, maybe I’m the idiot for expecting more. But it hurts to go from a labor-of-love S1 which felt like an instruction manual for revolution, so real was what it had to say and critique, to S2 “pew pew, haha, look, we’re doing STAR WARS TM” shit that feels like Kenobi instead of Andor S1.
pic of tweet reply taken from r/ArtistHate. Reminded me of Saltman’s Oppenheimer tweet. Link to original tweet
image/tweet description
Original tweet, by @mark_k:
Forget “Black Mirror”, we need WHITE MIRROR
An optimistic sci-fi show about cool technology and how it relates to society.
Attached to the original tweet are two images, side by side.
On the left/leading side is (presumably) a real promo poster for the newest black mirror season. It is an extreme close-up of the side of a person’s face; only one eye, part of the respective eyebrow, and a section of hair are visible. Their head is tilted ninety degrees upwards, with the one visible eye glazed over in a cloudy white. Attached to their temple is a circular device with a smiling face design, tilted 45 degrees to the left. Said device is a reference to the many neural interface devices seen throughout the series. The device itself is mostly shrouded in shadow, likely indicating the dark tone for which Black Mirror is known. Below the device are three lines of text: “Plug back in”/“A Netflix Series”/“Black Mirror”
On the right side is an LLM generated imitation of the first poster. It appears to be a woman’s 3/4 profile, looking up at 45 degrees. She is smiling, and her eyes are clear. A device is attached to her face, but not on her temple, instead it’s about halfway between her ear and the tip of her smile, roughly outside where her upper molars would be. The device is lit up and smiling, the smile aligned vertically. There are also three lines of text below the device, reading: “Stay connected”/“A Netflix Series”/“Black Mirror”
Reply to the tweet, by @realfuzzylegend:
I am always fascinated by how tech bros do not understand art. like at all. they don’t understand the purpose of creative expression.
Vacant, glassy-eyed, plastic-skinned, stamped with a smiley face… “optimistic”
I mean, if the smiley were aligned properly, it would be a poster for a horror story about enforced happiness and mandatory beauty standards. (E.g., “Number 12 Looks Just Like You” from the famously subtle Twilight Zone.) With the smiley as it is, it’s just incompetent.