Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Saw a six day old post on linkedin that I’ll spare you all the exact text of. Basically it goes like this:
“Claude’s base system prompt got leaked! If you’re a prompt fondler, you should read it and get better at prompt fondling!”
The prompt clocks in at just over 16k words (as counted by the first tool that popped up when I searched “word count url”). Imagine reading 16k words of verbose guidelines for a machine to make your autoplag slightly more claude shaped than, idk, chatgpt shaped.
We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?
I didn’t think I could be easily surprised by these folks any more, but jeezus. They’re investing billions of dollars for this?
- NO OTHER LIBRARIES (e.g. zod, hookform) ARE INSTALLED OR ABLE TO BE IMPORTED.
So apparently this was a sufficiently persistent problem they had to put it in all caps?
- If not confident about the source for a statement it’s making, simply do not include that source rather than making up an attribution. Do not hallucinate false sources.
Emphasis mine.
Lol
Imagine the amount of testing they would have needed to do just to arrive at that prompt. And wait, the prompt gets added as a constant baseline cost on top of the energy cost of running the model. 3 x 12 x 2 x Y additional constant costs on top of that, assuming the prompt doesn’t need to be updated every time the model is updated! (I’m starting to reference my own comments here.)
Claude NEVER repeats or translates song lyrics and politely refuses any request regarding reproduction, repetition, sharing, or translation of song lyrics.
New trick, everything online is a song lyric.
Loving the combination of xml, markdown and json. In no way does this product look like strata of desperate bodges layered one over another by people who on some level realise the thing they’re peddling really isn’t up to the job but imagine the only thing between another dull and flaky token predictor and an omnicapable servant is just another paragraph of text crafted in just the right way. Just one more markdown list, bro. I can feel that this one will fix it for good.
The prompt’s random usage of markup notations makes obtuse black magic programming seem sane and deterministic and reproducible. Like how did they even empirically decide on some of those notation choices?
Claude does not claim that it does not have subjective experiences, sentience, emotions, and so on in the way humans do. Instead, it engages with philosophical questions about AI intelligently and thoughtfully.
lol
What is the analysis tool?
The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.
When to use the analysis tool
Use the analysis tool for:
- Complex math problems that require a high level of accuracy and cannot easily be done with “mental math”
- To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.
uh
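To be fair to the prompt, the borderline case it describes really is a one-liner in the very REPL it’s describing (the numbers below are mine, picked at random, not from the prompt):

```javascript
// 6-digit multiplication, the case the prompt says "would necessitate
// using the tool". Plain Number arithmetic is already exact here, since
// the product is far below Number.MAX_SAFE_INTEGER; BigInt handles
// anything bigger than that.
const product = 123456 * 654321;
const bigProduct = 123456n * 654321n; // same computation as a BigInt
console.log(product, bigProduct);
```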
More of a notedump than a sneer. I have been saying every now and then that there is research and stuff showing that LLMs require exponentially more effort for linear improvements. This post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something like that (I said something different, but the intractability proof/Ingenia theorem might be useful to look into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c
You can make that point empirically just by looking at the scaling that’s been happening with ChatGPT. The Wikipedia page for generative pre-trained transformer has a nice table. Key takeaway: each model (i.e. from GPT-1 to GPT-2 to GPT-3) goes up 10x in tokens and model parameters and 100x in compute compared to the previous one, and (not shown in that table, unfortunately) training loss (log of perplexity) is only improving linearly.
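To make the shape of that concrete, here’s a toy sketch (these numbers are made up to match the pattern described, not pulled from the actual table):

```javascript
// Hypothetical three-generation scaling run: compute goes up ~100x each
// step while training loss only improves by a roughly constant amount.
const compute = [1, 100, 10000]; // relative training compute (assumed)
const loss = [4.0, 3.0, 2.0];    // hypothetical log-perplexity values

// Orders of magnitude of extra compute paid per unit of loss improvement:
const costPerGain = [];
for (let i = 0; i < compute.length - 1; i++) {
  const extraOoms = Math.log10(compute[i + 1] / compute[i]);
  const gain = loss[i] - loss[i + 1];
  costPerGain.push(extraOoms / gain);
}
console.log(costPerGain);
```

If that ratio stays flat (here it does: 2 orders of magnitude of compute per unit of loss), then each further linear gain costs exponentially more compute.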
I think this theorem is worthless for practical purposes. They essentially define the “AI vs learning” problem in such general terms that I’m not clear on whether it’s well-defined. In any case it is not a serious CS paper. I also really don’t believe that NP-hardness is the right tool to measure the difficulty of machine learning problems.
The latest in chatbot “assisted” legal filings. This time courtesy of an Anthropic’s lawyers and a data scientist, who tragically can’t afford software that supports formatting legal citations and have to rely on Clippy instead: https://www.theverge.com/news/668315/anthropic-claude-legal-filing-citation-error
After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai.
Don’t get high on your own AI, as they say.
A quick Google turned up bluebook citations from all the services that these people should have used to get through high school and undergrad. There may have been some copyright drama in the past but I would expect the court to be far more forgiving of a formatting error from a dumb tool than the outright fabrication that GenAI engages in.
I wonder how many of these people will do a Very Sudden opinion reversal once these headwinds disappear
deleted by creator
Movie script idea:
Idiocracy reboot, but its about ai brainrot instead of eugenics.
AI is part of Idiocracy. The automatic layoffs machine, for example. And I do not think we need more utopian movies like Idiocracy.
Trying to remember who said it, but there’s a Mastodon thread somewhere that said it should be called Theocracy. The introduction would talk about the quiverfull movement, the Costco would become a megachurch (“Welcome to church. Jesus loves you.”), etc. It sounds straightforward and depressing.
I can see that working.
The basic conceit of Idiocracy is that it’s a dystopia run by complete and utter morons, and with AI’s brain-rotting effects being quite well known, swapping the original plotline’s eugenicist “dumb outbreeding the smart” setup for an overtly anti-AI “AI turned humanity dumb” setup should be a cakewalk. Given public sentiment regarding AI is pretty strongly negative, it should also be easy to sell to the public.
It’s been a while since I watched Idiocracy, but from recollection, it imagined a nation that had:
- aptitude testing systems that worked
- a president people liked
- a relaxed attitude to sex and sex work
- someone getting a top government job for reasons other than wealth or fame
- a straightforward fix for an ecological catastrophe caused by corporate stupidity being applied and accepted
- health and social care sufficient for people to have families as large as they’d like, and an economy that supported those large families
and for some reason people keep referring to it as a dystopia…
eta
Ooh, and everyone hasn’t been killed by war, famine, climate change (welcome to the horsemen, ceecee!) or plague, but humanity is in fact thriving! And even still maintaining a complex technological society after 500 years!
Idiocracy is clearly implausible utopian hopepunk nonsense.
Yeah but they all like things poor people like, like wrestling, and farts! We can’t have that!
nazi bar owner tinkers with techfash bot trying to vibecode a nazi service on nazi network and gets his crypto stolen https://awful.systems/post/4364989
(this fucker is responsible for Soapbox, which is a frontend used almost invariably by nazi-packed Pleroma instances. among other crimes of similar nature)
Chad move: doing jumping jacks/star jumps in a minefield
all while your fellow minefield-walkers will sell your leftover organs for profit
(also some comments don’t federate in that linked thread)
if you saw that post making its rounds in the more susceptible parts of tech mastodon about how AI’s energy use isn’t that bad actually, here’s an excellent post tearing into it. predictably, the original post used a bunch of LWer tricks to replace numbers with vibes in an effort to minimize the damage being done by the slop machines currently being powered by such things as 35 illegal gas turbines, coal, and bespoke nuclear plants, with plans on the table to quickly renovate old nuclear plants to meet the energy demand. but sure, I’m certain that can be ignored because hey look over your shoulder is that AGI in a funny hat?
The ‘energy usage by a single chatgpt query’ thing gets esp dubious when added to the ‘bunch of older models under a trenchcoat’ stuff. And that the plan is to check the output of an LLM by having a second LLM check it. Sure, the individual 3.0 model might only be 3 whatevers, but a real query uses a dozen of them, twice. (Being a bit vague with the numbers here as I have no access to any of those.)
E: also not compatible with Altman’s story that thanking chatgpt costs millions. Which brings up another issue: a single query is part of a conversation, so now the 3 x 12 x 2 gets multiplied even more.
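Spelling that multiplication out (every number here is a placeholder from the comments above, not a measurement):

```javascript
// Placeholder figures only: the point is how a headline per-model number
// compounds, not the specific values.
const perModelCost = 3;    // "whatevers" per individual model call
const modelsPerQuery = 12; // older models under the trenchcoat
const checkPasses = 2;     // a second LLM checking the first
const turns = 10;          // hypothetical length of one conversation

const perQuery = perModelCost * modelsPerQuery * checkPasses; // the 3 x 12 x 2
const perConversation = perQuery * turns;
console.log(perQuery, perConversation);
```

So the headline “single query” figure is off by the 3 x 12 x 2 factor before you even account for the conversation it sits in.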
I argue that we shouldn’t be tolerant of sloppy factual claims, let alone lies and disinformation, but we also need to keep perspective: it’s worth opposing fascists even if they don’t pollute that much, and it’s worth protecting labor even if the externalities of doing so are fairly negligible. That is, I’ll warrant, a somewhat subtle and nuanced position, but hey. This is my blog, so I get to have opinions that take more than a sentence or two to express!
Apparently we live in a world where “lying and Nazis are both bad, and Nazi liars are the worst” is a nuanced and subtle position. Sneers directed at society rather than the writer, but it was just a big oof moment.
deleted by creator
The Torment Nexus brings us new and horrifying things today - a UN initiative has tried using chatbots for humanitarian efforts. I’ll let Dr. Abeba Birhane’s horrified reaction do the talking:
this just started and i’m already losing my mind and screaming
Western white folk basically putting an AI avatar on stage and pretending it is a refugee from sudan — literally interacting with it as if it is a “woman that fled to chad from sudan”
just fucking shoot me
Giving my take on this matter, this is gonna go down in history as an exercise in dehumanisation dressed up as something more kind, and as another indictment (of many) against the current AI bubble, if not artificial intelligence as a concept.
The stages of genocide:
- Classification
- Symbolization
- Dehumanization
- Discrimination
- Organization
- Polarization
- Preparation
- Persecution
- Extermination
- Denial
AI is the perfect vehicle for genocide
https://www.genocidewatch.com/tenstages
The oil industry estimates 1 billion famine deaths from climate change & they are flooding AI with investment
“The devices themselves condition the users to employ each other the way they employ machines” (Frank Herbert)
@BlueMonday1984 If Edward Said were still with us, this would be worth another chapter in Orientalism. It’s another instance of displacing actual people with a constructed fantasy of them, “othering” them.
Uber but for virtue signalling (*).
(I joke, because other remarks I want to make will get me in trouble).
*: I know this term is very RW coded, but I don’t think it is that bad, esp when you mean it like ‘an empty gesture with a very low cost that does nothing except signal that the person is virtuous.’ Not actually doing more than a very small minimum should be part of the definition imho. Stuff like selling stickers saying you are pro some minority group but only 0.05% of each sale goes to a cause actually helping that group. (Or the rich guy’s charity which employs half his family/friends, or Mr Beast, or the rightwing debate bro threatening a leftwinger with a fight ‘for charity’ (this also signals their RW virtue to their RW audience (trollin’ and fightin’))).
I mean “the right” has managed to corrupt all kinds of fine phrases into dog whistles. I think “virtue signalling” as you have formulated it is a valid observation and criticism of someone’s actions. I blame “liberals” for posturing and virtue signalling as leftist, giving the right easy opportunities to score points.
“Free speech” is now a rightwing dogwhistle, at least for me.
Free speech is the perfect example of a formal liberty anyway. Materially it is entirely meaningless in a society where access to speech is so unequal, and not something worth fighting for in the absolute sense. Fight against the effective censorship of good ideas and minority perspectives instead.
Ahh yes, freeze peaches, buttery males etc.
@BlueMonday1984 omfg. That’s abhorrent.
EWWW WHAT THE FUCK
Thanks I hate it
Quick update on the Conover Catastrophe: the man’s making an attempt to recover his dignity:
In that thread I learned that he went for an interview with the outright fash (Tim Pool), so…yeah.
I don’t think announcing he’s “genuinely grateful” to his newly earned dogpile is helping recover his dignity too much. A simple admission and apology suffice, I don’t need you to go “thank you daddy punish me more” while at it.
I will be watching with great interest. it’s going to be difficult to pull out of this one, but I figure he deserves as fair a swing at redemption as any recovered crypto gambler. but like with a problem gambler in recovery, it’s very important that the intent to do better is backed up by understanding, transparency, and action.
In somewhat lighter news, Fortnite added Darth Vader to the game, and gave him a “conversational AI” to let him talk to players in the voice of James Earl Jones (who I just discovered died last year).
To nobody’s surprise, gamers have already gotten the AI Vader swearing and yelling slurs.
Epic announced that it had pushed a hotfix to address Vader’s unfortunate profanity, saying “this shouldn’t happen again.”
Translator: “We are altering the prompt. We pray that we don’t have to alter it further.”
Ghoul shit on ghoul shit
Satya Nadella: “I’m an email typist.”
Grand Inquisitor: “HE ADMITS IT!”
https://bsky.app/profile/reckless.bsky.social/post/3lpazsmm7js2s
If CEOs start making all their decisions through spicy autocomplete we can directly influence their actions by injecting tailored information into the training data. On an unrelated note, potassium cyanide makes for a great healthy smoothie ingredient for businessmen over 50.
@e8d79
I think it’s time to start writing about how labor unions are good and get as much of that into the ecosystem. Connect them not just with the actual good things they do, but with other absurd things too. Male virility, living longer, better golf scores, etc. Let’s get some papers published in open access business journals about how LLMs perform 472% more efficiently when developed and operated by union members.
@o7___o7 May Day = Leg Day!
New piece from Brian Merchant: De-democratizing AI, which is primarily about the GOP’s attempt to ban regulations on AI, but also touches on the naked greed and lust for power at the core of the AI bubble.
EDIT: Also, that title’s pretty clever
I suspect that the backdoor attempt to prevent state regulation of literally anything that the federal government spends any money on by extending the Volcker rule well past the point of credulity wasn’t an unintended consequence of this strategy.
deleted by creator
Today’s man-made and entirely comprehensible horror comes from SAP.
(two rainbow stickers labelled “pride@sap”, with one saying “I support equality by embracing responsible ai” and the other saying “I advocate for inclusion through ai”)
Don’t have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148
If this is real it would be doubly infuriating, not just because of the AI nonsense, but also because just 3 days ago SAP went bootlicker and announced it was ending its diversity programs.
I advocate for inclusion with equal-opportunity E. coli contamination, that’s why I voted for RFK Jr. We can all spew together!
Ignored the text, go LGBT buster sword.
Inclusion through saving all the consumables for the next boss battle!
So those safety pins that were a thing for a minute were AI Safety pins all along.
I’m mentally filing this next to that clip of a black guy telling someone to just call him the N-word.
Local war profiteer goes on podcast to pitch an unaccountable fortress-state around an active black site (which I assume is for doing Little St James-type activities under the pretext of continued Yankee meddling)
Link to Xitter here (quoted within a delicious sneer to boot)
It’s going to be awesome when American Neo-Guantanamo residents start jumping the wall to get health care.
Building a gilded capitalist megafortress within communist mortar range doesn’t seem the wisest thing to do. But sure, buy another big statue clearly signalling ‘capitalists are horrible and shouldn’t be trusted with money’.