Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
Buckle up, humans, because Humanity’s Last Exam just dropped: https://lastexam.ai/ (Hacker News discussion). May the odds be ever in your favor.
Edit: Per the NYTimes, whom I hate, they were apparently trying to avoid an over-dramatic name. Amazing:
i only want to note that the example chemistry question has two steps out of three that are very similar to the last image in the wikipedia article https://en.wikipedia.org/wiki/Electrocyclic_reaction (the question explicitly mentions that it is an electrocyclic reaction and names the same class of natural product)
e: the exact reaction scheme that answers that question is in an article linked just above that image. taking the last image from the wiki article and one of the schemes from the cited article gives the exact same compound as in the question, and provides the answer. considering how these spicy autocomplete rainforest incinerators work, this sounds like some serious ratfucking, right? you don’t even have to know how any of this works to see it, and it’s an isolated, somewhat obscure subsubfield
You think people would secretly submit easy questions just for the reward money, and that since the question database is so big and inscrutable no one bothered to verify one way or another? No, that could never happen.
that question was sorta related to research done previously by that uploader (not anonymous; how many noahs b. are professors at stanford?) and there are 15 of them, which makes me suspect that he might have just uploaded some exam questions for undergrads there
well, it’s not the most obvious thing, but not because it’s hard: it’s almost trivia, the sort of thing you see once in a textbook and then never use for anything, and it doesn’t readily connect to anything else most of the time. i haven’t done an electrocyclic reaction once in my entire phd programme, and the last time i saw one was in a second-year ochem course. these kinds of reactions are not very controllable or clean, synthesis of the precursors looks like a major PITA, the precursors would probably have to be kept in a freezer under argon for maybe days before they decompose, and introducing any modification requires you to redo the multistep synthesis, after which it might still fail to work. i also suspect this exact example might appear verbatim in some undergrad textbook, and it will be in scihub pdfs at any rate. it’s also kinda old stuff, with research starting in the 60s
Oh yeah I meant “easy” in the sense of “maybe it can get it right from sheer chance by pattern matching training data from the interwebs”
i’d say it was made easy for machines, in that the wisdom woodchipper would “randomly” stumble upon the correct answer while scraping everything related to the more general topic, while it’s harder for humans because it’s rather obscure
oh cool, the logo’s just a barely modified sparkle emoji so you know it’s horseshit, and it’s directly funded by Scale AI and a Rationalist thinktank so the chances the models weren’t directly trained on the problem set are vanishingly thin. this is just the FrontierMath grift with new, more dramatic, paint.
e: also, slightly different targeting — FrontierMath was looking to grift institutional dollars, I feel. this one’s designed to look good in a breathless thinkpiece about how, I dunno…
yeah, whatever the fuck they think this means. this one’s designed to be talked about, to be brought up behind closed doors as a reason why your pay’s being cut. this is vile shit.
… Oh so it’s a training dataset, got you.
Humanity’s last exam, or AI grifters’ first bankruptcy.
just mark C for every answer if you don’t get it, that’s what the State of California taught me in elementary school