Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • corbin@awful.systems · 1 point · 1 minute ago

    Dan Gackle threatens to quit HN over the community’s reluctance to condemn an act of violence towards Sam Altman:

    I don’t think I’ve ever seen a thread this bad on Hacker News. The number of commenters justifying violence, or saying they “don’t condone violence” and then doing exactly that, is sickening and makes me want to find something else to do with my life—something as far away from this as I can get. I feel ashamed of this community.

    Gackle’s ashamed of people not wanting to protect Altman. Curiously, he doesn’t seem ashamed of openly allowing people with nicknames ending in “88” to post antisemitism, nor of allowing multiple crusty conservatives like John Nagle and Walter Bright to post endorsements of violence against the homeless and queer, nor of allowing posters like rayiner to port entirely foreign flavors of racism like the Indian caste system into their melting pot of bigotry. This subthread takes him to task for it:

    Frankly people calling out a post from a billionaire is a good thing. You would have to be terminally detached from reality to not see how all these festering issues - wealth inequality, injustice, cost of living, future employment etc etc - are starting to come to a head which would cause people to feel something - frustrated, angry, wrathful.

    The rest of that subthread involves Dan demonstrating that he is, in fact, terminally detached from reality. Anyway, I fully endorse Gackle fucking off and buying a farm. While he’s at it, he should consider following the advice of this reply:

    Maybe it’s time to pack it in? I don’t just mean you, I mean that maybe this site has kinda run its course.

  • BurgersMcSlopshot@awful.systems · 5 points · 2 hours ago

    This NPR article opens with a banger of a line:

    In the past few months, AI models have gone from producing hallucinations to becoming effective at finding security flaws in software, according to developers who maintain widely used cyber infrastructure.

    The things still fucking hallucinate; it’s not a feature that’s separable from the model.

  • scruiser@awful.systems · 8 points · 15 hours ago

    Rationalist Infighting!

    tl;dr: one of the MIRI-aligned rationalists (Rob Bensinger) complained that EA actually increased AI risk in the long run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also bad at public communications! Various lesswrongers weigh in, seemingly blind to irony and hypocrisy!

    Some highlights from the quotes of the original tweets and the lesswronger comments on them:

    • Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder: Scott is one of the AI 2027 authors; he really doesn’t have room to complain about rationalists creating crit-hype.

    • Scott Alexander tries claiming SBF was a unique one-off in the rationalist/EA community! (Anthropic’s leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying.)

    • Rob Bensinger indirectly tries to claim Eliezer/MIRI have been serious, forthright, and honest commentators on AI theory and policy, as opposed to Open Phil/EA/Anthropic, which have been “strategic” with their public communication to the point of dishonesty.

    • habryka is apparently on the verge of crashing out? I can’t tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.

    • Loads of tediously long posts, mired in that long-winded rationalist way of talking and full of in-group jargon for conversations and conflict resolution.

    • Disagreement on whether Ilya Sutskever’s $50 billion startup is going to contribute to AI safety or just continue the race to AGI.

    • Arguments over who sides with the EAs vs. Open Philanthropy vs. MIRI!

    • Argument over the definition of gaslighting!

    To be clear, I agree with the complaints about EA and Anthropic; I just also think MIRI has its own similar set of problems. So they are both right: all of the rationalists are terrible at pursuing their nominal goal of stopping AI Doom.

    I did sympathize with one lesswronger’s comment:

    More than any other group I’ve been a part of, rationalists love to develop extremely long and complicated social grievances with each other, taking pages and pages of text to articulate. Maybe I’m just too stupid to understand the high level strategic nuances of what’s going on – what are these people even arguing about? The exact flavor of comms presented over the last ten years?

    • CinnasVerses@awful.systems · 7 points · 11 hours ago

      Bonus race pseudoscience quoted by No77e!

      There is a phenomenon in which rationalists sometimes make predictions about the future, and they seem to completely forget their other belief that we’re heading toward a singularity (good or bad) relatively soon. It’s ubiquitous, and it kind of drives me insane. Consider these two tweets:

      Richard Ngo @RichardMCNgo: Hypothesis: We’ll look back on mass migration as being worse for Europe than WW2 was. … high-trust and homogeneous … ethno-religious fractures

      Liv Boeree: Would not be surprised if it turns out that everyone outsourcing their writing to LLMs will have a similar or worse effect on IQ as lead piping in the long run

      (he shares these tweets as photos; I ain’t working any harder to transcribe them or using a chatbot)

    • CinnasVerses@awful.systems · 7 points · 11 hours ago

      Old Twitter was terrible for people’s souls. I can only imagine what it is like now that the well-meaning professionals are gone and catturd and Wall Street Apes are the leading accounts.

      • scruiser@awful.systems · 3 points · 26 minutes ago

        Old Twitter was terrible for people’s souls.

        It almost makes me feel sorry for the way the rationalists are still so attached to it. But they literally have two different forums (lesswrong and the EA forum), so staying on twitter is entirely their choice; they have alternatives.

        Fun fact! Over the past few years, Eliezer has deliberately cut back his lesswrong posting in favor of posting on twitter, apparently (he’s made a few comments about this choice) because lesswrong doesn’t uncritically accept his ideas and nitpicks them more than twitter does. (How bad do you have to be to not even listen to critique on a website that basically loves you and takes your controversial foundational premises seriously?)

      • istewart@awful.systems · 3 points · 8 hours ago

        I’m willing to go out on a limb and say that short-form social media in general (Twitter and imitators, Instagram, TikTok) is essentially a failed set of media. But I’ll concede that’s like cramming a Zyn pouch in my mouth while making fun of a guy chain-smoking Marlboros.

  • YourNetworkIsHaunted@awful.systems · 8 points · 1 day ago

    Found an interesting take on YouTube, of all places. Her argument can be summarized (with high compression losses) as “AI companies and technologies are bad for basically all the reasons that non-cultist critics say, but trying to shame and argue people out of using them entirely is less effective than treating them as a normal tool with limitations and teaching people how to limit the harm.” She makes the analogy to drug policy.

    I think she makes a very compelling argument, and I’m still digesting it a bit because I definitely had the knee-jerk urge to reject her as an insider shill, but especially towards the end, as she talks about how the AI industry targets low-literacy users as ideal customers (because the more you know about it, the less likely you are to actually use these things), I found myself agreeing more than not. I do wish she had addressed the dangers of cognitive offloading more, since being mindful of which tasks you’re letting the computer do for you is a pretty significant part of minimizing those harms, especially for students and some professionals who face a strong incentive to just coast by on slop if they can get away with it.

    • Evinceo@awful.systems · 7 points · 24 hours ago

      I feel like there’s a difference between alcohol and drugs, things people can make in their back yard, and AI, which requires a first-world country’s entire economy to be oriented towards it to function… a difference in what we should be required to accept.

      I don’t buy the general argument about shame either. We teach children to shit in toilets and not sidewalks. I see rampant AI use as just another form of disgusting public indecency and the faster we bring shame in to remedy it the better.

      • YourNetworkIsHaunted@awful.systems · 4 points · 8 hours ago

        I don’t disagree about the massive costs necessarily associated with this industry. Even the smaller and lighter models she mentions only exist because of the massive fuckers. At the same time, I think those arguments belong to the realm of public policy more than to the individual choice to use chatbots or not. We’ve talked at length here over the last year or so about how the economics of the bubble are driven largely by a broken B2B SaaS pipeline that separates purchasing decisions from actually having to use the products, and by an investment capital sector desperately trying to recapture the glory days of the pre-2008 omnibubble, throwing obscene amounts of money at anything with the right narrative regardless of the numbers. I feel like that keeps happening regardless of how many individual users fall for the hype and make it part of their normal workflows.

        I feel like the analogy to the drug trade is still pretty relevant, given the violence and predation that the black market pretty much inevitably attracts and sustains. Like, maybe you know a guy who has his own grow op or whatever, but cocaine and heroin money is going through the cartels at some point in the chain, and they’re going to use some portion of it for bullets that end up in some journalist’s kids or something. The downstream harms are massive, even if the drug industry could theoretically avoid them in ways the AI industry can’t, but any given individual user’s contribution to them is incredibly minor, and given the addictive and self-destructive nature of the product, it’s both more humane and more effective to treat them as a victim of a broken world that (falsely) offered this as a step up. While I don’t think we should allow slop to infest every forum any more than addicts should be allowed to shoot up on every corner, I think that if shaming makes people less likely to acknowledge that they’re going down a dead-end road and reach out to their communities and support networks for help addressing the root of what drove them to these maladaptive antisolutions in the first place, then shaming is making things worse, not better.

        Also as the father of a small child I can unfortunately say from recent personal experience that shaming, be it public or private, is far less effective as a means of motivating behavioral change than we want it to be, even for things as basic as not shitting on the goddamn lawn.

    • V0ldek@awful.systems · 16 points · 1 day ago

      I think that’s 100% correct and also it’s year 3 of this nonsense and I cannot be fucked. My response to genAI in any context now is to scream and start doing jumping jacks.

      Imagine the drug-policy context, but then also half of your colleagues are doing meth every time you see them, people say shit like “everyone does meth, those who say they don’t are lying”, and meth is a trillion-dollar industry that has been telling you “meth is the future” for years. You’d be much less inclined to argue calmly against meth and much more inclined to start screaming and jumping.

    • jaschop@awful.systems · 5 points · 1 day ago

      Sounds kind of like the Baldur Bjarnason strategy but for your coworkers instead of your boss.

      I can see the value of someone with a critical understanding diving into the technology, so they can talk others down from the ledge.

      But you also need the social pressure to maintain some slop-free spaces. Not everyone can be asked to accommodate recovering slopaholics.

    • gerikson@awful.systems · 5 points · 1 day ago

      Kind of a pseudo-sneer; the author is writing a

      blog on machine learning engineering, compound AI systems, search and information retrieval, and recsys — exploring machine learning, LLM agents, and data science insights from startups to enterprises.

      Here’s the discussion on the red site: https://lobste.rs/s/nmhkdl/ai_great_leap_forward
      Plenty of people suspect the text is LLM-generated. Pangram disagrees, fwiw.

      I do think there are some interesting ideas here about how humans will “defend” themselves from being replaced by bots, and about how the critical info in a company is seldom in the source code, but in the customer relationships, sales, etc.

      • Evinceo@awful.systems · 7 points · 1 day ago

        It seems vaguely AI-flavored to me, inasmuch as it uses contrasts too much (“it’s not x, it’s y”) and it’s way too verbose. Also, it’s obviously wrong, at least in my experience: middle managers aren’t the sparrows; individual contributors (especially juniors) are.

        Maybe that’s just a symptom of a person reading too much AI text and thinking a good tweet would make a great substack.

        • YourNetworkIsHaunted@awful.systems · 2 points · 8 hours ago

          Yeah, they lost me at the middle-managers bit too. In my experience your manager is probably the one pushing the metrics to show their team’s contributions to the knowledge base that is feeding the AI model that’s replacing them. They’re already creatures of the bureaucracy, and they’re more likely to fight each other over the few remaining roles that will exist after the majority of their teams are replaced with the confabulatron than to be concerned about their own replacement. After all, their job only stops existing once their team gets downsized, but how long they keep that job may depend on their enthusiastic participation in the process that leads there.

  • BurgersMcSlopshot@awful.systems · 10 points · 2 days ago

    Work wants to add that new whiz-bang agentic AI into a scheduling service that I have been tasked with building, but in the dumbest way possible, kind of similar to Jet’s text-a-pizza-order thing that worked like shit. I need to find an entirely new profession, everyone in software now is fucking deranged.

    • Evinceo@awful.systems · 5 points · 24 hours ago

      I need to find an entirely new profession, everyone in software now is fucking deranged.

      Mood

    • Sailor Sega Saturn@awful.systems · 11 points · 2 days ago

      It’s bad for me too.

      I’m trying to hang in there until I get some healthcare stuff taken care of over the next year or two, but it is getting increasingly difficult. Most of the good people at my job have been driven out, quit, or been poached by other (AI) companies.

      By this point a majority of the programmers at my job (or at least the ones most active on the mailing lists) are LLM true believers who think that the end times are near. My management chain has explicitly said that LLM programming is required, and that a subsequent increase in “productivity” is expected with it. My department got renamed to something with “AI” in the name. I constantly field questions from people who want me to read a screen full of LLM nonsense, or who push back when I tell them something, claiming that the chatbot said differently.

      There’s always some frantic push to adopt “MCP” or “Skills” or whatever the next fad will be without any guidance as to how or why. If I ignore this I get nastygrams from my manager.

      And at my last doctor visit I had elevated blood pressure :)

      • Soyweiser@awful.systems · 8 points · 2 days ago

        and that a subsequent increase in “productivity” is expected with it.

        Oh no… they def will blame the users before blaming the faulty tools. Hope you will not be the one who gets blamed as a wrecker or something when the expected increase isn’t there (or other metrics fall off a cliff).

    • Soyweiser@awful.systems · 6 points · 2 days ago

      Up next, when the first agent fails, implement an agent that checks the other agent. Both of these need agents to check for malicious inputs of course. And translation agents.
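
      A purely illustrative sketch of where that logic ends up (no real agent framework implied; every name here is made up):

        # Purely illustrative: every agent gets a checker agent,
        # and every checker needs its own checker...
        def make_agent(name: str, checkers_remaining: int):
            def agent(task: str) -> str:
                result = f"{name} handled {task!r}"
                if checkers_remaining > 0:
                    checker = make_agent(f"checker-of-{name}", checkers_remaining - 1)
                    result += " | " + checker(f"verify output of {name}")
                return result
            return agent

        print(make_agent("scheduling-agent", 2)("book the 3pm slot"))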

  • picklefactory@awful.systems · 6 points · 2 days ago

    I run an email server for myself, and every once in a while the UCE starts leaking through until I have a few training examples to feed the filter. In the last couple weeks I noticed that basically all of the escapees look like fancy Claude output telling me that I should be enticed by Costco gift cards and free chicken sandwiches.

    What I suppose this means is that if you use these tools to generate material in that same snappy variety of output template (“but seriously”), you will nonetheless eventually reach aesthetic convergence with meaningless spam. Is there a term for this yet? “Slop-ratchet” is the one that sprang immediately to mind, but I am sure someone else noticed this tendency long before I did.
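
    For what it’s worth, the retrain-on-escapees loop above looks roughly like this as a naive Bayes sketch (the actual filter in use isn’t named; the folder layout and scikit-learn choice are my assumptions):

      # Minimal sketch: retrain a Bayesian spam filter on hand-filed escapees.
      # Assumes scikit-learn; "training/spam" and "training/ham" are hypothetical.
      from pathlib import Path

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      def load_corpus(folder: Path) -> list[str]:
          # One file per message, maildir-style.
          return [p.read_text(errors="ignore") for p in folder.glob("*") if p.is_file()]

      spam = load_corpus(Path("training/spam"))  # escapees, filed by hand
      ham = load_corpus(Path("training/ham"))    # known-good mail

      classifier = make_pipeline(CountVectorizer(), MultinomialNB())
      classifier.fit(spam + ham, ["spam"] * len(spam) + ["ham"] * len(ham))

      # Anything predicted "spam" is routed to the junk folder.
      print(classifier.predict(["Claim your Costco gift card today. But seriously:"]))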

  • gerikson@awful.systems · 6 points · 2 days ago

    A circular at work states that the standard laptop we get from Dell has increased in price by 50%, so they’re looking for alternatives.

    • mlen@awful.systems · 5 points · 2 days ago

      Glad that I did the major upgrades in 2024; hopefully they will outlast this bullshit.

      • gerikson@awful.systems · 2 points · 2 days ago

        Yeah, my kid had to get a gaming PC for school (gamedev) and managed to snag a decent rig before prices went parabolic.

  • samvines@awful.systems · 12 points · 3 days ago

    Claude Mythos… I’m already sick of hearing about it. The self-imposed critihype is insane.

    A friend just pointed out that Anthropic are making all this big noise about having an AI that is “too good” at finding bugs and security problems 1 week after the source code for one of their flagship products was leaked to the public and was found to be riddled with security holes… Why would they not use it themselves?

    Same as the vague markdown-file skills that are supposedly going to make all SaaS redundant and finally kill off all the COBOL running on mainframes that (checks notes) IBM has spent hundreds of thousands of man-hours trying to kill over the last 3-4 decades.

    Honestly fuck this shit. Bunch of absolute clowns 🤡 🤡 🤡

    • Soyweiser@awful.systems · 6 points · 3 days ago

      So, they are planning to use an AI to fix the sec bugs that their AI generates? Good hustle, if a bit obvious.

      • antifuchs@awful.systems · 5 points · 3 days ago

        Is it their next model, the one they swear isn’t vaporware? But no! It is too dangerous to release into the world, because it’ll find too much insecure code or whatever.

      • lurker@awful.systems · 5 points · 3 days ago

        Anthropic’s latest model, which they haven’t released to the public yet since they’re worried it’s gonna fuck up cybersecurity. This thread goes over it a bit.

        • YourNetworkIsHaunted@awful.systems · 5 points · 2 days ago

          XCancel link for those of us sick of being badgered to sign up/in

          On a more productive note, this feels likely to be tied in with the usual issues of AI sycophancy re: false-positive rate. If you ask the model to tell you about security vulnerabilities, it’s never going to tell you there aren’t any, any more than existing scanners will. When I worked for F5 it was not uncommon to have to go down a list of vulnerabilities that someone’s scanner turned up and figure out whether each was actually something that needed a mitigation that could be applied on our box, something that needed to be configured somewhere else in the network (usually on their actual servers), or (most commonly) a false positive, e.g. “your software version would be vulnerable here, which is why it flagged, but you don’t have the relevant module activated, and if an attacker is able to modify your system to enable it you’re already compromised to a far greater degree than this would allow.” And that was with existing tools that weren’t trying to match a pattern and complete a prompt. Given that we’ve seen the shitshow that is Claude Code, I think it’s pretty clear they’re getting high on their own supply, and this announcement ought to be catnip for black hats.
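
          To make that triage concrete, here’s a toy sketch of the decision flow (the finding fields and module list are invented for illustration, not F5’s actual tooling):

            # Toy triage of scanner findings, mirroring the flow described above.
            # All field names and the module list are hypothetical.
            from dataclasses import dataclass

            @dataclass
            class Finding:
                cve: str
                module: str           # software module the scanner matched on
                mitigable_here: bool  # can this box apply the mitigation itself?

            ENABLED_MODULES = {"http", "tls"}  # modules actually running on the box

            def triage(finding: Finding) -> str:
                if finding.module not in ENABLED_MODULES:
                    # The version string matched, but the vulnerable module isn't active.
                    return "false positive"
                if finding.mitigable_here:
                    return "mitigate on this box"
                return "configure elsewhere in the network"

            for f in [Finding("CVE-0000-0001", "ftp", True),
                      Finding("CVE-0000-0002", "tls", True),
                      Finding("CVE-0000-0003", "http", False)]:
                print(f.cve, "->", triage(f))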

      • YourNetworkIsHaunted@awful.systems · 2 points · 2 days ago

        I can’t validate any of the internal stuff, but the attitude of layering manual solutions and mitigation scripts on top of bad design choices, and praying you could keep building the next bit of the bridge as the last one collapsed underneath you, would explain a lot of experiences I had supporting systems running on Azure. The level of weird “Azure just does that sometimes” cases and the inability of their support to actually provide insight was incredibly frustrating. I think I probably ended up providing a couple of automatic recovery scripts for people to use inside their F5 guests, because we never could find an actual explanation for the errors they were getting, and the node issues they describe could have explained the bursts of Azure cases that would come in some days.
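
        For flavor, those recovery scripts tend to be nothing more sophisticated than this (a hypothetical sketch; the service name and health endpoint are invented):

          # Hypothetical "Azure just does that sometimes" recovery loop:
          # poll a local health endpoint, bounce the service when it stops answering.
          import subprocess
          import time
          import urllib.request

          SERVICE = "example-daemon"                   # invented service name
          HEALTH_URL = "http://127.0.0.1:8080/health"  # invented endpoint

          def healthy() -> bool:
              try:
                  with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                      return resp.status == 200
              except OSError:
                  return False

          while True:
              if not healthy():
                  # No root cause was ever found, so do the only thing that "works".
                  subprocess.run(["systemctl", "restart", SERVICE], check=False)
              time.sleep(60)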

      • V0ldek@awful.systems · 6 points · 3 days ago

        The only thing I can personally confirm is the JIT permissions thing. I didn’t work in the Core Azure stuff, so I can’t verify the rest, but none of it is unbelievable…

  • blakestacey@awful.systems · 18 points · 4 days ago

    LLM capabilities have not improved at all in terms of producing meaningful science in the last year or two, but their ability to produce meaningless science that looks meaningful has wildly improved. I am concerned that this will present serious problems for the future of science as it becomes impossible to find the actual science in a sea of AI slop being submitted to journals.

    https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/

    • blakestacey@awful.systems · 11 points · 4 days ago

      “Scientists invented a fake disease. AI told people it was real”

      https://www.nature.com/articles/d41586-026-01100-y

      But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

      The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

      The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

      • blakestacey@awful.systems · 7 points · 4 days ago

        This actually gives me hope that we can poison the datasets pertaining to any sufficiently narrow technical topic.

    • lagrangeinterpolator@awful.systems · 15 points · 4 days ago

      I’ve seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, and so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software became far more buggy and insecure.

      Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I’m not using AI, and also telling me that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering; what makes this any different?