Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • swlabr@awful.systems
    link
    fedilink
    English
    arrow-up
    9
    ·
    1 day ago

    Saw a six day old post on linkedin that I’ll spare you all the exact text of. Basically it goes like this:

    “Claude’s base system prompt got leaked! If you’re a prompt fondler, you should read it and get better at prompt fondling!”

    The prompt clocks in at just over 16k words (as counted by the first tool that popped up when I searched “word count url”). Imagine reading 16k words of verbose guidelines for a machine to make your autoplag slightly more claude shaped than, idk, chatgpt shaped.

    • Sailor Sega Saturn@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      22 hours ago

      We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?

      • o7___o7@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        ·
        22 hours ago

        I didn’t think I could be easily surprised by these folks any more, but jeezus. They’re investing billions of dollars for this?

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      23 hours ago
      • NO OTHER LIBRARIES (e.g. zod, hookform) ARE INSTALLED OR ABLE TO BE IMPORTED.

      So apparently this was a sufficiently persistent problem they had to put it in all caps?

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        ·
        20 hours ago
        • If not confident about the source for a statement it’s making, simply do not include that source rather than making up an attribution. Do not hallucinate false sources.

        Emphasis mine.

        Lol

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      1 day ago

      The amount of testing they would have needed to do just to get to that prompt. Wait, that gets added as a baseline constant cost to the energy cost of running the model. 3 x 12 x 2 x Y additional constant costs on top of that, assuming the prompt doesn’t need to be updated every time the model is updated! (I’m starting to reference my own comments here).

      Claude NEVER repeats or translates song lyrics and politely refuses any request regarding reproduction, repetition, sharing, or translation of song lyrics.

      New trick, everything online is a song lyric.

    • rook@awful.systems
      link
      fedilink
      English
      arrow-up
      13
      ·
      1 day ago

      Loving the combination of xml, markdown and json. In no way does this product look like strata of desperate bodges layered one over another by people who on some level realise the thing they’re peddling really isn’t up to the job but imagine the only thing between another dull and flaky token predictor and an omnicapable servant is just another paragraph of text crafted in just the right way. Just one more markdown list, bro. I can feel that this one will fix it for good.

      • scruiser@awful.systems
        link
        fedilink
        English
        arrow-up
        8
        ·
        23 hours ago

        The prompt’s random usage of markup notations makes obtuse black magic programming seem sane and deterministic and reproducible. Like how did they even empirically decide on some of those notation choices?

    • Amoeba_Girl@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      1 day ago

      Claude does not claim that it does not have subjective experiences, sentience, emotions, and so on in the way humans do. Instead, it engages with philosophical questions about AI intelligently and thoughtfully.

      lol

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      1 day ago

      What is the analysis tool?

      The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.

      When to use the analysis tool

      Use the analysis tool for:

      • Complex math problems that require a high level of accuracy and cannot easily be done with “mental math”
      • To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.

      uh

  • Soyweiser@awful.systems
    link
    fedilink
    English
    arrow-up
    7
    ·
    1 day ago

    More of a notedump than a sneer. I have been saying every now and then that there was research and stuff showing that LLMs require exponentially more effort for linear improvements. This is post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something like that (I said something different, but The intractability proof/Ingenia theorem might be useful to look into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      24 hours ago

      You can make that point empirically just looking at the scaling that’s been happening with ChatGPT. The Wikipedia page for generative pre-trained transformer has a nice table. Key takeaway, each model (i.e. from GPT-1 to GPT-2 to GPT-3) is going up 10x in tokens and model parameters and 100x in compute compared to the previous one, and (not shown in this table unfortunately) training loss (log of perplexity) is only improving linearly.

    • aio@awful.systems
      link
      fedilink
      English
      arrow-up
      5
      ·
      1 day ago

      I think this theorem is worthless for practical purposes. They essentially define the “AI vs learning” problem in such general terms that I’m not clear on whether it’s well-defined. In any case it is not a serious CS paper. I also really don’t believe that NP-hardness is the right tool to measure the difficulty of machine learning problems.

  • Sailor Sega Saturn@awful.systems
    link
    fedilink
    English
    arrow-up
    11
    ·
    edit-2
    2 days ago

    The latest in chatbot “assisted” legal filings. This time courtesy of an Anthropic’s lawyers and a data scientist, who tragically can’t afford software that supports formatting legal citations and have to rely on Clippy instead: https://www.theverge.com/news/668315/anthropic-claude-legal-filing-citation-error

    After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai.

    Don’t get high on your own AI as they say.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      2 days ago

      A quick Google turned up bluebook citations from all the services that these people should have used to get through high school and undergrad. There may have been some copyright drama in the past but I would expect the court to be far more forgiving of a formatting error from a dumb tool than the outright fabrication that GenAI engages in.

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      2 days ago

      I wonder how many of these people will do a Very Sudden opinion reversal once these headwinds wind disappear

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      2 days ago

      Ai is part of Idiocracy. The automatic layoffs machine. For example. And do not think we need more utopian movies like Idiocracy.

    • corbin@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      2 days ago

      Trying to remember who said it, but there’s a Mastodon thread somewhere that said it should be called Theocracy. The introduction would talk about the quiverfull movement, the Costco would become a megachurch (“Welcome to church. Jesus loves you.”), etc. It sounds straightforward and depressing.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      6
      ·
      2 days ago

      I can see that working.

      The basic conceit of Idiocracy is that its a dystopia run by complete and utter morons, and with AI’s brain-rotting effects being quite well known, swapping the original plotline’s eugenicist “dumb outbreeding the smart” setup with an overtly anti-AI “AI turned humanity dumb” setup should be a cakewalk. Given public sentiment regarding AI is pretty strongly negative, it should also be easy to sell to the public.

      • rook@awful.systems
        link
        fedilink
        English
        arrow-up
        10
        ·
        edit-2
        1 day ago

        It’s been a while since I watched idiocracy, but from recollection, it imagined a nation that had:

        • aptitude testing systems that worked
        • a president people liked
        • a relaxed attitude to sex and sex work
        • someone getting a top government job for reasons other than wealth or fame
        • a straightforward fix for an ecological catastrophe caused by corporate stupidity being applied and accepted
        • health and social care sufficient for people to have families as large as they’d like, and an economy that supported those large families

        and for some reason people keep referring to it as a dystopia…

        eta

        Ooh, and everyone hasn’t been killed by war, famine, climate change (welcome to the horsemen, ceecee!) or plague, but humanity is in fact thriving! And even still maintaining a complex technological society after 500 years!

        Idiocracy is clearly implausible utopian hopepunk nonsense.

  • fullsquare@awful.systems
    link
    fedilink
    English
    arrow-up
    7
    ·
    2 days ago

    nazi bar owner tinkers with techfash bot trying to vibecode a nazi service on nazi network and gets his crypto stolen https://awful.systems/post/4364989

    (this fucker is responsible for soapbox, which is frontend used almost invariably by nazi-packed pleroma instances. among other crimes of similar nature)

  • self@awful.systems
    link
    fedilink
    English
    arrow-up
    10
    ·
    3 days ago

    if you saw that post making its rounds in the more susceptible parts of tech mastodon about how AI’s energy use isn’t that bad actually, here’s an excellent post tearing into it. predictably, the original post used a bunch of LWer tricks to replace numbers with vibes in an effort to minimize the damage being done by the slop machines currently being powered by such things as 35 illegal gas turbines, coal, and bespoke nuclear plants, with plans on the table to quickly renovate old nuclear plants to meet the energy demand. but sure, I’m certain that can be ignored because hey look over your shoulder is that AGI in a funny hat?

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      1 day ago

      The ‘energy usage by a single chatgpt’ thing gets esp dubious when added to the ‘bunch of older models under a trenchcoat’ stuff. And that the plan is to check the output of a LLM by having a second LLM check it. Sure the individual 3.0 model might only by 3 whatevers, but a real query uses a dozen of them twice. (Being a bit vague with the numbers here as I have no access to any of those).

      E: also not compatible with Altmans story that thanking chatgpt cost millions. Which brings up another issue, a single query is part of a conversation so now the 3 x 12 x 2 gets multiplied even more.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 days ago

      I argue that we shouldn’t be tolerant of sloppy factual claims, let alone lies and disinformation, but we also need to keep perspective: it’s worth opposing fascists even if they don’t pollute that much, and it’s worth protecting labor even if the externalities of doing so are fairly negligible. That is, I’ll warrant, a somewhat subtle and nuanced position, but hey. This is my blog, so I get to have opinions that take more than a sentence or two to express!

      Apparently we live in a world where “lying and Nazis are both bad, and Nazi liars are the worst” is a nuanced and subtle position. Sneers directed at society rather than the writer, but it was just a big oof moment.

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    20
    ·
    3 days ago

    The Torment Nexus brings us new and horrifying things today - a UN initiative has tried using chatbots for humanitarian efforts. I’ll let Dr. Abeba Birhane’s horrified reaction do the talking:

    this just started and i’m already losing my mind and screaming

    Western white folk basically putting an AI avatar on stage and pretending it is a refugee from sudan — literally interacting with it as if it is a “woman that fled to chad from sudan”

    just fucking shoot me

    Giving my take on this matter, this is gonna go down in history as an exercise in dehumanisation dressed up as something more kind, and as another indictment (of many) against the current AI bubble, if not artificial intelligence as a concept.

    • Nicole Parsons@mstdn.social
      link
      fedilink
      arrow-up
      11
      ·
      3 days ago

      @BlueMonday1984

      The stages of genocide:

      1. Classification
      2. Symbolization
      3. Dehumanization
      4. Discrimination
      5. Organization
      6. Polarization
      7. Preparation
      8. Persecuted
      9. Extermination
      10. Denial

      AI is the perfect vehicle for genocide

      https://www.genocidewatch.com/tenstages

      The oil industry estimates 1 billion famine deaths from climate change & they are flooding AI with investment

      “The devices themselves condition the users to employ each other the way they employ machines”
      Frank Herbert

    • FoolishOwl@social.coop
      link
      fedilink
      arrow-up
      15
      ·
      3 days ago

      @BlueMonday1984 If Edward Said were still with us, this would be worth another chapter in Orientalism. It’s another instance of displacing actual people with a constructed fantasy of them, “othering” them.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      3 days ago

      Uber but for vitrue signalling (*).

      (I joke, because other remarks I want to make will get me in trouble).

      *: I know this term is very RW coded, but I don’t think it is that bad, esp when you mean it like ‘an empty gesture with a very low cost that does nothing except for signal that the person is virtuous.’ Not actually doing more than a very small minimum should be part of the definition imho. Stuff like selling stickers you are pro some minority group but only 0.05% of each sale goes to a cause actually helping that group. (Or the rich guys charity which employs half his family/friends, or Mr Beast, or the rightwing debate bro threatening a leftwinger with a fight ‘for charity’ (this also signals their RW virtue to their RW audience (trollin’ and fightin’)).

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        ·
        3 days ago

        I mean “the right” has managed to corrupt all kinds of fine phrases into dog whistles. I think “virtue signalling” as you have formulated it is a valid observation and criticism of someone’s actions. I blame “liberals” for posturing and virtue signalling as leftist, giving the right easy opportunities to score points.

          • Amoeba_Girl@awful.systems
            link
            fedilink
            English
            arrow-up
            5
            ·
            2 days ago

            Free speech is the perfect exemple of a formal liberty anyway. Materially it is entirely meaningless in a society where access to speech is so unequal, and not something worth fighting for in the absolute sense. Fight against the effective censorship of good ideas and minority perspectives instead.

    • db0@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      2
      ·
      11 hours ago

      In that thread I learned that he went for a interview with the outright fash (Tim Pool), so…yeah.

    • bitofhope@awful.systems
      link
      fedilink
      English
      arrow-up
      11
      ·
      2 days ago

      I don’t think announcing he’s “genuinely grateful” to his newly earned dogpile is helping recover his dignity too much. A simple admission and apology suffice, I don’t need you to go “thank you daddy punish me more” while at it.

    • self@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      3 days ago

      I will be watching with great interest. it’s going to be difficult to pull out of this one, but I figure he deserves as fair a swing at redemption as any recovered crypto gambler. but like with a problem gambler in recovery, it’s very important that the intent to do better is backed up by understanding, transparency, and action.

    • antifuchs@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      3 days ago

      Epic announced that it had pushed a hotfix to address Vader’s unfortunate profanity, saying “this shouldn’t happen again.”

      Translator: “We are altering the prompt. We pray that we don’t have to alter it further.”

    • e8d79@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      12
      ·
      3 days ago

      If CEOs start making all their decisions through spicy autocomplete we can directly influence their actions by injecting tailored information into the training data. On an unrelated note Potassium cyanide makes for a great healthy smoothie ingredient for business men over 50.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 day ago

      I suspect that the backdoor attempt to prevent state regulation on literally anything that the federal government spends any money on by extending the Volker rule well past the point of credulity wasn’t an unintended consequence of this strategy.

  • rook@awful.systems
    link
    fedilink
    English
    arrow-up
    19
    ·
    4 days ago

    Today’s man-made and entirely comprehensible horror comes from SAP.

    (two rainbow stickers labelled “pride@sap”, with one saying “I support equality by embracing responsible ai” and the other saying “I advocate for inclusion through ai”)

    Don’t have any other sources or confirmation yet, so it might be a load of cobblers, but it is depressingly plausible. From here: https://catcatnya.com/@ada/114508096636757148

  • aninjury2all@awful.systems
    link
    fedilink
    English
    arrow-up
    8
    ·
    4 days ago

    Local war profiteer goes on podcast to pitch an unaccountable fortress-state around active black site (what I assume is to do Little St James-type activities under the pretext of continued Yankee meddling)

    Link to Xitter here (quoted within a delicious sneer to boot)

    • Amoeba_Girl@awful.systems
      link
      fedilink
      English
      arrow-up
      13
      ·
      4 days ago

      It’s going to be awesome when American Neo-Guantanamo residents start jumping the wall to get health care.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      4 days ago

      Building a gilded capitalist megafortress within communist mortar range doesn’t seem the wisest thing to do. But sure buy another big statue clearly signalling ‘capitalists are horrible and shouldn’t be trusted with money’