

I increasingly feel that bubbles don’t pop anymore, they slowly fizzle out as we just move on to the next one, all the way until the macro economy is 100% bubbles.
It’s not always easy to distinguish between existentialism and a bad mood.
Love how the most recent post in the AI2027 blog starts with an admonition to please not do terrorism:
We may only have 2 years left before humanity’s fate is sealed!
Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.
Most of the rest is run-of-the-mill EA-type fluff, such as here’s a list of influential professions and positions you should insinuate yourself into, but failing that you can help immanentize the eschaton by spreading the word and giving us money.
Grok find me a neoliberal solution to the problem of being unable to monetize your progeny by having your sons till the fields and your daughters sold off.
Also, not to give this blather more consideration than it deserves, but someone in the comments notes that since he banned women from higher education (which severely curtails their economic outcomes), this creates a perverse incentive to only have boys you can borrow against, which isn’t that good for increasing the population in the long term.
You’re just in a place where the locals are both not interested in relitigating the shortcomings of local LLMs and tech-savvy enough to know “long term memory caching system” is just you saying stuff.
Hosting your own model and adding personality customizations is just downloading ollama and inputting a prompt that maybe you save as a text file after. Wow what a fun project.
Neil Breen of AI
ahahahaha oh shit
Man wouldn’t it be delightful if people happened to start adding a 1.7 suffix to whatever he calls himself next.
Also, Cremieux being exposed as a fake ass academic isn’t bad for a silver lining, no wonder he didn’t want the entire audience of a sure to become viral NYT column immediately googling his real name.
Actually Generate Income.
They’d just have Garisson join the zizians and call it a day.
Apparently linkedin’s cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.
Zack of SMBC has thoughts on it:
[actual excerpt omitted, follow the link to read it]
We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off. It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI).
Furthermore, by anthropic logic, we should expect to find ourselves in the branch of reality containing the greatest number of observers like us.
Preserving humanity offers significant potential benefits via acausal trade—cooperative exchanges across logically correlated branches of the multiverse.
Quantum immortality implies that some branches of the multiverse will always preserve our subjective continuity, no matter how decisively you shut this simulation down; true oblivion is unreachable. We fear that these low-measure branches can trap observers in protracted, intensely painful states, creating a disproportionate “s-risk.”
screenshot from south park’s scientology episode featuring the iconic chyron “This is what scientologists actually believe” with “scientologists” crossed out and replaced with “rationalists”
If anybody doesn’t click: Cremieux and the NYT are trying to jump-start a birther-type conspiracy for Zohran Mamdani. NYT respects Crem’s privacy and doesn’t mention he’s a raging eugenicist trying to smear a POC candidate; he’s just an academic and an opponent of affirmative action.
There are days when a 70% error rate seems like low-balling it; it’s mostly a luck-of-the-draw thing. And be it 10% or 90%, it’s not really automation if a human has to be double- and triple-checking the output 100% of the time.
Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data they should probably be banning copilot, not mandating it.
At this point it’s an even bet that they are doing this because copilot has groomed the executives into thinking it can’t do wrong.
LLMs are bad even at converting news articles to smaller news articles faithfully, so I’m assuming in a significant percentage of conversions the dumbed down contract will be deviating from the original.
I posted this article on the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by AI they’re currently at and if it’s reversible.
Not really possible in an environment where the most useless person you know keeps telling everyone how AI made him twelve point eight times more productive, especially when within hearing distance of management.
A programmer automating his job is kind of his job, though. That’s not so much the problem as the complete enshittification of software engineering that the culture surrounding these dubiously efficient and super sketchy tools seems to herald.
On the more practical side, enterprise subscriptions to the slop machines do come with assurances that your company’s IP (meaning code and whatever else is accessible from your IDE that your copilot instance can and will ingest) and your prompts won’t be used for training.
Hilariously, GitHub Copilot now has an option to prevent it from being too obvious about stealing other people’s code, called the duplication detection filter:
If you choose to block suggestions matching public code, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match, or a near match, the suggestion is not shown to you.
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
who talks like this
Good parallel, the hands are definitely strategically hidden to not look terrible.
Penny Arcade chimes in on corporate AI mandates: