

Not gonna lie, it’s fun reading those reddit posts from vibe coders, squealing like stuck pigs because their heavily subsidized code extruder stopped working.
What I don’t understand is how these people didn’t think they would be caught, with potentially career-ending consequences? What is the series of steps that leads someone to do this, and how stupid do you need to be?
What makes this worse than the financial crisis of 2008 is that you can’t live in a GPU once the crash happens.
Apparently the NYT hit-piece’s author, Benjamin Ryan, is a subscriber to Jordan Lasker’s (Cremieux’s) substack.
When this was first posted I too was curious about the book series. It appears that nearly every book in the series is authored by academics affiliated with Indian universities. Modi’s government has promoted and invested heavily in AI.
I call bullshit on Daniel K. That backtracking is so obviously ex post facto cover-your-ass woopsie-doopsie. Expect more of it as we get closer to whatever new “median” he has suddenly claimed. It’s going to be fun to watch.
I have no doubt that a chatbot would be just as effective at doing Liuson’s job, if not more so. Not because chatbots are good, but because Liuson is so bad at her job.
That thread is wild. Nate proposes techniques to get his kooky beliefs taken more seriously. Others point out that those very same techniques counterproductively pushed people into the e/acc camp. Nate deletes those other people’s comments. How rationalist of him!
People are often overly confident about their imperviousness to mental illness. In fact, I think that, given the right cues, we’re all more vulnerable to mental illness than we’d like to think.
Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of “self-experimentation” that exposes us to psychological risks we aren’t even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.
ChatGPT tells prompter that he’s brilliant for his literal “shit on a stick” business plan.
Not surprised to find Sabine in the comments. She’s been totally infected by the YouTube algorithm and captured by her new culture-war-mongering audience. Kinda sad, really.
We should be trying to stop this from coming to pass with the urgency we would try to stop a killer asteroid from striking Earth. Why aren’t we?
Wait, what are we trying to stop from coming to pass? Superintelligent AIs? Either I’m missing his point, or he really agrees with the doomers that LLMs are on their way to becoming “superintelligent”.
After minutes of meticulous research and quantitative analysis, I’ve come up with my own predictions about the future of AI.
“USG gets captured by AGI”.
Promise?
This commenter may be saying something we already knew, but it’s nice to have the confirmation that Anthropic is chock full of EAs:
(I work at Anthropic, though I don’t claim any particular insight into the views of the cofounders. For my part I’ll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn’t personally have said them, but I think “a journalist goes through your public statements looking for the most damning or hypocritical things you’ve ever said out of context” is an incredibly tricky situation to come out of looking good and many of the comments here seem a bit uncharitable given that.)
Lots of discussion on the orange site post about this today.
(I mentioned this in the other sneerclub thread on the topic but reposted it here since this seems to be the more active discussion zone for the topic.)
I should probably mention that this person went on to write other comments in the same thread, revealing that they’re still heavily influenced by Bay Area rationalism (or what one other commenter brilliantly called “ritual multiplication”).
HN commenters are slobbering all over the new Grok. Virtually every commenter bringing up Grok’s recent full-tilt Nazism gets flagged into oblivion.