this is Habryka talking about how his moderating skills are so powerful it takes lesswrong three fucking years to block a poster who’s actively being a drain on the site
here’s his reaction to sneerclub (specifically me - thanks Oliver!) calling LessOnline “wordy racist fest”:
A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don’t need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.
He gets us! He really gets us!
The thing that united [Occupy Wall Street] was a shared dislike of something in the vague vicinity of capitalism, or government, or the man…
Was it not, specifically, Wall Street?
Nice, I petition for this to be the new description of SneerClub just like that magnificent Yud quote was on Reddit
lobste.rs just banned an 11 year old account with almost 5,000 comments and 45k karma for being a transphobic jerk. No muss, no fuss, no apologetic blogpost where the user could defend themselves and rile up the masses.
That’s how you do it, people.
edit I have now skimmed the comments where banned user Said can explain himself, and he’s using his last efforts to nobly defend himself, thanking his admirers, and generally projecting an image of a man wrongly accused.
j/k he’s doubling down on being a dick.
I had kind of gotten my hopes up from the comparisons of him to sneerclub that maybe he’d be funny or incisively cutting or something, but it looks mostly like typical lesswrong pedantry, just less awkwardly straining to be charitable (to the in-group).
oh, i’m laughing now. it’s actually beautiful that it was Anubis’s anime jackal girl that forced him to drop the plausible-deniability shield and go full queerphobic.
I finally found which user we’re talking about and I am quietly delighted at that smarmy fucker being directed to the fourth-floor egress.
edit: here’s the long-form final mod warning. You’ll see in that thread my next prediction for ejection via the fourth floor, whose profile shows he’s into crypto.
they finally got that asshole? took ’em long enough
yeah sorry I had the username (“friendlysock”) in a first draft then forgot to add it
good riddance to bad rubbish
…why does that username ring bells in my brain
did it show up here somewhat recently? (I ask right before checking search)
(e: nothing immediately in search but I could swear I’ve seen that name somewhere in the last few months (and not in a good context))
smarmy, not-as-cryptic-as-he-thought right-winger on lobste.rs who didn’t quite hide his power level
shit you’re right, I should search offsite (active) chats too
(like, largely it just bugs me where I know the name from (because being baseline horrendous at recalling names and then recognising this one is uhhhh))
You know, this whole conversation reminds me of a discussion of moderation policy from a gaming blog I used to read somewhat religiously. I think the difference in priorities is pretty significant. In Shamus’ policy, the primary obligation of the moderator is to the community as a whole: to protect it from assholes and shitweasels. These people will try to use hard-and-fast rules against you to thwart your efforts, and so are best dealt with by a swift boot. If they want to try again they’re welcome to set up a new account or whatever, and if they actually behave themselves then all the better. I feel like this does a far better job of creating a welcoming and inclusive community, even when discussing contentious issues like the early stages of gamergate or the PC vs Console wars. Also, it doesn’t require David to drive himself fucking insane trying to build an ironclad legal case in favor of banning any particular Nazi, including nearly a decade of investigation and “light touch” moderation.
Also in grabbing that link I found out that Shamus apparently died back in 2022. RIP and thanks for helping keep me from falling into the gamergate or Rationalist pipelines to fascism.
btw I read Said’s responses to his banning and if that dude ever shows up here he’s gone the second he’s spotted
They gave him a thread in which to complain about being banned… Are these people polyamorous just because they don’t know how to break up?
Funniest are all the commenters loudly complaining about this decision and threatening/promising to delete their accounts.
that habryka dude sure loves the sound of his voice.
tbf being able to write thousand-word blog posts and use phrases like “good and important” is part of his job description
That it took this long to ban this guy and this many words is so delicious. What a failure of a community. What a failure in moderation.
Based on the words and analogies in that post: participating in LW must be like being in a circlejerk where everyone sucks at circlejerking. Guys like Said run around the circle yelling at them about how their technique sucks and that they should feel bad. Then they chase him out and continue to be bad at mutual jorkin.
E: That they don’t see the humor in sneering at “celebrating blogging” and that it’s supposedly us at our worst is very funny.
you can tell the real problem was I called them racist
in greggs?
You live rent-free in so many big ol noggins.
All that acreage has to be adding up. Have you ever considered going into real estate?
You called them racist without proving from first principles that it is bad to be racist, that they are racist, and that their specific form of racism is also bad and will not lead to better outcomes than being non-racist in the megafuture.
Hey if a tree is racist in the woods and two nerd blogs that pretend to be diametrically opposed on the political spectrum but are actually just both fascist don’t spend millions of words discussing it, is it really racist or should we assume more good faith
Lol, I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier “ai risk” researcher, insofar as there is any AI risk at all, would only increase it.
Boy did I end up more right on that than in my most extreme imaginings. All the moron has accomplished in life is helping these guys raise cash with all his hype about how powerful the AI would be.
The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.
Given they’re going out of their way to cause as much damage as possible (throwing billions into the AI money pit, boiling oceans of water and generating tons of CO2, looting the commons through Biblical levels of plagiarism, and destroying the commons by flooding the zone with AI-generated shit), they’re arguably en route to proving Yud right in the dumbest way possible.
Not by creating a genuine AGI that turns malevolent and kills everyone, but by destroying the foundations of civilization and making the world damn-near uninhabitable.
Consider, however, the importance of building the omnicidal AI God before the Chinese.
some UN-associated ACM talk I was listening to recently had someone cite a number at (iirc) ~~$1.5tn total estimated investment~~ $800b[0]. haven’t gotten to fact-check it, but there are a number of parts of that talk I wish to write up and make more known. one of the people in it made some entirely AGI-pilled comments, and it’s quite concerning
this talk; looks like video is finally up on youtube too (at the time I yanked it by pcap-ing a zoom playout session - turns out zoom recordings are hella aggressive about not being shared)
the question I asked was:
To Csaba (the current speaker): it seems that a lot of the current work you’re engaged in is done presuming that AGI is a certainty. what modelling have you done without that presumption?
response is about here
[0] edited for correctness; forget where I saw the >$1.5t number
Yeah a new form of apologism that I started seeing online is “this isn’t a bubble! Nobody expects an AGI, it’s just Sam Altman, it will all pay off nicely from 20 million software developers worldwide spending a few grand a year each”.
Which is next level idiotic, besides the numbers just not adding up. There’s only so much open source to plagiarize. It is a very niche activity! It’ll plateau and then a few months later tiny single GPU models catch up to this river boiling shit.
The answer to that has always been the singularity bullshit where the biggest models just keep staying ahead by such a large factor nobody uses the small ones.
hearing him respond like that in real time and carefully avoiding the point makes clear the attraction of ChatGPT
from the (extensive) footnotes:
Occupy Wallstreet strikes me as another instance of the same kind of popular sneer culture. Occupy Wallstreet had no coherent asks, no worldview that was driving their actions.
it’s so easy to LessWrong: just imagine that your ideological opponents have no worldview and aren’t trying to build anything, sprinkle in some bullshit pseudo-statistics, and you’re there!
Lesswrong and SSC: capable of extreme steelmanning of… check notes… occult mysticism (including divinatory magic), Zen-Buddhism based cults, people who think we should end democracy and have kings instead, Richard Lynn, Charles Murray, Chris Langan, techbros creating AI they think is literally going to cause mankind’s extinction…
Not capable of even a cursory glance into their statements, much less steelmanning: sneerclub, Occupy Wallstreet
Those examples are the Ingroup. We are the Outgroup.
It is gonna be worse: they can back up their statements by referring to people who were actually there, but the person they’d then be referring to is Tim Pool, and you can’t, as a first-principles intellectual of the order of LessWrong, reveal that you actually get your information from disgraced yt’ers like all the other rightwing plebs. It has to remain an unspoken secret.
A small sidenote on a dynamic relevant to how I am thinking about policing in these cases:
A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.
If someone can gain in expectation X dollars by committing some crime (which has negative externalities of Y > X dollars), with a probability p of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment (Z) be greater than X/p, i.e. X < p·Z.
Or in less mathy terms, the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.
In this case, a core component of the pattern of plausible-deniable aggression that I think is present in much of Said’s writing is that it is very hard to catch someone doing it, and even harder to prosecute it successfully in the eyes of a skeptical audience. As such, in order to maintain a functional incentive landscape the punishment for being caught in passive or ambiguous aggression needs to be substantially larger than for e.g. direct aggression, as even though being straightforwardly aggressive has in some sense worse effects on culture and norms (though also less bad effects in some other ways), the probability of catching someone in ambiguous aggression is much lower.
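For what it’s worth, the quoted expected-value condition is trivially small. A toy sketch (all numbers made up for illustration, not from the original post):

```python
# Toy illustration of the quoted deterrence condition: a rational actor
# is (per the model) deterred only if expected punishment exceeds
# expected gain, i.e. p * Z > X, equivalently Z > X / p.

def minimum_punishment(gain_x: float, catch_probability_p: float) -> float:
    """Smallest punishment Z that makes the crime unprofitable in expectation."""
    if not 0 < catch_probability_p <= 1:
        raise ValueError("catch probability must be in (0, 1]")
    return gain_x / catch_probability_p

# The lower the chance of getting caught, the harsher the punishment
# "must" be for the same gain:
print(minimum_punishment(100, 0.5))   # caught half the time
print(minimum_punishment(100, 0.25))  # caught a quarter of the time
```

Which is exactly the move being sneered at below: as p goes to zero, Z goes to infinity, so “hard to catch” becomes a license for unbounded harshness.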
Fucking hell, that is one of the stupidest most dangerous things I’ve ever heard. Guy solves crime by making the harshness of punishment proportional to the difficulty of passing judgement. What could go wrong?
Never raise an eyebrow without dropping the banhammer
@Amoeba_Girl @sneerclub isn’t this exactly the same “logic” that escalated the zizians to multiple murders?
Hmm, yes, I must develop a numerical function to determine whether or not somebody doesn’t like me…
One thing he gets is that direct aggression is definitely more effective in this situation. I can, and do, tell these people to fuck straight off, and my life is better for it!
“So, what are you in for?” “Making a right turn on a bicycle without signalling continuously for the last 100 feet before the turn in violation of California Vehicle Code 22108”
“… And litterin’.”
“…And creatin’ a nuisance”
Jesus christ, just ban the guy! Don’t write a million words about how much he gets under your skin! Rude!!!
Indeed, the LinkedIn attractor appears to be the memetically most successful way groups relate to their ingroup members, while the sneer attractor governs how they relate to their outgroups.
AND OLIVER COMES IN FROM THE TOP ROPE WITH THE HOTDOG COSTUME
Moderators need the authority to, at some level, police the vibe of your comments, even without a fully mechanical explanation of how that vibe arises from the specific words you chose.
hey everyone i am going to become top mod on this forum, now let me just reinvent human interaction from first principles
How it started: gonna build the robotgod but nice
How it went: wow we need to teach people how to think.
How it ended: we can’t do basic things people have done since we decided to walk upright, because some people are mean.
Even 4chan can trade/coordinate/and have functional outcomes, sure often for evil. But most of us are not even active on lw. Skill issue.
The death penalty of not just you but your whole family if you copy that floppy.
thermonuclear ballistic missile on lightcone infra for all the time and brains they have wasted
With apologies to Stross: “you shall not copy floppies in my lightcone”
Even 4chan can trade/coordinate/and have functional outcomes, sure often for evil.
To give a rather notorious example, there’s the He Will Not Divide Us flag in 2017, which the 'channers tracked down after only 38 hours, despite Shia LaBeouf’s attempts to keep the location hidden.
The death penalty of not just you but your whole family if you copy that floppy.
The future media conglomerates want. (okay maybe not the “death penalty” part - dead people don’t make money)
Re the flag.
Not just that: in a less malicious case, 1d4chan (and now 1d6chan) were also 4chan productions iirc (with others from the internet also helping). It documents all kinds of strange Warhammer lore, the /tg/ interpretation of that, and their various hatreds for certain authors of the games. For example https://1d6chan.miraheze.org/wiki/Robin_Cruddace
The flag was the most obvious one I could think of, given how many eyes were already on HWNDU and how swiftly they found it. In retrospect, I should’ve chosen 1d4chan/1d6chan as my example, given how large and robust it is as a wiki.
The SCP Foundation arguably qualifies as well - it began on /x/ as a random post, before morphing into the ongoing collaborative writing project we all know and love.
Well, the first thing I thought about was also the flag (but more because people brought it up a while back), and TIL about the SCP Foundation. (Despite me talking about 4chan from time to time, I have never been a channer; I have only very rarely posted some stuff in the roguelikes topic, and left when people let the neo-nazis in who kept calling people who asked a bit of money for a roguelike jews.) Just sucks they never heard of the nazi bar stuff.
Eponymous even. Guess they don’t know who named sneerclub.
Mister Sneerclub of the Newport Sneerclubs, of course.
John Sneerclub
Eliezer Sneerclub!
Only for friends, so we should call him Mister Sneerclub. Or Herr Sneer if you are German and want to be informal.
we can’t do basic things
That’s giving them too much credit! They’ve generated the raw material for all the marketing copy and jargon pumped out by the LLM companies producing the very thing they think will doom us all! They’ve served a small but crucial role in the influence farming of the likes of Peter Thiel and Elon Musk. They’ve served as an entry point to the alt-right pipeline!
dath ilan?
As a self-certified Eliezer understander, I can tell you dath ilan would open up a micro-prediction market on various counterfactual ban durations. Somehow this prediction market would work excellently despite a lack of liquidity and multiple layers of skewed incentives that should outweigh any money going into it. Also Said would have been sent to a ~~reeducation camp~~ quiet city and ~~sterilized~~ denied UBI if he reproduces for not conforming to dath ilan’s norms much earlier.
I’m feeling an effort sneer…
For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.
Every time I read about a case like this my conviction grows that sneerclub’s vibe-based moderation is the far superior method!
The key component of making good sneer club criticism is to never actually say out loud what your problem is.
We’ve said it multiple times; it’s just a long list that is inconvenient to say all at once. The major things that keep coming up: the cult shit (including the promise of infinite AGI God heaven and infinite Roko’s Basilisk hell, and including forming high-demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (I could actually tolerate that if it didn’t have the other parts); and lately serving as crit-hype marketing for really damaging technology!
They don’t need to develop protocols of communication that produce functional outcomes
Ahem… you just admitted to taking a hundred hours to ban someone, whereas dgerard and co kick out multiple troublemakers in our community within a few hours tops each. I think we are winning on this one.
For LessWrong to become a place that can’t do much but to tear things down.
I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented on positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).
The key component of making good sneer club criticism is to never actually say out loud what your problem is.
I wrote 800 words explaining how TracingWoodgrains is a dishonest hack, when I could have been getting high instead.
But we don’t need to rely on my regrets to make this judgment, because we have a science-based system on this ~~podcast~~ instance. We can sort all the SneerClub comments by most rated. Nothing that the community has deemed an objective banger is vague.
The problem is they don’t read sneerclub well, so they don’t realize we don’t relitigate the same shit every time. So when they come in with their hammers (prediction markets, being weird about AI, etc) we just go ‘lol, these nerds’ and don’t go writing down the same stuff every time. As the community has a shared knowledge base, they do the same by not going into details every time about how a prediction market would help and work. But due to their weird tribal thinking, and thinking they are superior, they think when we do it, it is bad.
It is just amazing how much he doesn’t get basic interactions. And it’s not like we don’t like to explain stuff when new people ask about it. Or often even when not asked.
Think one of the problems with lw is that they think stuff that is long is well written and argued, even better if it uses a lot of complex-sounding words. See how they like Chris Langan, as you mentioned. Just a high rate of ‘I have no idea what he is talking about but it sounds deep’ shit.
To quote from the lw article you linked on the guy
CTMU has a high-IQ mystique about it: if you don’t get it, maybe it’s because your IQ is too low. The paper itself is dense with insights, especially the first part.
Makes you wonder how many of them had a formal academic education, as one of the big things about it is that it has none of this mystique: it builds on top of itself, and often it can feel reasonably easy and sensible. (Because learning the basics preps you for the more advanced stuff. Which is not to say this is the case every time, esp if some of your skills are lacking, but there’s none of this high-IQ mystique (which also seems like the utterly wrong thing to look for).)
I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).
Their fluffing Chris Langan is the example that comes to mind for me.
Ya don’t debate fascists, ya teach them the lesson of history. The Official Sneerclub Style Manual indicates that this is accomplished with various pedagogical tools, including laconic mockery, administrative trebuchets, and socks with bricks in them.
That too.
And judging by how all the elegant, charitably written blog posts on the EA forums did jack shit to stop the second Manifest conference from having even more racists, debate really doesn’t help.
Blockquote glitch?
Yes, thanks. I always forget how many enters i need to hit.