• 0 Posts
  • 2 Comments
Joined 1 month ago
Cake day: May 16th, 2025

  • I have a lot to say about Scott, since I used to read his blog frequently and it shaped my worldview. This blog title is funny. It was quite obvious that he at least entertained, if not outright supported, rationalists for a long time.

    For me, the final break came when he defended SBF. One of his defenses was that SBF was a nerd, so he couldn’t have had bad intentions. I share a lot of background with both SBF and Scott (we all did a lot of math contests in high school), but even I knew that’s not remotely an excuse for stealing billions of dollars.

    I feel like a lot of his worldview centers on nerds vs. everyone else. There’s this archetype of nerds as awkward but well-intentioned, smart people who can change the world. They supposedly know better than everyone else how to improve the world, so they should be given as much power as possible. I now realize that this cultural conception of a nerd actually has very little to do with how smart or well-intentioned you really are. The rationalists aren’t very good at technical matters (experts in an area can easily spot their errors), but they pull off this culture very well.

    Recently, I watched a talk by Scott where he mentioned an anecdote from his time at OpenAI. Ilya Sutskever asked him to come up with a formal, mathematical definition of whether “an AI loves humanity”. That actually pissed me off. I thought, can we even define whether a human loves humanity? Yeah, surely all the literature, art, and music in the world is unnecessary now, we’ve got a definition right here!

    If there’s one thing I’ve learned from all this, it’s that actions speak louder than any number of 10,000-word blog posts. Perhaps the rationalists could stop their theorycrafting for once and, you know, look at what Sam Altman and friends are actually doing.


  • I know r/singularity is like shooting fish in a barrel but it really pissed me off seeing them misinterpret the significance of a result in matrix multiplication: https://old.reddit.com/r/singularity/comments/1knem3r/i_dont_think_people_realize_just_how_insane_the/

    Yeah, the record has stood for “FIFTY-SIX YEARS” if you don’t count all the times the record has been beaten since then. Indeed, “countless brilliant mathematicians and computer scientists have worked on this problem for over half a century without success” if you don’t count all the successes that have happened since then. The really annoying part about all this is that the original announcement didn’t have to lie: if you look at just 4x4 matrices, you could say there technically hasn’t been an improvement since Strassen’s algorithm. Wow! It’s really funny how these promptfans ignore the enormous number of human achievements in an area when they decide to comment about how AI is totally gonna beat humans there.

    How much does this actually improve upon Strassen’s algorithm? The matrix multiplication exponent given by Strassen’s algorithm is log4(49) (i.e. log2(7)), and this result would improve it to log4(48). In other words, it improves from 2.81 to 2.79. Truly revolutionary, AGI is gonna make mathematicians obsolete now. Ignore the handy dandy Wikipedia chart which shows that this exponent was … beaten in 1979.
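    If you want to sanity-check that arithmetic yourself, here’s a quick sketch (the multiplication counts, 49 for two levels of Strassen on a 4x4 and 48 for the new scheme, are from the discussion above):

    ```python
    import math

    # Strassen uses 7 multiplications for a 2x2 block product, so applying
    # it at two levels gives 7 * 7 = 49 multiplications for a 4x4 matrix.
    # The implied exponent is log4(49), which equals log2(7).
    strassen_exponent = math.log(49, 4)

    # The AlphaEvolve scheme uses 48 multiplications for 4x4 matrices,
    # so the implied exponent is log4(48).
    alphaevolve_exponent = math.log(48, 4)

    print(f"Strassen:    {strassen_exponent:.4f}")   # ~2.8074
    print(f"AlphaEvolve: {alphaevolve_exponent:.4f}")  # ~2.7925
    ```

    So the improvement in the exponent really is about 2.81 → 2.79, as stated.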

    I know far less about how matrix multiplication is done in practice, but from what I’ve seen, even Strassen’s algorithm isn’t useful in applications because memory locality and parallelism are far more important. This AlphaEvolve result would represent a far smaller improvement (and I hope you enjoy the pain of dealing with a 4x4 block matrix instead of 2x2). If anyone does have knowledge about how this works, I’d be interested to know.
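    For what it’s worth, here’s a toy sketch of Strassen’s recursion in plain Python (my own illustration, assuming power-of-two sizes; real BLAS implementations look nothing like this, for exactly the locality and parallelism reasons above). The seven recursive products and the pile of block additions and temporaries are why the asymptotic win rarely pays off in practice:

    ```python
    # Toy recursive Strassen multiply for n x n matrices (n a power of two),
    # using nested lists. Each level does 7 recursive products instead of
    # the naive 8, at the cost of many extra block additions/subtractions.

    def add(A, B):
        return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def sub(A, B):
        return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

    def split(A):
        # Split into four half-size blocks: top-left, top-right,
        # bottom-left, bottom-right.
        n = len(A) // 2
        return ([r[:n] for r in A[:n]], [r[n:] for r in A[:n]],
                [r[:n] for r in A[n:]], [r[n:] for r in A[n:]])

    def strassen(A, B):
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        a, b, c, d = split(A)
        e, f, g, h = split(B)
        # Strassen's seven products.
        p1 = strassen(a, sub(f, h))
        p2 = strassen(add(a, b), h)
        p3 = strassen(add(c, d), e)
        p4 = strassen(d, sub(g, e))
        p5 = strassen(add(a, d), add(e, h))
        p6 = strassen(sub(b, d), add(g, h))
        p7 = strassen(sub(a, c), add(e, f))
        # Reassemble the four result blocks.
        top_left = add(sub(add(p5, p4), p2), p6)
        top_right = add(p1, p2)
        bot_left = add(p3, p4)
        bot_right = sub(sub(add(p1, p5), p3), p7)
        top = [r1 + r2 for r1, r2 in zip(top_left, top_right)]
        bot = [r1 + r2 for r1, r2 in zip(bot_left, bot_right)]
        return top + bot
    ```

    Even in this tiny sketch you can see the problem: every level allocates a dozen temporary blocks and walks the data repeatedly, which is exactly the kind of memory traffic that dominates real-world performance.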