tl;dr: he says “x86 took over the server market” because it was the same architecture developers at companies already had on their own machines, which made it very easy to develop applications locally and then ship them to the servers.

Now this, among other points he made, is a very good argument for how and why it is hard for ARM to go mainstream in the datacenter. However, I also feel like he has kind of lost touch with reality on this one…

He’s comparing two very different situations, or more specifically, eras. Developers aren’t tied to the underlying hardware the way they used to be. The software development market has evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of new software is or will be written in those languages, so the CPU architecture becomes irrelevant.
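
To make that concrete, here is a minimal sketch (my own illustration, not from the talk): a small Node.js/TypeScript server whose application logic never touches the CPU architecture. The same source deploys unchanged to an x86_64 or an ARM host, and the V8 runtime JIT-compiles it to whatever the host's native instructions are.

```typescript
// Minimal sketch: architecture-agnostic application code.
// The only place the architecture even appears is process.arch,
// and it's purely informational ("x64" on x86_64, "arm64" on ARM).
import * as http from "node:http";
import * as os from "node:os";

const server = http.createServer((_req, res) => {
  // Nothing in the request handling branches on the CPU architecture.
  res.end(`Hello from a ${process.arch} ${os.platform()} host\n`);
});

server.listen(8080);
```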

Obviously, very big companies such as Google, Microsoft and Amazon are more than happy to pay the small “tax” of ensuring JavaScript runs fine on ARM rather than keep paying the big bucks they pay for x86…

What are your thoughts?

  • I’m pretty sure the whole CISC versus RISC thing died out in the early 2000s after everyone but Apple switched to x86 (and so did Apple after it became apparent that PPC couldn’t beat Intel).

    In the days of updateable microcode, I don’t think this matters all that much. CISC has some theoretical advantages but Apple has shown it can handily beat Intel’s ass on RISC with their M1 and M2 chips.

    The only reason ARM survived at all is that it managed to build a reasonably efficient SoC design for phones. MIPS died a slow death stuck in routers and appliances, and barely anything runs MIPS anymore. Had AMD or Intel been able to produce a power-efficient CPU, I’m sure ARM would’ve died out too.