fubo@lemmy.world · 1 year ago

    Today we have 64-bit computers (e.g. amd64), which descended from 32-bit computers (i386), which descended from 16-bit computers (Intel 8086), which descended from 8-bit computers (Intel 8008). Bit widths in our world naturally follow powers of two.

    However, some 1960s computers used word sizes that weren’t powers of two: IBM and DEC, among others, made 18- and 36-bit systems. What if computing had continued down that multiples-of-nine path instead of the powers-of-two one?


    For one thing, hexadecimal is less common. If you’re writing 9-, 18-, or 36-bit values, you typically write them in octal, not hex. (In our world, Unix permission modes are written in octal; Unix originated on the PDP-7, an 18-bit system.)
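
    A quick sanity check in Python, purely for illustration: each octal digit covers exactly three bits, so 9-, 18-, and 36-bit values split into whole octal digits, while a 4-bit hex digit never lines up.

        # Each octal digit encodes 3 bits, so 9-, 18-, and 36-bit values
        # break into whole octal digits; a hex digit encodes 4 bits and doesn't fit.
        for width in (9, 18, 36):
            print(width, "bits =", width // 3, "octal digits vs", width / 4, "hex digits")

        nine_bit_max = 2**9 - 1        # the all-ones 9-bit byte
        print(oct(nine_bit_max))       # 0o777 -- exactly three octal digits
        print(hex(nine_bit_max))       # 0x1ff -- spills into a partial hex digit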

    IPv4 addresses are 36 bits wide instead of 32, and you write them in octal instead of decimal. localhost is 700.0.0.1, and a typical LAN subnet mask is 777.777.777.0.
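
    To make the octal analogy concrete (a sketch; the dotted-octal formatting is just my guess at this world’s convention):

        print(0o777, 0o777 == 2**9 - 1)   # 511 True: the 9-bit analogue of 255
        # The 9-bit-world equivalent of our /24: three all-ones bytes, one zero byte.
        mask = (0o777, 0o777, 0o777, 0)
        print(".".join(f"{b:o}" for b in mask))   # 777.777.777.0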

    No hexadecimal means no 0xDEADBEEF or 0xCAFEBABE jokes. However, memory or files that get overwritten with junk are said to be “525’d”, because binary 101010101... is octal 525252....
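
    The digit arithmetic behind that joke, if you want to check it:

        junk = int("10" * 9, 2)   # 18 bits of alternating 1 0 1 0 ...
        print(bin(junk))          # 0b101010101010101010
        print(oct(junk))          # 0o525252 -- each 101 group reads as 5, each 010 as 2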


    char would be nine bits wide instead of eight. This affects the development of character sets.

    In our world, ASCII grew out of earlier 6-bit character codes; the 7-bit ASCII code left room for lowercase. IBM then extended it to 8 bits with code pages for different European languages, creating 8-bit PC extended ASCII. However, no single code page covers all European languages, to say nothing of non-European ones. That limitation led to the invention of multibyte character encodings and ultimately Unicode.

    In the 9-bit world, multibyte characters are adopted earlier, using the high bit to indicate an extended character. Code pages don’t get invented; mojibake never happens.
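
    Nothing above pins down the details, but here’s one toy way the “high bit means extended” idea could work, vaguely UTF-8-flavoured: a 9-bit unit with the top bit clear is a plain character, and a unit with the top bit set carries eight payload bits and says “more follows”. (Entirely hypothetical, not any real encoding.)

        EXT = 1 << 8   # the ninth bit: "this unit continues into the next one"

        def encode(codepoint):
            """Split a code point into 8-bit chunks; all but the last chunk
            get the ninth bit set (hypothetical scheme, not a real standard)."""
            chunks = []
            while True:
                chunks.append(codepoint & 0o377)   # low 8 bits (377 octal = 255)
                codepoint >>= 8
                if codepoint == 0:
                    break
            chunks.reverse()
            return [c | EXT for c in chunks[:-1]] + [chunks[-1]]

        def decode(units):
            """Reassemble code points from a stream of 9-bit units."""
            codepoints, current = [], 0
            for u in units:
                current = (current << 8) | (u & 0o377)
                if not u & EXT:          # top bit clear: character is complete
                    codepoints.append(current)
                    current = 0
            return codepoints

        kanji = 0o47234                   # U+4E9C, well outside one byte
        stream = encode(ord("A")) + encode(kanji)
        print([f"{u:03o}" for u in stream])         # ['101', '1116', '234']
        print(decode(stream) == [ord("A"), kanji])  # True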


    With a 36-bit time_t, the Year 2038 problem doesn’t happen; time_t values don’t wrap around until the year 3058!
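
    The arithmetic, assuming a signed 36-bit time_t counting seconds since 1970:

        SECONDS_PER_YEAR = 365.2425 * 24 * 60 * 60       # average Gregorian year
        print(1970 + (2**31 - 1) / SECONDS_PER_YEAR)     # ~2038.0 -- our world's wraparound
        print(1970 + (2**35 - 1) / SECONDS_PER_YEAR)     # ~3058.8 -- the 9-bit world's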


    A 3½" high-density floppy disk stores one megabyte of 9-bit bytes.