The HWINFO Story: Martin Malik, the Face and the History Behind the Well-Known Diagnostic Tool | igor´sLAB
It wouldn’t be welcome.
That ecosystem doesn’t want binaries.
I love that they resurrected the DOS version and still maintain it.
Note that the mini-ITX board with the 16xP670+8xX280 chip can be preordered. It is due in about eight months and will cost around $120.
At that price, at that point in time, it will be quite powerful.
A 64-bit CPU isn’t needed for that; see PAE.
The actual limiting factor (in x86 specifically) is that a single process’s view of memory is 32-bit, thus 4 GB. This is specific to the design of the CPU; it is entirely possible to work around it with techniques such as overlays or segmentation, as 16-bit x86 demonstrated very well.
Then there are processors like the 68000, which offered a 32-bit ISA with direct 32-bit addressing despite a 16-bit ALU (although only 24 address lines were exposed on the physical bus; the 68010 had versions with more address lines, and the 68020 went to the full 32 bits).
Similarly, SERV implements a compliant RISC-V core in a bit-serial manner.
Of course, having 64-bit GPRs specifically is very convenient once you go past 4 GB.
Large file offsets are possible on 32-bit too. In Debian Linux, for example, this is common on all architectures other than x86.
32-bit block addressing with 512-byte blocks yields 2 TB.
And again, software can handle 64-bit values on 32-bit (even 16- and 8-bit) architectures with no problem. It is just slower and more cumbersome, but the compiler abstracts this away. For disk I/O addressing it is a non-issue, as the latency of the disk makes the cost of these calculations irrelevant.