
By Xing Chen

It seems we are back to the decades-long CISC vs. RISC debate of the 80s and 90s again: which one is better than the other? Which one should we go for, the X86–64 CISC or the ARM RISC? https://medium.learningbyshipping.com/risc-v-cisc-an-age-old-debate-79d859668d35

Be aware, though, that nowadays this CISC vs. RISC debate is different from what it was decades ago, or even irrelevant! https://class.ece.iastate.edu/tyagi/cpre581/papers/HPCA13powerstruggles.pdf

For the second time, I turned to artificial intelligence: I asked ChatGPT to give me a table of the number of instructions in each living mainstream instruction set, and the table below is what ChatGPT had to tell me:

Amazing, right?

This comparison can at least tell us one reason why ARM is more energy efficient than X86–64: because its instruction set is small, you don't need complex instruction-decoding logic, and you don't need a larger die area for that decoding logic; all of this contributes to energy efficiency! So, small is beautiful, no?
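
To make the decoding argument a bit more concrete, here is a toy sketch in C. It does not use real X86 or ARM encodings (the length table is entirely made up); it only illustrates why a fixed-width instruction format is easy to decode, while a variable-length format forces the decoder to work out the length of instruction N before it can even locate instruction N+1, which is exactly the kind of extra logic and die area we are talking about.

```c
/* Toy sketch, NOT real X86 or ARM encodings: it only contrasts
 * fixed-width decoding with variable-length decoding. */
#include <stdio.h>
#include <stddef.h>

/* Fixed-width ISA (ARM-style, 4 bytes per instruction): every boundary
 * is known up front, so several instructions can be decoded in parallel. */
static size_t count_fixed(const unsigned char *code, size_t len)
{
    (void)code;            /* boundaries depend only on the length */
    return len / 4;
}

/* Made-up length table for a variable-length ISA. Real X86 is far messier:
 * prefixes, opcode maps, ModRM, SIB, displacements, immediates... */
static size_t toy_length(unsigned char first_byte)
{
    switch (first_byte) {
    case 0x90: return 1;   /* pretend 1-byte instruction */
    case 0xB8: return 5;   /* pretend opcode + 4-byte immediate */
    case 0x0F: return 2;   /* pretend 2-byte opcode */
    default:   return 1;
    }
}

/* Variable-length ISA (X86-style): instruction N must be (partially)
 * decoded before instruction N+1 can even be located. */
static size_t count_variable(const unsigned char *code, size_t len)
{
    size_t i = 0, n = 0;
    while (i < len) {
        i += toy_length(code[i]);
        n++;
    }
    return n;
}

int main(void)
{
    unsigned char fixed[8]    = {0};
    unsigned char variable[8] = {0x90, 0xB8, 1, 2, 3, 4, 0x0F, 0x05};

    printf("fixed-width:     %zu instructions\n", count_fixed(fixed, sizeof fixed));
    printf("variable-length: %zu instructions\n", count_variable(variable, sizeof variable));
    return 0;
}
```

Real X86 decoders cope with this using predecode logic and micro-op caches, but that machinery is precisely part of the decoding cost that fixed-width RISC designs avoid.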

The large number of instructions in the X86–64 instruction set originates from the complex nature of CISC, which stands for Complex Instruction Set Computer; on top of that, every extension introduces new instructions. One example is the AVX-512 extension, which added around 300 new instructions.
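
To give a feeling for what such an extension means in practice, here is a minimal C sketch, assuming a compiler and CPU that support AVX-512F: the _mm512_add_ps intrinsic from <immintrin.h> maps to one of the new 512-bit instructions (VADDPS on ZMM registers) that did not exist before the extension.

```c
/* Minimal sketch of one AVX-512 instruction in use. Compile with
 * something like: gcc -mavx512f avx512_demo.c
 * It only runs on a CPU that actually implements AVX-512F. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[16], b[16], c[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 1.0f; }

    __m512 va = _mm512_loadu_ps(a);      /* load 16 floats into one 512-bit register */
    __m512 vb = _mm512_loadu_ps(b);
    __m512 vc = _mm512_add_ps(va, vb);   /* one AVX-512 instruction adds all 16 at once */
    _mm512_storeu_ps(c, vc);

    printf("c[15] = %.1f\n", c[15]);     /* 15.0 + 1.0 = 16.0 */
    return 0;
}
```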

We have to think about X86's number of instructions both one way and the other way round: did a big instruction set make X86 successful, or does X86 have an ever bigger instruction set because it is successful? I don't have a clear answer, but I would simply like to point out that X86–64 is backward compatible, which means you can never do a big clean-up of its instruction set.

Again, I take the Itanium CPU as an example. As I mentioned in part 2, starting from around 2000, Intel wanted to move away from the X86–32 instruction set to IA-64, and at the same time they wanted IA-64 to be backward compatible with X86–32. This ended up as Itanium's 32-bit emulation mode, which works, according to some, but works clumsily; some even claim it never worked, and I don't know how complex it was for IA-64 to accommodate this emulation mode. But it shows the importance of backward compatibility and its cost, measured in instruction set size.

The generation-to-generation backward compatibility also contributes to X86's success, there is no argument about that, but it also bloats the instruction set on top of its CISC nature. I found an interesting blog post about this topic here: https://www.agner.org/optimize/blog/read.php?i=963
