RISC or CISC

From iGeek
During the '80s and '90s there was a computer chip design war between RISC and CISC. What does that mean, and which is better? For a while, Intel (and AMD) were able to spend more on design and delay the inevitable, but once mobility (with its emphasis on performance per watt) became important, ARM and other RISC designs took over from x86's older CISC design.
ℹ️ Info          
~ Aristotle Sabouni
Created: 2000-11-24 

What is RISC

RISC is a design philosophy that took shape in the late '70s and early '80s, though some of its origins go back further than that.

The idea was that people were adding more and more instructions and complexity to processors, yet most compilers and programmers only used a small fraction of those instructions effectively. And some of the instructions that were used were complex (they did many things at once: load from one place, alter something, and try to store the result back into memory, all in one instruction). Those instructions not only took lots of logic and design time, and were the most likely to have bugs, they also held the rest of the processor back, because simpler instructions had to wait for that one complex instruction to complete before everything could advance to the next step.

Most complex instructions could be built out of a few smaller, simpler ones; so by simplifying, designers could reduce bottlenecks (both in time to design and in time to execute).
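For example, incrementing a counter in memory shows the contrast. Below is a minimal sketch, assuming typical x86-style and RISC-V-style code generation (exact instructions vary by compiler and optimization level); the assembly is shown in comments.

    /* One C statement, two very different instruction sequences. */
    void bump(int *counter) {
        *counter += 1;
        /*
         * CISC style (x86): a single instruction that loads, adds, and
         * stores in one step:
         *     add dword ptr [rdi], 1
         *
         * RISC style (load/store design, RISC-V shown): three simple
         * instructions, each easy to pipeline and overlap with other work:
         *     lw   t0, 0(a0)    # load the old value from memory
         *     addi t0, t0, 1    # modify it in a register
         *     sw   t0, 0(a0)    # store the result back
         */
    }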

RISC differed in where you spent your energy: in transistors (area on the chip) and in time to develop the chip; and simplifying the steps allowed the chip to run at faster clock speeds. So RISC was Reduced Instruction Set Computing, as opposed to the older, more complex design philosophy, which became known as CISC (Complex Instruction Set Computing). "Reduced" mostly meant reduced complexity, but they also reduced the instruction count, because that gave them space to do other things.

Now, once you reduced the fluff (seldom-used or complex instructions), you suddenly had more space on the chip for other things, more design time to add other features, and more power budget; and the difference between a complex instruction and a simple one was much smaller, so it was easier to design chips that ran faster. So they used those savings for things like more on-chip cache (so the processor didn't have to go to slow memory as often), more registers (same result), breaking instructions into stages (pipelining), reordering instructions internally so parts of the program could get ahead of other parts (as long as there were no dependencies) to better use the resources the chip had, and so on.

It worked, and worked well. RISC chips ran faster, were easier to design, were more power efficient, and so on. The only downside was that they tended to have simpler instructions and were a little less memory efficient; the simplification process also meant more things had to be the same fixed size, instead of being allowed to be as small as possible.

Hybrids

CISC designers weren't stupid. Moore's law said that the number of transistors you have doubles every 18-24 months; so each generation of chip has more space to do things with. It wasn't that you couldn't add all the functions of RISC to CISC chips; it was about the costs (time, money, power, space, etc.) to do so.

CISC chips quickly borrowed many of the things that RISC did. CISC broke their instructions into many stages, added more cache, and even figured out ways to execute instructions out of order. Later, they basically split the old, ugly instruction set off into a legacy front end that converted those ugly complex instructions into more elegant RISC-like internal instructions, and then had a RISC computer underneath.
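As a toy illustration of that "legacy front end over a RISC core" idea (hypothetical names and a made-up instruction, not any real microarchitecture), here is a sketch of a decoder cracking a memory-touching legacy instruction into simple internal micro-ops before execution:

    #include <stdio.h>

    /* Internal micro-ops the RISC-like core actually executes. */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } micro_op;

    /* Hypothetical legacy instruction: "add [address], immediate". */
    typedef struct { unsigned address; int immediate; } legacy_add_mem;

    /* The front end "cracks" the complex instruction into micro-ops. */
    static int crack(legacy_add_mem insn, micro_op out[3]) {
        (void)insn;          /* operands would ride along with each uop */
        out[0] = UOP_LOAD;   /* read the old value from memory          */
        out[1] = UOP_ADD;    /* add the immediate in a register         */
        out[2] = UOP_STORE;  /* write the result back to memory         */
        return 3;
    }

    int main(void) {
        static const char *names[] = { "LOAD", "ADD", "STORE" };
        micro_op uops[3];
        int n = crack((legacy_add_mem){ 0x1000, 1 }, uops);
        for (int i = 0; i < n; i++)
            printf("uop %d: %s\n", i, names[uops[i]]);
        return 0;
    }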

Now if that sounds inelegant, it is because it is. They had to sacrifice some space for the CISC-to-RISC converter, and in any conversion there's an efficiency loss. It took more transistors, more designers, more money, and more time. The chips ran hotter or were less efficient, and so on. RISC took over most of the markets; it drove CISC out of everything from supercomputers to embedded markets, and almost all vertical markets. But RISC basically failed in the mainstream market, which raises the question: why?

CISC and the Intel camp figured: what if cost didn't matter? So what if it cost 2-4 times as much to design the chip? If you were making 10 times the money, you could afford it.

Intel's solution was brute force. They just kept pouring more money into the inelegant design, started designing many generations ahead (to buy time), and so on. Plus the trick of breaking the chip into an old CISC side and a newer, elegant RISC side reduced a lot of the penalties in design time and cost. In the end, they were able to keep up anyway; and they could grow fast enough that the race stayed close enough to even that it just didn't matter.

So while RISC is better, it wasn't better enough to justify the huge switching costs that would come with changing something as fundamental as the instruction set on mainstream PCs, and all that would entail. Legacy software mattered.

The present and future

The main thing is that RISC has (or had) a window of opportunity. In 1980 processors had 50,000 transistors or so. By the '90s that was millions. In 2020 we are floating around 10-20 billion -- and it can go into the trillions in some applications; however, now instead of just doing processors, we often put entire systems on a single chip.

Back in the '90s, dragging around a few hundred thousand transistors to execute old-style instructions and convert them to your cleaner RISC instructions was a big deal. Nowadays, throwing a million transistors at the problem is not nearly as big a deal. And each year that legacy engine matters less and less. So who cares? It is a drop in the bucket.

On a mainstream desktop PC, who cares if your processor eats a little more power, throws off more heat, or is a little less reliable (heat, power, and complexity often affect the reliability of a chip)? As processors go to multiple cores, the savings in each core can add up, so RISC could have a more significant advantage. But even 8 cores times a million-transistor difference, on a chip with a few hundred million transistors, wouldn't matter much.

Intel and AMD are lightly misleading people by trying to sell them on the idea that CISC is as good or better. That's not really the truth; it never was. RISC is still a simpler and better design philosophy, which is why they used it "under the hood". But the truth is that CISC kept up long enough that the legacy parts of the chip are now nearly insignificant relative to the entire chip's design and costs; and the majority of design time, on their chips and on RISC chips alike, goes into the RISC parts of the CPU. So the differences aren't enough for most users to care about... on the desktop.

CISC even has a theoretical inherent advantage going forward. More complex instructions pack more work into fewer bytes; and right now the greatest inefficiency (bottleneck) is memory bandwidth. We've increased memory size at a fast rate, but the speed at which memory runs has not increased as quickly as the speed at which processors can run, so it has fallen behind. This means the less memory you have to transfer, the less those bandwidth limitations will hold the processor back.
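As a rough back-of-the-envelope example of that density argument (the byte counts are assumptions based on typical encodings, not exact for every operand mix): the memory-destination add from the earlier sketch is one short variable-length x86 instruction, while a classic fixed-length RISC needs three 4-byte instructions for the same work.

    #include <stdio.h>

    int main(void) {
        /* "add dword ptr [rdi], 1": one variable-length x86 instruction,
           roughly 3-4 bytes for this operand mix (assumed here as 4).   */
        int cisc_bytes = 4;
        /* lw + addi + sw: three fixed-length 4-byte RISC instructions.  */
        int risc_bytes = 3 * 4;
        printf("CISC: ~%d bytes, RISC: %d bytes -> %.1fx more code to fetch\n",
               cisc_bytes, risc_bytes, (double)risc_bytes / cisc_bytes);
        return 0;
    }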

However, there are so many other inefficiencies in current CISC designs (like fewer registers, and so on) that any gains are still overwhelmed by the losses, so that advantage is not realized in the real world. The only way to realize it would be to redesign the CISC instructions to be inherently more memory efficient than they are, and fix some of the other problems with the legacy designs; but if the PC-class chips tried to do that, they'd run into the same "new market" problems that made adopting RISC so difficult and expensive. The only company that has successfully followed this path is IBM on their mainframes; another market where legacy matters more than costs, and simultaneously where one company controls enough of the compiler and application market that they can make such migrations feasible.

Conclusion

A clean instruction set and good design will always be superior to an ugly legacy-based one, if equal amounts of energy, effort, and money were put into each. But life isn't that fair, so that's not the issue. The issue is: will it be superior enough to matter? And superior enough, in market terms, means good enough to get people to switch and reach critical mass. For PC users the answer was "No," it wasn't superior enough. So RISC didn't gain critical mass there, and a better design alone can't make up for the differences in market size or momentum.

The RISC design philosophy was and is better. It is the concept that you need to modernize your instruction set design and use those increased efficiencies in other areas which may pay off more. It worked, in that it destroyed CISC thinking in areas where legacy didn't matter as much as power, time to market, cost, and other normal business factors. But in markets where legacy mattered more than good design, it couldn't win outright. It did win in the sense that all CISC designers had to adopt and adapt to RISC-like designs; but they still dragged their legacy instruction sets around with them. So just because RISC is better in design doesn't mean it is going to win; all industries are littered with the corpses of superior technologies.

The future is more RISC-like. Intel is trying to make the big jump to RISC with their Itanium chip (IA-64), which is really RISC thinking brought up to date, with a few tweaks (EPIC/VLIW). Ironically, Intel is so late to the party with their RISC chip that it may not matter anymore. Furthermore, AMD is beating Intel at their own game; rather than designing a new instruction set from the ground up (and causing a revolution), they are just sort of "evolving" their way in, by adding to what they have and improving it a little. This requires less change (or slower change), and will probably win out in the marketplace. However, even the AMD chip does try to add registers and modernize the ISA (instruction set architecture) a little in the new "mode", making it a little more like RISC.

So RISC is not dead. With much smaller budgets, RISC has eaten most markets, and its design philosophy altered the rest. But a slightly better design hasn't been able to pull ahead enough to matter in the mainstream desktop market. Yet even with far more money and energy, CISC hasn't been able to drive RISC out of the market either. Now the lines have blurred so much that it is getting harder to tell the difference. And in the future, who wins won't be based on one philosophy winning out, but just on who executes better in the real world.

💭 Mobile processors
This article held up amazingly well for being 20 years old. One thing I was calling out was "desktop processors"... in gaming, servers, and embedded applications, RISC took over. What finally happened was the mobile revolution. Not just laptops, which have enough power/thermal budget to behave like little desktops, but the smartphone revolution. ARM processors (a 1980s RISC design) were just more power efficient for the same performance than Intel's efforts in mobile -- and in a phone, every little bit of power consumption mattered. Virtually every mobile phone is ARM, and thus RISC, based. That moved into tablets, which for Apple were just big iPhones... for Android it is mixed, with a few using Intel processors; and Microsoft's efforts to go to ARM failed the first time, but are likely to work out better the next time. There's talk of Apple (and Microsoft) moving to ARM processors in the future for desktops/laptops.

So we're not quite there yet. But it looks like the better chip design philosophy was only stalled until there was a new requirement, mobility, and that requirement is what is driving out CISC's older-style processor design philosophy.

Eventually that bled back to laptops with Apple's custom chips, which are really just their version of ARM CPUs, with a lot of Apple's other specialized processing on top. RISC won out; it just took 40 years.




