Not a lot of people know this, but there’s a hidden language that your computer and software use to perform tasks. Until not too long ago, this language was a jumble of elaborate “words,” much like a Shakespearean play. These complex, polysyllabic “words” made up the vocabulary, or instruction set, that software uses to tell hardware what to do.
But then in the 1980s, David Patterson and John Hennessy, two computer science pioneers, posed a simple yet very controversial question: What if computers could be made to speak in a simpler, more efficient language, one made up of short and easy “monosyllabic words?” It was a bet that sparked heated debates, but eventually reshaped the entire tech landscape that we now take for granted.
The computer architecture revolution
The central processing unit, or CPU, is the brain of your computer. Whenever you run a program on your laptop or smartphone, the CPU gets a list of instructions from the software so it can direct other parts of your computer to work together and perform a task. However, a CPU cannot directly interpret high-level programming languages like Python or Java that people use to write software. Instead, a translator, known as a compiler, converts this code into a simpler “machine language” that the CPU can understand.
Machine code is made up of individual instructions that can be very primitive, such as adding or subtracting two numbers or checking whether a number equals zero. Yet a computer can perform complex tasks, such as running a video game or playing a YouTube video, by executing these simple operations billions of times per second.
All the commands a CPU understands in machine language are grouped into what computer scientists call an instruction set. This is the lowest level of programming for a computer, where all the hard work is performed.
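To get a feel for what that lowest level looks like, here is a minimal C sketch. The comments show the kind of simple machine instructions a compiler might emit for a single line of arithmetic; the RISC-V-style mnemonics are only illustrative, since the exact output depends on the compiler, the optimization level, and the target ISA.

```c
#include <stdio.h>

int main(void) {
    int a = 7;
    int b = 35;

    /* One line of C becomes a handful of primitive machine instructions,
     * roughly (illustrative RISC-V-style mnemonics, not exact output):
     *   lw   t0, a        load a from memory into a register
     *   lw   t1, b        load b from memory into another register
     *   add  t2, t0, t1   add the two registers
     *   sw   t2, sum      store the result back to memory
     */
    int sum = a + b;

    /* Even printing the number expands into many more of these
     * simple instructions behind the scenes. */
    printf("sum = %d\n", sum);
    return 0;
}
```

Each of those commented lines is a single “word” in the CPU’s vocabulary.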
“When software talks to hardware, it speaks in a vocabulary. The technical term for that vocabulary is an instruction set. The words in that vocabulary are like the buttons on a calculator. It does add, subtract, multiply, really simple instructions like that,” Patterson told ZME Science during an interview at the 2023 Heidelberg Laureate Forum, an invite-only event where the laureates of the most prestigious awards in mathematics and computer science meet the next generation of up-and-coming young scientists.
In 2017, David Patterson, a retired professor at the University of California, Berkeley, and John Hennessy, former president of Stanford University, were awarded the Turing Award for pioneering work that led to faster, more energy-efficient microprocessors called reduced instruction set computers (RISC), used in 99% of modern microprocessors. The Turing Award, named for renowned British mathematician Alan M. Turing, is the world’s most prestigious prize in computer science, often likened to the Nobel Prize in the field.
While RISC is ubiquitous today, when Patterson and Hennessy proposed this instruction set architecture (ISA) in the early 1980s, not everyone was on board. IBM had already taken tentative steps in this direction with its 801 project, one of the first RISC designs. But people weren’t sold on RISC yet.
At the time, the conventional wisdom was that these instruction sets, or vocabularies, should comprise complex and powerful instructions. Instead of just adding, subtracting, multiplying, dividing, and the other simple operations typical of RISC, these complex instruction set computers (CISC) would offer elaborate instructions for things like evaluating polynomials, sorting, and other heavyweight operations.
Patterson was part of the camp that believed the better way to build microprocessors, which were getting faster each year in line with Moore’s Law (the observation that the number of transistors on a chip doubles roughly every two years), was to use a RISC architecture. The idea was that although a RISC chip has to execute more instructions than a CISC microprocessor to do the same job, computing would be more efficient overall because each of those simple instructions runs much faster.
“John and I thought that in a vocabulary instruction set for microprocessors, it would be better to have a lot of simple words, short words like monosyllabic words,” said Patterson.
“You can think of a program as reading a page of a book. It might take fewer words if you use sophisticated words, words with lots of syllables, but it might be slower to read. If instead you had a lot of simple words, the book might be longer, but you might be able to read a lot faster.”
“So the question was, which is better?”
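To make the tradeoff concrete, consider the everyday job of copying a block of bytes. The sketch below is a simplified illustration, not exact compiler output: x86 really does offer a single rep movsb instruction that copies a whole block in one go, while a RISC machine (the comments assume RISC-V-style registers and mnemonics) spells the copy out as a loop of small steps.

```c
#include <stddef.h>

/* The same job, copying n bytes from src to dst, seen through both
 * design philosophies. The instruction sequences in the comments are
 * simplified sketches rather than exact compiler output.
 *
 * CISC (x86) can express the whole copy as essentially one "big word":
 *     rep movsb              ; repeat "move one byte" for the whole count
 *
 * RISC (RISC-V-style) spells it out with several "monosyllabic words"
 * repeated in a loop:
 *     loop: lbu  t0, 0(a1)   ; load one byte from src
 *           sb   t0, 0(a0)   ; store it to dst
 *           addi a1, a1, 1   ; advance src pointer
 *           addi a0, a0, 1   ; advance dst pointer
 *           addi a2, a2, -1  ; one byte fewer to go
 *           bnez a2, loop    ; branch back until the count hits zero
 */
void copy_bytes(char *dst, const char *src, size_t n) {
    while (n--) {
        *dst++ = *src++;
    }
}
```

The C code is identical either way; the difference lives entirely in the machine code. The RISC bet was that the longer sequence of simple steps could be executed fast enough to win overall.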
From CISC to RISC
But the benefits of this tradeoff were not at all obvious at first. In fact, the debates were fierce, with critics vocal about the dangers of RISC, which they considered an inferior architecture that would set the industry back and make software worse for everyone. Even those who acknowledged the technical merits of RISC were unconvinced by the economics. Virtually all computers at the time were CISC, so switching to a radically new computer architecture was seen as too risky, like deciding to change the railroad track gauge overnight.
During these still-early days of personal computing, much of the critique was based on taste and hunches, in contrast to the quantifiable approach of Patterson and Hennessy. The two computer scientists wrote a now-iconic textbook in the late 1980s, “Computer Architecture: A Quantitative Approach”, which revolutionized the field and laid some of the first groundwork for actually measuring and benchmarking the performance of different computers. Remarkably, more than 30 years later, the book is still relevant despite the tremendous technological leaps since the first edition.
“We still do get people coming up to us, thanking us for writing the book and how it changed their lives, and it was very important in their careers. I even had a student recently say, ‘I was in the hospital for a month, so I read your textbook like a novel from cover to cover,'” said Patterson.
Eventually, the quantitative approach gave computer scientists hard numbers, from both the hardware and the compiler, showing how well a computer performs with a given instruction set. At the end of the day, the way you measure an instruction set is how long programs take to run, and by that measure RISC came out on top: its simple instructions completed in far fewer clock cycles. The hard data won over the naysayers.
“It took a while to figure out the end of the story, but John and I were right, basically. It [RISC] was about three or four times faster. You read about 25% more words if they were simple, but you could read them five times faster. So the net effect was factors of three or four.”
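Those factors of three or four fall straight out of the performance equation at the heart of the quantitative approach. A back-of-the-envelope check, using the rough numbers from the quote above (about 25% more instructions, each completing roughly five times faster), looks like this:

```latex
% The classic processor performance equation from the quantitative approach:
% time to run a program = instructions executed x cycles per instruction x time per cycle
\[
T_{\text{program}} = N_{\text{instructions}} \times \text{CPI} \times T_{\text{cycle}}
\]

% Rough numbers from the quote: ~25% more instructions, each completing ~5x faster.
\[
\frac{T_{\text{RISC}}}{T_{\text{CISC}}} \approx 1.25 \times \frac{1}{5} = 0.25
\quad\Longrightarrow\quad \text{roughly a } 4\times \text{ speedup}
\]
```

In other words, the longer “book” of simple words still gets read in about a quarter of the time.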
So what did this mean for the average person? Your smartphones, iPads, and pretty much every device in the Internet of Things now operate on this RISC approach.
“And so now, 99% of all smartphones, 99% of all iPads, and 100% of the Internet of Things, they all use reduced instruction set computers (RISC). So it was very controversial in the beginning, but 40 years later, RISC dominates all computers that most people use,” added Patterson.
| Feature | RISC Architecture | CISC Architecture |
|---|---|---|
| Instruction Set | Simple and limited | Complex and extensive |
| Clock Cycles per Instruction | Usually 1 cycle per instruction | Multiple cycles for some instructions |
| Instruction Length | Fixed-length instructions | Variable-length instructions |
| Execution Speed | Generally faster for simple tasks | May be faster for complex tasks |
| Code Density | Lower (more memory needed) | Higher (less memory needed) |
| Compiler Complexity | Less complex | More complex |
| Power Efficiency | Generally more power-efficient | Generally less power-efficient |
| Design Philosophy | Do simple tasks, but do them quickly | Do more with each instruction |
| Typical Use Cases | Embedded systems, mobile devices | General-purpose computing |
| Examples | ARM, MIPS | x86 (Intel, AMD) |
Modern computer architecture
Virtually all computers operating today are based on one of two instruction set architectures (ISAs). The first is the CISC-based x86: most desktops, laptops, and servers run on x86 processors from Intel or AMD. Although x86 is technically a CISC architecture, Intel has made many impressive innovations over the years, and modern x86 chips internally break instructions down into smaller, simpler RISC-like operations called micro-ops. For this reason, RISC and CISC are no longer the black-and-white distinction they once were; most CPU architectures have evolved into different shades of grey.
The other popular ISA is ARM, a RISC design that dominates Android and iOS devices as well as newer Apple computers. Virtually every smartphone in the world runs on ARM, along with many of the networked sensors and devices that make up the so-called Internet of Things.
Both x86 and ARM are proprietary instruction sets, meaning developers need to pay a fee to use these architectures in their hardware. But now there’s RISC-V (pronounced “risk-five”), the fifth generation of RISC architecture developed at UC Berkeley more than ten years ago. Unlike those ISAs, RISC-V is free and open, so developers don’t need to pay a licensing fee. This is a game-changer in an industry where proprietary architectures have long held sway.
Without the burden of licensing fees, manufacturers can produce cheaper hardware. This could lead to more affordable smartphones, laptops, and even data centers, making technology more accessible to people worldwide. The open design also gives chipmakers the flexibility to tailor processors to their needs, which is particularly beneficial for emerging technologies like the Internet of Things (IoT) and artificial intelligence, where specialized hardware can make a big difference.
RISC-V is already making waves in the tech industry. Companies like Google, Samsung, and Qualcomm are exploring its potential, and it’s becoming increasingly popular for embedded systems — those tiny computers inside everything from your car to your washing machine.
Next time you casually scroll through your smartphone or ask Siri a question, remember: The effortless speed you experience is thanks to a language revolution, one that dared to simplify the complex and, in doing so, changed the world.