- Moore’s Law isn’t a law at all; it’s a general insight into technology proposing that computing power would double roughly every two years.
- Specifically, it refers to the observation that the number of transistors that can fit on an integrated circuit doubles within that time frame, which correlates with a doubling of processing power.
- In practice, the doubling period has actually shortened to about every 18 months!
- The future of the law is not exactly bright, because of cooling limitations that arise from the relationship between power and size.
What Is Moore’s Law? – Complete Explanation
In 1965, Gordon Moore, an American engineer who went on to co-found Intel, hypothesized that the modern pace of invention would find a way to steadily increase the number of transistors that manufacturers could pack into a given area on an integrated circuit. He predicted the total would double more or less every two years.
The most important implication of this idea is that as transistors get smaller, the resulting rise in efficiency will lead to ever-greater computing power and computers that are increasingly faster and cheaper. Examples of digital electronics innovations that have happened thanks to the push toward Moore’s Law include the steady increase in flash memory and RAM capacity, the steady reduction in microprocessor prices and even the amazing progress in the pixel quality of your phone’s camera.
Moore based his statement on a history of emerging trends he had noticed in computer architecture and chip design. His intention wasn’t to create a fixed formula, and he wasn’t the one who named his observation “Moore’s Law.” His idea doesn’t fit into the definition of a real law in any legal sense or even the definition of a theory in the scientific sense. It was an insight based on historical data that, with time, turned into an uncanny prediction, which sensationalistic journalism has now simplistically codified as a golden rule.
As time passed, the specific time frame of Moore’s prediction was adjusted slightly to reflect changing reality. In the end, instead of two years, the doubling rate has held steady at about every 18 months. These small changes in rate haven’t done much to alter the massive exponential implications of Moore’s Law, and we’ve seen decades of almost nonstop innovation and opportunity in the semiconductor industry.
Part of the reason Moore’s Law has held is simply because the semiconductor industry has worked hard to make sure it did. In the years after Gordon Moore’s original prediction, semiconductor manufacturers used it as a guide to help them set targets for their long-term research and development plans. They did everything they could to keep the progress graph of the semiconductor industry in sync with the inexorably rising graph of Moore’s Law.
Many actors have poured resources into keeping Moore’s Law alive, including microprocessor manufacturers, scores of industrial and academic researchers and even governments of some of the world’s most powerful countries. This has worked to create a self-fulfilling prophecy that has driven the explosion of technological progress and the booming economic growth and productivity we’ve seen in recent history.
Moore’s Law: An Exact Definition
Moore’s Law is a reductive title for the idea, first postulated by American engineer Gordon Moore in the 1960s, that approximately every two years, technology doubles the total number of transistors that manufacturers can squeeze into an integrated circuit. In general terms, it implies that computing power increases exponentially over time.
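The arithmetic behind that definition is simple compound doubling. Here’s a minimal sketch in Python; the starting count and time span are illustrative numbers, not figures from Moore’s paper:

```python
def projected_transistors(initial_count: int, years: float, period: float = 2.0) -> int:
    """Project a transistor count forward, doubling every `period` years."""
    return int(initial_count * 2 ** (years / period))

# 20 years of two-year doublings is 2^10 = 1,024x growth,
# so a 2,000-transistor chip grows to about two million transistors.
print(projected_transistors(2_000, 20))  # 2048000
```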
How Does Moore’s Law Work?
Transistors are among the most elementary building blocks of electronic computers. They’re usually built from layers of specially treated silicon or another semiconducting material, arranged to control the flow of electrons in specific ways.
There are different kinds of transistors, but fundamentally, they conduct electricity from one place to another on a chip. They can act as switches or amplifiers: passing an electronic signal through, boosting it or blocking it altogether, which lets them implement the logic gates that perform Boolean functions.
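To illustrate how that on/off switching composes into Boolean logic, here’s a toy Python model. It treats each transistor as an ideal switch, which is a deliberate simplification; real transistors are analog devices:

```python
def nand(a: bool, b: bool) -> bool:
    # Two "switches" in series pull the output low only when both are on.
    return not (a and b)

# NAND is functionally complete: every other gate can be built from it.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

print(and_(True, False), or_(True, False))  # False True
```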
One step above transistors in the computing chain, integrated circuits or microchips are tiny squares of silicon, cut from a larger wafer, containing as many transistors as possible. The transistors are connected by thin lines called interconnects on the chip’s surface, which are generally made from aluminum, copper or sometimes gold. The more interconnected transistors a microchip manufacturer can cram into a given space, the shorter the distances signals must travel, and the faster the chip, and the computer built around it, will run.
The core of Moore’s 1965 realization was based on an engineering pattern he identified: Every new generation of transistors was smaller than the last. This engineering pattern also had an economic correlate: The cost of a transistor seemed to have an inversely proportional relationship to the number of transistors produced. The formula seemed to be that the more transistors were made by manufacturers, the cheaper it was to make them. Together, these two ideas seemed to imply that computing power could grow at an exponential rate.
As integrated circuits became smaller, faster and more powerful, they began to evolve into a kind of general purpose technology. Rather than doing one specific thing, the integrated circuit, like the wheel, could be applied to many different areas and industries. The exponential explosion of integrated circuits is behind countless technological breakthroughs, including smartphones and laptops, GPS, nanotechnology, artificial intelligence and genetic medicine. Some economists credit the integrated circuit with as much as a third of the economic growth in the USA since the 1970s.
Who Discovered Moore’s Law?
Gordon Moore received his bachelor’s degree in chemistry in 1950 from the University of California, Berkeley, and then his doctorate, also in chemistry, from Caltech four years later. Upon graduation, he joined the Applied Physics Laboratory at Maryland’s Johns Hopkins University to study solid rocket propellants. After two years of working there with the U.S. Navy to improve antiaircraft missiles, Moore decided to move to private industry, where his research would face fewer limits and less red tape.
In 1956, Moore joined California’s Shockley Semiconductor Laboratory to research better manufacturing practices for silicon transistors under William Shockley. Shockley had won a shared Nobel Prize for his work in helping to invent the transistor. In less than two years under Shockley’s hectic management, Moore had had enough. He resigned to open a new company, Fairchild Semiconductor Corporation, along with seven other colleagues, including Robert Noyce, a co-inventor of the integrated circuit.
In 1968, Moore and Noyce exited Fairchild to start a new company, which they called Intel Corporation. Their focus would be on improving microchips by having the company’s scientists, engineers and other researchers work directly on chip production to try to bring theory closer to practice. This marriage brought Intel its first of many commercial successes: semiconductor memory chips built with metal-oxide-semiconductor (MOS) technology.
In 1965, shortly before Moore left Fairchild, Electronics magazine asked him to predict the near future of electronic technology for a special issue. In response, he wrote a short paper titled “Cramming More Components Onto Integrated Circuits.” He’d observed that the total number of components in a circuit was doubling more or less every year, so he projected this trend forward. Moore’s original prediction was that in 1975, 10 years from his article, each integrated circuit would be able to hold 65,000 components.
His prediction didn’t turn out to be dead on target, but it was pretty close. When 1975 came around, Moore saw that the history of growth was a bit slower than he’d thought, so he extended his doubling time frame from one year to two years. This revised formula turned out to be pessimistic. In the more or less 50 years since his prediction, the number of transistors has actually doubled around every 18 months.
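The difference between the 1965 and 1975 versions of the prediction is easy to see numerically. A sketch assuming a 1965 baseline of roughly 64 components per chip (an illustrative figure, chosen because annual doubling from it reproduces Moore’s famous 65,000 projection):

```python
def doublings(initial: int, years: float, period_years: float) -> int:
    """Component count after `years` of doubling every `period_years`."""
    return int(initial * 2 ** (years / period_years))

print(doublings(64, 10, 1.0))  # 65536 -- Moore's original one-year pace
print(doublings(64, 10, 2.0))  # 2048  -- his revised two-year pace
print(doublings(64, 10, 1.5))  # ~6500 -- the observed ~18-month pace
```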
What Are the Applications of Moore’s Law?
Thanks to the semiconductor industry’s pursuit of Moore’s Law, the speed and efficiency of computing has steadily increased while the cost has fallen. As integrated circuits and their transistors become more and more microscopic, all related digital technology gets faster, smaller and generally cheaper.
As digital technology gets better, basically every part of our digital society improves along with it. Devices become more mobile, spreadsheets become easier to use, GPS systems and weather forecasts get more accurate, etc. Examples of industries that improve with faster and smaller computation include education, energy production, nanotechnology, health care, transportation and many more.
Examples of Moore’s Law in the Real World
In practice, Moore’s Law means that every two years or so, electronic devices and personal computers are able to do twice as much as they could before.
In 1970, a chip containing 2,000 transistors cost around $1,000. In 1972, you could buy that same chip for about $500, and in 1974, its price went down to about $250. In 1990, the price of that same amount of transistors was only $0.97, and today, the cost is under $0.02.
In 1990, a typical personal computer cost around $3,000. In 1992, the same amount of computing power cost $1,500 and fell to $750 by 1994. Today, it would only cost around $5.
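Those dollar figures follow a simple halving curve. A hedged sketch that treats the article’s numbers as an idealized series (real prices moved far less smoothly):

```python
def price_after(initial_price: float, years: float, halving_period: float = 2.0) -> float:
    """Price of a fixed amount of computing power after `years` of halving."""
    return initial_price / 2 ** (years / halving_period)

# The $1,000 chip from 1970, halving every two years:
for year in (1970, 1972, 1974):
    print(year, price_after(1_000, year - 1970))
# 1970 1000.0 / 1972 500.0 / 1974 250.0
```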
The reliably shrinking dimensions of transistors were what made possible this exponential explosion of circuit complexity. In the 1940s, manufacturers measured transistors in millimeters. By the 1980s, transistors were down to less than a micrometer, a millionth of a meter, which allowed dynamic random-access memory chips to offer storage capabilities measured in megabytes.
In the 2000s, transistors measured around 0.1 micrometer, and memory chips offered storage capabilities in the gigabyte range. In the 2010s, manufacturers began to measure transistors in nanometers; a nanometer is one billionth of a meter. This represented a reduction factor of more than 100,000 from the 1940s.
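That 100,000x figure is straightforward unit arithmetic, assuming a roughly 1 mm feature in the 1940s and a roughly 10 nm feature in the 2010s (representative values, not exact process nodes):

```python
# Work in nanometers to keep the arithmetic exact.
feature_1940s_nm = 1_000_000  # ~1 mm expressed in nanometers
feature_2010s_nm = 10         # ~10 nm

reduction_factor = feature_1940s_nm // feature_2010s_nm
print(reduction_factor)  # 100000
```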
As Moore’s Law continues into the 2020s, we’re beginning to see advances in nanotechnology that use three-dimensional patterns to allow manufacturers to place even more tiny transistors on their chips. The latest microchips contain more than 50 billion transistors and require an extremely delicate placement process involving multiple exposures and enormous complexity.
Most of this increase in microchip complexity has involved scaling innovations. Now that we’ve reached the nanotechnology level, scale complexity is slowing down the progress graph of Moore’s Law. The evolution of computer architecture is reaching molecular limits, and the end result of each new generation of chips is showing less dramatic improvements in performance, power reduction and density.
The Future of Moore’s Law
Some technologists believe Moore’s Law is coming to an end and will be dead within a few decades. They project that computer architecture will reach its limits because transistors won’t be able to operate reliably at the ever-higher temperatures produced by smaller, denser circuits. Keeping the tiny transistors cool will end up taking more energy than the transistors themselves consume.
Other experts believe this might not mean the end of Moore’s Law but rather just a dramatic change in the way we keep pace with it. Moore’s Law might not be dead after all; it might just be evolving from one kind of complexity to another. An increasing number of computer scientists believe that as we reach the limits of scale complexity, innovations in systemic complexity will become the main technology keeping Moore’s Law on track.
For decades, the roadblocks to progress were mainly the physical size of transistors. The semiconductor industry evaded these by inventing smaller, more complex designs that could corral electrons better.
Once chips reached the nanometer domain, the price of production started to become an issue. The wavelengths of visible light formerly used to carve features into the silicon became too wide for the required precision, so the industry invented new methods of lithography involving ultraviolet radiation. These new methods allowed the march toward ever-tinier transistors to continue, but they weren’t cheap. Economists estimate that the price of the research that upholds Moore’s Law has jumped by a factor of 18 since the 1970s. Fortunately, this has so far been offset by sheer production numbers.
What keeps Moore’s Law going isn’t one specific thing but rather around 100 different variables, each with its own limit and cost-to-benefit ratio. As advances in scaling slow down, the industry has been able to keep the exponential growth going by improving some of the other variables. One of the most promising is systemic hardware complexity: the idea of finding new ways to integrate transistors into chips, such as 3D architectures.
Another interesting variable is systemic software complexity, the idea of focusing on cutting down bloated software to get more out of existing microchips. When chips were constantly improving at a steady rate, the code that ran on them didn’t need to be as efficient as possible. Programmers also often neglected to use new hardware designs like multiple cores to their greatest potential. For example, researchers were recently able to take a typical piece of Python code and cut its running time from seven hours to 0.41 seconds by customizing it to fully take advantage of a new chip design with an 18-core processor.
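The researchers’ actual benchmark (a matrix-multiplication workload) isn’t reproduced here; the following is a hedged illustration of the same principle, showing how moving work out of a slow interpreted loop into a closed-form expression yields the same answer far faster, with no hardware change:

```python
import timeit

def sum_of_squares_loop(n: int) -> int:
    """Naive interpreted loop: every iteration runs in Python bytecode."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_of_squares_closed_form(n: int) -> int:
    """Closed form for 0^2 + 1^2 + ... + (n-1)^2: constant-time arithmetic."""
    m = n - 1
    return m * (m + 1) * (2 * m + 1) // 6

n = 100_000
assert sum_of_squares_loop(n) == sum_of_squares_closed_form(n)

# The closed form is typically thousands of times faster than the loop.
print(timeit.timeit(lambda: sum_of_squares_loop(n), number=10))
print(timeit.timeit(lambda: sum_of_squares_closed_form(n), number=10))
```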
Even if we only stick to current methods, however, experts at Intel believe they’ll be able to keep pace with Moore’s Law for at least the next 10 years. If the best current chips hold around 50 billion transistors and that number continues to double every two years, in 10 years the math comes to around 1.6 trillion transistors per chip. That’s roughly a 32-fold increase over today’s processing power.
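That projection is just five successive doublings. Checking the math:

```python
transistors_today = 50_000_000_000  # ~50 billion on the best current chips
years_ahead = 10
doubling_period = 2

num_doublings = years_ahead // doubling_period  # 5
projected = transistors_today * 2 ** num_doublings

print(projected)           # 1600000000000 -- about 1.6 trillion
print(2 ** num_doublings)  # 32 -- a 32x increase over today
```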
We’ve got plenty more articles about computer components for you to read!
- Cathode Ray Tube Explained – Everything You Need To Know. In its day, the CRT was known as the most complicated and advanced piece of consumer technology.
- Integrated Circuit (IC) Explained — Everything You Need to Know. Find out more about circuits in this article.
- Printed Circuit Board Explained — Everything You Need To Know. These were first invented way back in 1925!