Semiconductor Advancement: Is Moore’s Law Finally Dead?

Semiconductor technology continues to advance, but is a slowdown inevitable? Opinions differ.

John Edwards, Technology Journalist & Author

February 13, 2024

4 Min Read
Silicon wafers and microcircuits: slices of semiconductor material used in the fabrication of integrated circuits. (Image: wu kailiang via Alamy Stock)

According to the IEEE's IRDS Technical Community, more than 100 billion integrated circuits are in daily use worldwide. Demand for ICs continues to grow, driven in large part by the rapidly developing markets for artificial intelligence, autonomous vehicles, and the Internet of Things (IoT).

So far, the semiconductor industry has managed to build increasingly powerful integrated devices, allowing electronic innovations to develop at a staggering rate. But can such progress be sustained, or is a slowdown inevitable?

Promising Advancements

Semiconductor companies will continue working toward greater chip power efficiency while developing AI chips tailored for specific applications, predicts Syed Alam, global semiconductor lead at business advisory firm Accenture, in an email interview. “Such chips will allow more efficient processing of AI-related tasks,” he notes.

Alam adds that advancements in chiplet design and high numerical aperture extreme ultraviolet lithography (high NA EUV) will also contribute to continued success in making semiconductors ever smaller.

The semiconductor industry is likely to see several advancements in the coming years as it strives to achieve its goal of reaching $1 trillion in revenue by 2030, predicts Wayne Rickard, CEO of next-generation materials and processes firm Terecircuits, via email. He observes that a significant trend is the continued miniaturization of components, with producers pushing existing boundaries to develop smaller and more advanced fabrication processes.


Taiwan-based firm TSMC, for example, has already demonstrated a two-nanometer process, with industry roadmaps now heading into the angstrom range and approaching the scale of individual silicon atoms, Rickard explains. “However, the diminishing returns and increasing costs associated with building fabs at such small nodes are becoming apparent,” he says. To address this challenge, the semiconductor industry is exploring alternative approaches to design and manufacturing. “One key strategy involves the disaggregation of functions, recognizing that not all components need to be on the latest, smallest process node for optimal performance.”
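To put those node sizes in perspective, a rough back-of-envelope check is easy to sketch (the constants below are approximations, and modern node names are marketing labels rather than literal feature dimensions):

```python
# Scale check for a "2 nm" process node (approximate constants, for illustration).
NM_TO_ANGSTROM = 10.0            # 1 nanometer = 10 angstroms
SILICON_ATOM_DIAMETER_NM = 0.22  # approximate covalent diameter of a silicon atom

node_nm = 2.0  # a "2 nm" node label

print(node_nm * NM_TO_ANGSTROM)            # 20.0 angstroms
print(node_nm / SILICON_ATOM_DIAMETER_NM)  # ~9 silicon-atom diameters
```

In other words, two nanometers is 20 angstroms, only about nine silicon atoms across, which is why the next generation of nodes is being named in angstroms and why further shrinkage runs into atomic-scale limits.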

Device specialization is also transforming the semiconductor industry. “This includes componentry for processing, such as CPUs, GPUs and TPUs,” says Pete Hazen, corporate vice president of Microchip Technology’s data center solutions business unit, in an email interview. “We also expect to see advancements with different approaches to power generation, through utilizing materials such as silicon carbide and gallium nitride.”


Another promising advancement, Hazen says, is multi-component systems designed for applications such as heterogeneous computing, in which systems use more than one kind of processor or core. “We expect new approaches, for example, with the Universal Chiplet Interconnect Express standard, to enable increased modularity, customization, and scalability,” he notes. “We will also see the implementation of new standards to drive continued improvements in latency and bandwidth, as well as with increased efficiency through disaggregation of memory, such as through utilization of the Compute Express Link (CXL) specification for data centers.”

Breaking Moore’s Law

Moore’s Law, postulated by Intel co-founder Gordon Moore in an article published in Electronics magazine (April 19, 1965), is the observation that the number of transistors in an integrated circuit doubles approximately every two years. (Moore’s original 1965 projection was an annual doubling; he revised the period to two years in 1975.)
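The observation amounts to simple exponential growth, which can be sketched in a few lines (the 4004 figure below is the commonly cited count for Intel's first microprocessor; the projection is illustrative, not a claim about any specific chip):

```python
# Moore's Law as a formula: N(t) = N0 * 2 ** ((t - t0) / T),
# where T is the doubling period (~2 years).

def projected_transistors(n0: float, start_year: int, year: int,
                          doubling_years: float = 2.0) -> float:
    """Project transistor count assuming a fixed doubling period."""
    return n0 * 2 ** ((year - start_year) / doubling_years)

# Intel's 4004 (1971) had roughly 2,300 transistors. Fifty years of
# two-year doublings is 25 doublings:
projection = projected_transistors(2300, 1971, 2021)
print(f"{projection:,.0f}")  # ~77 billion
```

Twenty-five doublings turn 2,300 transistors into roughly 77 billion, which is the right order of magnitude for the largest monolithic chips of the early 2020s, a sign of how well the trend held for half a century.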

Discussion surrounding Moore’s Law’s possible demise is gaining traction, particularly as the effects of transistor scaling diminish, Rickard says. “Many industry experts predict a shift in the traditional trajectory of Moore’s Law, not necessarily its outright end,” he observes. “The incremental benefits traditionally derived from transistor scaling are dwindling, prompting a reevaluation of semiconductor approaches.”


Alam believes that Moore’s Law, at least in its traditional sense of die shrinkage, may approach its physical limitations within the next eight to 12 years. “But we currently continue to follow Moore’s Law in the broader sense, utilizing chiplets and advanced packaging for improved power performance.”

Final Thoughts

The semiconductor hardware industry is at an inflection point, claims Deep Jariwala, an associate engineering professor at the University of Pennsylvania, in an email interview. “These inflection points normally happen when either the hardware or the software is trying to beat the other as the driver of the industry.”

Looking forward, Jariwala believes that the most advanced software AI will be limited by current hardware. “Even if you conceptually create and code the most amazing LLM or AI model, if you don’t have the right hardware -- good luck running it,” he says.

The next one to two decades will be very exciting and, perhaps in some ways, unpredictable, Jariwala says. “We know for sure that semiconductors will still be the dominant technology for computing hardware, but silicon might not remain as the sole semiconductor in play,” he notes. “Likewise, various new flavors may get added to standard silicon electronic hardware, such as photonic chips, neuromorphic chips, compute-in-memory chips, etcetera.”


About the Author

John Edwards

Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
