11/10/2016 01:05 PM

Google's Urs Holzle: Moore's Law Is Ending

Google's Urs Holzle says cloud suppliers won't be able to bank the gains of Moore's Law much longer and will have to eke out advances elsewhere.


The cloud is currently an assembly of commodity technologies organized on a massive scale. To sustain its ability to offer leading-edge performance, however, it is on its way to becoming a mix of commodity and advanced, specialized technologies.

The cloud needs more advanced, specialized technologies because Moore's Law is running out of steam, according to Google's Urs Holzle, the company's senior vice president for technical infrastructure and a Google Fellow. Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years (popularly quoted as every 18 months), is expected to reach its end sometime in 2021, according to the IEEE's Spectrum publication.
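
To see what that cadence implies, here is a small illustrative calculation (the starting count and the two-year doubling period are assumptions for the example, not figures from Holzle or Spectrum):

    # Illustrative only: project chip transistor counts under Moore's Law-style doubling.
    # The starting count and the doubling period are assumed for the example.

    def projected_count(initial_count, years, doubling_period_years=2.0):
        """Transistor count after `years` if the count doubles every `doubling_period_years`."""
        return initial_count * 2 ** (years / doubling_period_years)

    # A nominal 1-billion-transistor chip, doubling every two years,
    # reaches roughly 32 times that count over a decade.
    print(f"{projected_count(1e9, 10):.2e}")  # ~3.20e+10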

The growth in CPU power since Gordon Moore first announced his law in 1965 is what's allowed Google and other handlers of big data to continually improve performance.

(Image: serg3d/iStockphoto)

However, that free ride cannot be relied upon forever.

If the growth curve of computing begins to level off, neither Google nor enterprise IT can let data keep increasing two to five times a year without costs climbing in step. If anything, machine learning, artificial intelligence, and business analytics will require a constantly expanding supply of compute without a matching increase in cost.
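
To make that cost pressure concrete, here is a rough back-of-the-envelope sketch (illustrative numbers, not figures from Holzle): if workloads grow by a fixed multiple each year while performance per dollar stays flat, the compute bill grows by the same multiple.

    # Rough illustration: how a compute bill compounds when workloads grow by a
    # fixed multiple each year and performance per dollar does not improve.
    # The growth rates below reflect the 2x-5x range mentioned in the article.

    def cost_multiple(annual_growth, years):
        """Factor by which costs grow if spend tracks workload growth exactly."""
        return annual_growth ** years

    for growth in (2, 3, 5):
        print(f"{growth}x/year over 3 years -> {cost_multiple(growth, 3):.0f}x the bill")
    # 2x -> 8x, 3x -> 27x, 5x -> 125x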

That was a vision that Holzle tried to convey to an audience at the Structure 2016 event, held Nov. 8 and 9 in San Francisco. Holzle, the first guest speaker at the conference, discussed the issue with Nicole Hemsoth, co-editor of The Next Platform, an online news source about high-performance computing.

"A surprising number of customers have a need for large-scale computation," he said. Suppliers, such as Google's Compute Platform, need to evolve to ensure that that capability is available, without the changes proving disruptive to end-users. The cloud supplier is in a better position to weave in new technology than every enterprise that's trying to solve an issue by itself.

"In the cloud, it's easier to insert new technology," he said. Some of the gains Google is considering will increase performance by 30%, instead of the 100% achieved by Moore's Law. But Holzle said Google must take the gains where it can find them.

"Infrastructure is one of those things, if you do it right, nobody cares about it," said Holzle. But getting it right, as the significance of Moore's Law fades, will be increasingly a challenge.

"Moore's Law is a problem for IT as well," he noted. IT inevitably has a growing amount of data and workloads. If the bill to process them grows at the same rate as the workloads, it will risk putting many enterprises out of business. "That gets people in trouble."

Greater use of flash memory and faster data movement inside the server, enabled by the OpenCAPI standard, are two of the ways cloud suppliers will keep expanding their ability to run compute-intensive workloads.

OpenCAPI, the Open Coherent Accelerator Processor Interface, is a high-speed, coherent link that moves data within the server at 25 Gbps. It lets attached accelerators and devices work with system memory at close to the speed of random access memory, expanding the capacity and speed of operation of the server.
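
As a rough sense of scale (an illustrative conversion, not a figure from the article), a 25 Gbps serial link works out to a little over 3 GB/s per lane before protocol overhead; aggregate bandwidth depends on how many lanes a design gangs together.

    # Rough conversion: per-lane bandwidth of a 25 Gbps serial link.
    # Real throughput depends on encoding, protocol overhead, and lane count,
    # none of which are specified here.

    line_rate_gbps = 25
    bytes_per_second = line_rate_gbps * 1e9 / 8   # bits to bytes
    print(f"~{bytes_per_second / 1e9:.2f} GB/s per lane")  # ~3.12 GB/s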

Dell, Hewlett Packard Enterprise, IBM, and Google are all backers of the new standard. Google and Rackspace have expressed interest in buying next-generation Power9 chips that support the OpenCAPI standard for their cloud operations. Graphics processing units and other accelerators sitting alongside the CPUs could then move data around the server at high rates of speed.

Google Fellow Urs Holzle
(Image: Google)

Whether Google and Rackspace would use them for general purpose infrastructure or for data-intensive workloads isn't known at this time.

IBM is designing servers around Power9 that are due out sometime in 2017. It agreed to sell its chip-manufacturing operations to GlobalFoundries in 2014, while keeping its Power processor design work in house.

[Want to see how Google relies on software-defined networking? Read Google's Infrastructure Chief Talks SDN.]

Also at Structure 2016, Facebook's Jay Parikh said the social media company has contributed the plans for its Backpack switch to the Open Compute Project. Backpack is a modular design that combines switch "elements" to scale from 40 Gbps up to 100 Gbps. It is considered a second generation in Open Compute's switch-creation effort.

"The Open Compute Foundation has received the Backpack specification," said Jay Parikh, Facebook's vice president of engineering, at the event Nov. 9. Facebook has used its Wedge and other switch designs in the construction of its own data centers. Equinix has also started adopting OpenCompute switches in its carrier-neutral cloud data centers.

The Open Compute Project makes hardware specifications available for anyone to use. Several large financial services firms, such as Goldman Sachs and Capital One, have been equipping their data centers with servers, racks, and switches that follow its specifications.

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ... View Full Bio

Comments
melgross, User Rank: Ninja
11/30/2016 | 11:56:23 AM
Re: Other areas of opportunity
A nice thought. But if we see 10% performance increases in software every 18 months, I'd be more than happy. There's only so much you can improve software. After a point, it's almost impossible. Software has depended upon hardware improvements throughout computing history. This will be a shock to the software industry. No more will they be able to just throw half-baked and not really needed features in to sell an upgrade. By the mid-2020s, it may become very difficult to sell upgrades, and software subscriptions that are sold with the implied promise of continuing improvements will also become a hard sell. Why pay every year when there's nothing really useful coming down the pipe?
melgross, User Rank: Ninja
11/24/2016 | 10:09:34 AM
Re: We no longer refer to "warehouse-scale"
I would be amazed if his 30% a year comes true after the first couple of years or so. Once the low-hanging fruit is picked, the rest will be much harder to achieve. As I keep reminding people, it's not yet assured that we will be able to go to 5nm. That's just a hope right now, as some of the methods we're now using won't work at that size, and replacement technologies haven't been found yet. If 7nm is the practical limit, we're going to reach the end before 2021. After that, it will be a matter of finding more and more clever ways around designs. We can see from Intel that eking out more performance is difficult. Others have found even more of a problem at the same nodes. More cores aren't a solution either. It will be an interesting decade.
moarsauce123, User Rank: Ninja
11/21/2016 | 7:43:19 AM
Other areas of opportunity
For the longest time, the solution to performance issues was to throw more memory and processors at the job. With that being less of an option, we need to shift to two areas of opportunity:

- faster and more reliable networking

- application performance and footprint

Better networking already exists, except that the US refuses to spend the money on better infrastructure. There is not enough competition in the market, especially at the consumer level. There is typically one provider for networking services and that provider milks the outdated infrastructure as much as possible with high access fees. Great for their business, but overall a massive detractor to distributed computing and cloud services.

I see more chance in optimizing applications for performance and footprint. Developers need to go back to the idea that they are coding on a Commodore 64. Memory is at a premium, disk access is slow, and networking has huge latency if it is available at all. Code within these constraints and get crafty. The FOSS folks at times go in that direction, while companies like Microsoft go the other way. The footprint of Windows without any apps installed is huge. Yes, there are versions that are very lean, but they drop us back into the stone age of command lines. Others can do better, and with a GUI.

Moore's Law needs to be applied to software: every 18 months, make your apps run twice as well using the same amount of resources as before.
GIGABOB, User Rank: Strategist
11/14/2016 | 2:14:48 PM
Moore's Law is actually dead - but it can be resuscitated
Moore's Law has really been a guideline touting an economic benefit that accrues from physically shrinking IC feature widths. As such, that guideline, a halving of costs as device densities double, is effectively dead, as the costs to drive device density have skyrocketed. A basic fab is now $10B, up from $1B a decade ago. The tools to populate the fab have not changed radically in over a decade, as EUV scanners keep being another generation away.

The potential resuscitation for the data center will come as FPGA-based controllers provide the intelligence to manage scaled fabrics for routing packets across server clusters.

But until we get off silicon, it is questionable whether a path to cost reductions or density increases will continue. It appears the path forward for many is to step back and reduce costs using "good enough" older fabs that are fully depreciated. Looks like a longer period of stasis in process technology while we look at changes in architecture.
Charlie Babcock, User Rank: Author
11/10/2016 | 2:44:54 PM
We no longer refer to "warehouse-scale"
Holzle is of course the co-author of one of the most important early cloud documents, The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, sometimes shortened to The Data Center Is the Computer.