
Nvidia Builds A Dream Machine

Give Steve Scott 18 zeroes and a lot of electricity and he can change the world

Published: Mar 15, 2012 06:43:23 AM IST
Updated: Mar 7, 2012 03:45:32 PM IST
Image: Thomas Strand for Forbes

Steve Scott wants to build a supercomputer that can perform 1 quintillion floating-point operations per second. That’s a one followed by 18 zeroes or, in computer speak, an exaflop. Such a machine would be a billion times faster than a MacBook Air and could design wildly efficient combustion engines, simulate the workings of an entire cell and model a clean-burning fusion reactor. With enough zeroes Steve Scott can change the world.

The reason no one has built an exascale computer yet is the electric bill. An exaflop machine using today’s standard x86 processors would draw 2 gigawatts of electricity, the maximum output of the Hoover Dam. The biggest supercomputer ever built handles 11 quadrillion flops and draws 13 megawatts, the juice of nine wind turbines. Scott, one of the world’s leading supercomputing engineers, sees a day coming when we can have computers a thousand times faster than that, without using that much more power.
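The scale of the problem follows directly from those figures. A back-of-the-envelope calculation, using only the numbers quoted above, puts today’s best machine at

\[
\frac{11 \times 10^{15}\ \text{flops}}{13 \times 10^{6}\ \text{watts}} \approx 0.85 \times 10^{9}\ \text{flops per watt}.
\]

At roughly that efficiency, a quintillion flops would demand about \(10^{18} / (0.85 \times 10^{9}) \approx 1.2 \times 10^{9}\) watts, on the order of a gigawatt of electricity.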

The prospect has brought Scott, 45, to Nvidia, the Santa Clara, California, company that is the world’s largest maker of graphics-processing units, or GPUs. Nvidia chips are also prized by the supercomputer community because they can handle six to eight times more operations per unit of energy than an Intel chip. Lash together thousands of them and you get a power-sipping supercomputer.

Scott’s last job was as chief technology officer of supercomputer manufacturer Cray. In the spring of 2009 Intel pulled out of a joint effort to build supercomputer processors with Cray. “I was definitely disappointed,” says Scott, but he doesn’t blame Intel. “The high-performance computing market just isn’t big enough to support the development of competitive processors.”

It’s a humbling admission for Scott, a 19-year veteran of Cray with a PhD in computer architecture from the University of Wisconsin and 27 patents to his name. “Scott is at one of those interesting intersections,” Nvidia Chief Executive Jen-Hsun Huang says. “As a computer architect he’s a geek at heart, and yet he really lives to understand customers and markets.”

Supercomputers are a little more than a third of the $8.6 billion market for high-performance computers, according to IDC, but they are a fast-growing and highly profitable slice that brings great PR to hardware makers.

Overall sales of high-performance computers will rise 56 percent to $13.4 billion over the next three years, according to IDC. Nvidia can keep growing in supercomputers by exploiting its energy-efficiency edge. Its chips have dozens of simple cores that tackle lots of repetitive computations at once.

Supercomputer scientists have been pushing the idea of parallel computing for years: many simple cores working side by side get more done per watt than the few complex cores in a mainstream CPU such as an Intel Core i5. Pairing Intel or AMD chips with Nvidia’s graphics chips in supercomputers results in machines that are three times more efficient than ones that rely on CPUs alone.
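To make that parallel style concrete, here is a minimal sketch of the kind of data-parallel code GPUs are built for; it is an illustrative CUDA example written for this piece, not code from any of the machines described here. Thousands of lightweight threads each apply the same simple operation to one element of a large array.

#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one array element: the same simple
// multiply-and-add, repeated across a million elements at once.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;                      // about a million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

In the CPU-plus-GPU pairings the article describes, the conventional processor still runs the serial parts of a program; the GPU’s job is exactly this kind of wide, repetitive arithmetic, which is where the energy savings come from.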

Three of the world’s five fastest supercomputers use Nvidia’s processors. In October, the Oak Ridge National Laboratory announced an effort to build the world’s fastest supercomputer, which will use AMD Opteron chips and 18,000 of Nvidia’s GPUs.

The Department of Energy has announced it would like a machine that can hit exascale speeds using just 20 megawatts of power. The same technology could be used to build machines able to do the work of today’s supercomputers on a much smaller power budget.
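Meeting that target would mean a machine delivering

\[
\frac{10^{18}\ \text{flops}}{20 \times 10^{6}\ \text{watts}} = 50 \times 10^{9}\ \text{flops per watt},
\]

roughly 60 times the energy efficiency of today’s fastest system, going by the figures earlier in this piece.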

“That would allow a small engineering group to do things that today can only be done by a rarefied few,” says Scott. He thinks we can hit that mark by the end of the decade.

(This story appears in the 16 March, 2012 issue of Forbes India.)
