Q&A: Intel CTO Justin Rattner

In the unexpurgated version of my InformationWeek interview with the chip giant's chief technology officer, Rattner dishes on multicore processors as data-center-class solutions, Internet-wide subnets for cloud security, HPC trickle-down, and building Gigabit-class routers out of standard parts.

Alex Wolfe

November 28, 2009


Intel chief technology officer Justin Rattner made his technology bones in the 1980s, pushing the supercomputing industry away from unsustainable, expensive proprietary architectures and toward affordable, off-the-shelf microprocessors. He brings the same perspective, that advanced research must deliver an economic benefit, to his role as head of Intel Labs.

We caught up with Rattner to talk about the disruptive work he's currently championing in envisioning the high-performance system of the future. Rattner offered insight into the impact of multicore processors, how encryption will help secure cloud computing, HPC trickle-down, and the possibility of building Gigabit-class routers out of standard parts.

NetworkComputing: Multicore processing is exploding, as evidenced in designs like Intel's 80-core Tera-Scale prototype processor. Do you think that, ultimately, we will see entire data centers implemented in silicon?

Rattner: How we can use large numbers of relatively simple processors to construct data-center-class solutions is an important topic for us. We just won the best-paper prize at the Symposium on Operating Systems Principles with our collaborators at Carnegie Mellon for something called FAWN, which stands for "fast array of wimpy nodes." It's the idea that, if we could build tomorrow's processors out of arrays of relatively simple cores, we could deliver data-center-class solutions. It would be data centers on chips, and then arrays of those chips.
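
To make the "wimpy nodes" idea concrete, here is a minimal, hypothetical sketch of spreading a key-value workload across many small nodes by hashing each key to an owner. The class and its scheme are invented for illustration; FAWN's actual design (log-structured flash stores, consistent hashing) is more sophisticated.

```python
import hashlib

# Toy sketch of the FAWN idea: many small ("wimpy") nodes, each owning a
# hash-determined slice of the key space. Names here are illustrative.
class WimpyCluster:
    def __init__(self, num_nodes):
        # Each dict stands in for one node's local key-value store.
        self.nodes = [{} for _ in range(num_nodes)]

    def _owner(self, key):
        # Hash the key to pick which node owns it.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % len(self.nodes)

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

cluster = WimpyCluster(num_nodes=64)
cluster.put("user:42", "alice")
print(cluster.get("user:42"))  # -> alice
```

The point of the sketch is that capacity scales by adding cheap nodes, not by making any single node bigger.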

NetworkComputing: Does this set up a possible race between virtual and physical processors, because with the chips you're talking about, you'll have so many physical cores you won't need virtual instances?

Rattner: I've introduced a new word into my vocabulary, which is physicalization. This is exactly what you're describing. If an individual core is so inexpensive, why go to all the trouble to virtualize it? Just allocate some number of physical cores to the problem.

What we're also trying to understand is, what leads to the most energy-efficient solution? Am I more energy efficient if I take a big core and virtualize it many ways than I would be if I took lots of simple cores and handed them out as the workloads demanded? I can't tell you what the answer is, but things are looking pretty good for the small cores.
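
A toy illustration of what "physicalization" might look like in practice: rather than multiplexing virtual CPUs onto one big core, hand whole physical cores to workloads and take them back when done. The allocator below is invented for this sketch, not any actual Intel mechanism.

```python
# Hypothetical core allocator: workloads receive dedicated physical cores
# instead of virtualized time slices.
class CoreAllocator:
    def __init__(self, num_cores):
        self.free = set(range(num_cores))
        self.owner = {}  # core index -> workload name

    def allocate(self, workload, count):
        if count > len(self.free):
            raise RuntimeError("not enough free cores")
        grant = {self.free.pop() for _ in range(count)}
        for core in grant:
            self.owner[core] = workload
        return grant

    def release(self, cores):
        for core in cores:
            self.owner.pop(core, None)
            self.free.add(core)

alloc = CoreAllocator(num_cores=48)
web_cores = alloc.allocate("web-frontend", 8)
db_cores = alloc.allocate("database", 16)
print(len(alloc.free))  # 24 cores still free
```

The bookkeeping is trivially simple, which is exactly the appeal: no hypervisor scheduling, just physical assignment.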

NetworkComputing: You also believe general-purpose processors have a place in networking equipment, correct?

Rattner: We have a project out of our Intel Berkeley Lab called Router Bricks. The whole idea is taking servers and 10-Gb Ethernet, and building Gigabit-class routers out of standard parts. [The idea is], what if we could do the bulk of networking using standard server hardware, so that routing essentially becomes a software application?

NetworkComputing: So this means that networking hardware, like general-purpose computing hardware, will tilt toward commoditization.

Rattner: Right. I think a lot of people who run big networks and big network centers look at their floors and see a lot of very specialized equipment, which they're clearly paying a premium for. They're asking the question: how can we move that to high-volume, standards-based servers? So it's definitely an area we're going to continue to work on.

NetworkComputing: The logical inference, then, is that the big challenge isn't compute power, it's I/O.

Rattner: Exactly. In fact, the reason we're even able to have this conversation is that, in the new generation of Nehalem processors from Intel, we have incorporated features, like the cache architecture, that make us particularly efficient at packet processing.

So we can grab the packet header as it comes in from the NIC and make sure it goes directly into cache. That saves the processor the cache miss of having to go out to main memory to fetch the header before taking it apart.

Those designs were based on studies of what the specialized equipment does to get performance. When you're able to break it down to the essential functions, you come to realize it's relatively easy to incorporate those into a general-purpose setting.

You don't have to have special-purpose hardware to do it. You just need a few key functions that are implemented at the bare hardware level and then everything else can be implemented in software.
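
Once the header is in cache, "routing as a software application" means forwarding logic is ordinary code operating on ordinary bytes. A minimal sketch, parsing the fixed 20-byte IPv4 header with Python's standard `struct` module; the packet below is hand-built for the example and is not drawn from Router Bricks itself.

```python
import struct

def parse_ipv4_header(data):
    """Unpack the fixed 20-byte IPv4 header into a dict of key fields."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-crafted header: version 4, TTL 64, protocol 6 (TCP),
# 10.0.0.1 -> 192.168.1.1 (checksum left at zero for the sketch).
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([192, 168, 1, 1]))
fields = parse_ipv4_header(header)
print(fields["src"], "->", fields["dst"])  # 10.0.0.1 -> 192.168.1.1
```

From here, a software router would consult a forwarding table keyed on the destination address, which is exactly the kind of work a general-purpose core handles well when the header is already cache-resident.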

NetworkComputing: You've also explored having the processor handle security. Tell us about that.

Rattner: We have manageability engines (MEs), which underlie our vPro architecture. We use those both for manageability and for security. Longer term, we need a general-purpose solution: an architectural breakthrough that allows an open platform to selectively and programmatically become closed during a secure computational phase. What we ultimately need is the ability to go into stealth mode for brief periods of time and then come back into the open.

As we look beyond the manageability engine, we're researching a general-purpose solution for being able to run high-trust computations on the open platform.

NetworkComputing: This relates to the big issue concerning everyone nowadays -- security in the cloud.

Rattner: We're working with Microsoft and Cisco and some other folks on something we call network enclaves, an architecture that allows for dramatically simpler cryptographic key management. It lets you build Internet-wide subnets that are completely secure. Plus, the IT folks don't have to manage the individual keys, because they're derived from a single master key associated with the enclave. It's going to take a few more years to get this to market.
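
The key-management idea here can be sketched with a generic HMAC-based derivation: every member's key comes from one master secret, so IT manages a single key rather than thousands. This is an illustrative scheme only, not Intel's actual enclave protocol, and the names below are invented.

```python
import hmac
import hashlib

def derive_member_key(master_key: bytes, member_id: str) -> bytes:
    # Derive a per-member key by HMACing the member's identity
    # under the enclave master key.
    return hmac.new(master_key, member_id.encode(), hashlib.sha256).digest()

master = b"enclave-master-key"  # in practice, a randomly generated secret
laptop_key = derive_member_key(master, "laptop-017")
phone_key = derive_member_key(master, "phone-204")

# Every device gets a distinct key, yet any party holding the master key
# can re-derive it on demand, so there is no per-device key database.
assert laptop_key != phone_key
assert derive_member_key(master, "laptop-017") == laptop_key
```

The design choice being illustrated: derivation trades storage (a key per device) for computation (re-derive when needed), which is what makes the management burden collapse to one secret.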

NetworkComputing: Isn't it also the case that the virtualized data center and faster I/O sow the seeds of their own increased overhead? Now you have thousands of mobile users and vanishing endpoints, so you've got all this speed, but you've got to devote a lot of it to checking things: authentication, inspecting packets, work that will slow you down.

Rattner: Well, you're absolutely right. Our approach -- no surprise -- is, what can we move into hardware? If you look beyond Nehalem at Westmere, which is the 32-nm, next-generation processor, it has a new group of instructions that accelerate AES and other encryption standards. We're already seeing three- to fourfold improvements in the number of SSL connections you can open per second as a result of these new instructions.

We're looking at how to bring some of these critical cryptographic functions down to the hardware level, to make sure they're not, as you were suggesting, a source of a lot of overhead, when so much computing is moving to the cloud and so much of the communication has to be done in a secure fashion.

NetworkComputing: So to sum this up, you see cryptographic keys becoming the de facto method of securing cloud connections?

Rattner: That's what I was saying. I don't think we have to reinvent cryptography. But we do have to find cryptographic architectures, if I can use that term, that are intellectually tractable -- they're manageable -- yet meet the complex protection needs of this highly distributed computing environment, where my cellphone is just as secure as some data center in the cloud.

NetworkComputing: Will this be based on what NIST is doing with public-key encryption?

Rattner: Certainly based on public-key technology, but engineered so that the IT organization doesn't have to manage every individual key, which has proven impractical.

NetworkComputing: Your roots are in high-performance computing. Talk about HPC trickle-down.

Rattner: Actually, if my name is associated with trickle-down, let's correct that for the record, because I'm a believer in trickle-up. I was one of the troublemakers back in the 1980s arguing that microprocessors were going to redefine HPC.

NetworkComputing: But they did.

Rattner: Well, at the time, people thought I had lost my marbles. Within a few years, people were talking about the attack of the killer microprocessors.

NetworkComputing: But that wasn't only a technical battle; it was an economic battle, too.

Rattner: Yeah, that's what I'm saying. I think what we've proved consistently is that you have to figure out how to bring the results and the economies of high-volume manufacturing to HPC, because HPC to date has not represented a big enough segment of the market to justify the kind of very expensive R&D the HPC community would like to see.

So while I think it's still a trickle-up story as we leverage the high-volume technology, I believe -- and this is really the heart of my keynote at Supercomputing '09 -- that HPC needs a killer application. I believe the killer application is what I talked about a couple of years ago at the Intel Developer Forum, namely the 3D Web, or the 3D Net.

Once the Internet experience becomes rooted in high-performance simulation and visualization, you're suddenly going to have a mass market for what we think of today as high-performance computing -- very floating-point-intensive kinds of problems -- and for the technology it demands.

That will finally produce the kind of R&D investment the HPC community has hoped for for decades.

NetworkComputing: Does this mean that the Web will become a graphical supercomputer and our desktop will be the front end?

Rattner: Well, all of our personal devices will represent the front end. This raises all sorts of interesting questions, such as: do I render in the cloud and just send the video out to the clients? Or do I rely on the client's capability to do some local rendering on the device?

NetworkComputing: Let's close with some insight into Atom, Intel's netbook processor.

Rattner: We see a very exciting future emerging at the bottom. Most people don't realize how transformational Atom is for Intel. What I'm talking about is Intel's reemergence and resurgence as a supplier of embedded computing solutions. Atom came to market at a time when many embedded applications finally had to become Internet-enabled. Atom is the perfect processor for that, because you have immediate access to all the bits and pieces of Internet software, and now you can bring all of that goodness to embedded applications.

Watch: you're going to see a transformation in automobiles. What we think of as in-vehicle infotainment is going to be transformed when all that technology is sitting in your dashboard. And mundane fields like signage are going to go completely digital.

NetworkComputing: Will the Mobile Internet Device, the handheld browser envisioned by Intel, succeed in the market?

Rattner: We feel there is a screen size bigger than a smartphone and smaller than what we've seen in netbooks -- more of a tablet-like device -- where you get the full Internet experience. When we did our field trials a few years ago, people didn't want to give these things up.

Follow me on Twitter. What's your take? Let me know, by leaving a comment below or e-mailing me directly at [email protected].


