Monday, February 02, 2009

Cores, cores, cores...

Andy Patrizio wrote a nice story for internetnews.com last week, titled "Gartner: Too Many Chips Spoil the Server". The article discusses the struggle for software to make use of all the parallelism that is on the cusp of being delivered by the CPU crowd in the form of massive multicore.

Patrizio writes:

With the number of chips per server and cores per chip increasing, future generations of servers may end up with way more processing power than the computer could possibly utilize, even under virtualization, Gartner has found. The research firm issued a report on the issue earlier this week.

The article goes on to explain that today's software systems may have trouble scaling in a massively multicore environment; as a result, overall utilization across all the CPUs in a system may start to drop.

Simply put, from the 1980s up through 2005 or so, programmers relied on the chip boys to deliver chips that executed a single instruction stream faster and faster. They wrote linear code, and the chips simply ran that instruction stream ever faster. These days, the chip boys are reneging on that "promise" and are having to go parallel. While Moore's Law is still in effect (transistor count keeps doubling), it is getting harder and harder to push up clock speeds. The result is that everything is going multicore in a big way, and the software folks haven't figured out how to deal with that yet. The challenge is to find and exploit the parallelism in the software and make use of all the cores. But a lot of existing code doesn't work this way, and the tools and techniques for parallelizing it easily or automatically are still primitive.

From Vyatta's point of view, this multicore explosion is a fine development. Some problems are naturally parallel, and networking is one of them. While you have to make sure the overall system is balanced enough to feed the processors (the other data paths have to be fast enough to deliver data when needed), the more processing cores, the better. There are relatively few data dependencies in networking. There are some, particularly for stateful flow tracking, but those dependencies are temporal, between packets in the same flow; each flow can typically be processed independently of the others.
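To make the flow-independence point concrete, here is a minimal sketch of the classic partitioning technique: hash each packet's 5-tuple so that every packet of a flow lands on the same worker, preserving per-flow ordering while unrelated flows spread across cores. This is purely illustrative (the packet fields, worker count, and hash choice are my assumptions, not anything from Vyatta's code):

    import hashlib
    from collections import defaultdict

    NUM_WORKERS = 4  # hypothetical: one worker per core

    def flow_key(pkt):
        # A flow is identified by its 5-tuple.
        return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])

    def worker_for(pkt):
        # Hash the 5-tuple so every packet of a flow maps to the same worker:
        # per-flow ordering is preserved, and distinct flows spread across cores.
        digest = hashlib.sha1(repr(flow_key(pkt)).encode()).digest()
        return digest[0] % NUM_WORKERS

    # Toy traffic: two packets from one flow, one from another.
    packets = [
        {"src": "10.0.0.1", "dst": "10.0.0.9", "sport": 40001, "dport": 80, "proto": 6},
        {"src": "10.0.0.2", "dst": "10.0.0.9", "sport": 40002, "dport": 80, "proto": 6},
        {"src": "10.0.0.1", "dst": "10.0.0.9", "sport": 40001, "dport": 80, "proto": 6},
    ]

    queues = defaultdict(list)
    for pkt in packets:
        queues[worker_for(pkt)].append(pkt)

    for worker, pkts in sorted(queues.items()):
        print(f"worker {worker}: {len(pkts)} packet(s)")

This is essentially what receive-side scaling (RSS) does in hardware on modern NICs. Stateful tracking then only needs synchronization within a flow, which this partitioning gives you for free.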

What this means is that you can expect Vyatta to get better and better in the world of multicore. And we won't have to re-engineer things greatly to take advantage of it. Unlike many other application domains, networking is in the "sweet spot" for where the world is headed.

Bring on the cores!
