Interconnects: 10GigE and InfiniBand

July 8th, 2009 10:29 am
Posted by Douglas Eadline

It is a two-horse race, but one horse is still in the barn

Ask anyone, "What are the two choices for HPC interconnects?" and they will tell you "InfiniBand and 10 Gigabit Ethernet (10 GigE)." For the most part they are correct, but 10 GigE is just entering the HPC market. It has not even landed in the Top500 arena, although many users are confident it will show up soon. On the other hand, InfiniBand use is climbing steadily.

The following table shows the share of systems by interconnect family on the June 2008 and June 2009 Top500 lists. Any interconnect with less than a 1% share in 2009 is grouped into the "Other" category.

Interconnect    6/2008 (%)   6/2009 (%)
Myrinet          2.4          2.0
GigE            56.6         56.4
InfiniBand      24.2         30.2
Proprietary      8.2          8.4
Other            8.6          3.0

The first thing to notice is that GigE still dominates the list with roughly a 56% share, just as it did a year ago. Also of note: number 16 on the list, from the University of Toronto, used quad-core Xeon E55xx processors and GigE! The other big changes are the continued growth of InfiniBand (from 24% to 30%) and the shrinking of the "Other" category.
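For readers who want to check the arithmetic, here is a minimal sketch (Python, not part of the original article) that computes the year-over-year change in share directly from the numbers in the table above:

    # Illustrative only: Top500 interconnect share (percent of systems),
    # copied from the table above (June 2008 vs. June 2009).
    share_2008 = {"Myrinet": 2.4, "GigE": 56.6, "InfiniBand": 24.2,
                  "Proprietary": 8.2, "Other": 8.6}
    share_2009 = {"Myrinet": 2.0, "GigE": 56.4, "InfiniBand": 30.2,
                  "Proprietary": 8.4, "Other": 3.0}

    # Print each family's share and the change in percentage points.
    for name in share_2008:
        delta = share_2009[name] - share_2008[name]
        print(f"{name:12s} {share_2008[name]:5.1f}% -> {share_2009[name]:5.1f}% "
              f"({delta:+.1f} points)")

Running it simply confirms the text: InfiniBand gains about 6 points, GigE is essentially flat, and "Other" loses most of its share.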

So why the confidence in 10 GigE? Simple: at one point GigE was as expensive as 10 GigE is today, but thanks to commodity uptake the price came down to the point where it is "free on the motherboard." Plus, many users like the "plug and play" nature of Ethernet, as it is a well-understood technology.

In closing, the Top500 is a single benchmark and not the only measure of HPC interconnects, but it does provide an interesting snapshot of what people are using. Right now it seems the biggest competitor to 10 GigE might be GigE.

Author Info


Dr. Douglas Eadline has worked with parallel computers since 1988 (anyone remember the Inmos Transputer?). After co-authoring the original Beowulf How-To, he continued to write extensively about Linux HPC clustering and parallel software issues. Much of Doug's early experience has been in software tools and application performance. He has been building and using Linux clusters since 1995. Doug holds a Ph.D. in Chemistry from Lehigh University.