May 1st, 2009 10:56 am
Posted by Steve Conway
Tags: HPC, Intel Cluster Ready, Intel Xeon processors, ISV, LS-DYNA, Nehalem, QPI, QuickPath Interconnect, Xeon
An intriguing aspect of the Intel Cluster Ready (ICR) program is the impressive stable of independent software vendors (ISVs) serving the high performance computing market that have become members. Nearly all of them are established, mainstream players rather than niche newcomers. The list reads like a who's who of ISVs serving the HPC market and covers a wide range of vertical segments and computational modeling methods.
There's LS-DYNA, used for finite element analysis in the automotive, aerospace, military, manufacturing, and biosciences segments; Accelrys' Discovery Studio and Materials Studio; and a raft of key fluid flow and structures applications, including RADIOSS, Ansys, STAR-CD, FLOW-3D, various flavors of Nastran and Abaqus, and others.
Making key ISV applications available in conjunction with the ICR pre-integrated, pre-tested reference architecture is an important advance, especially for new and less-experienced HPC users. IDC research studies conducted for the Council on Competitiveness and other parties confirmed that one of the biggest barriers to HPC adoption by desktop users was uncertainty about whether the third-party apps they were using would run on clusters. Many of the surveyed desktop users were small engineering services firms, tier 2 and 3 suppliers to large automotive, aerospace, and other manufacturing firms. The vendors of the most popular third-party applications are on the ICR member list. This means that ICR could become a powerful catalyst for new HPC adoption.
But that's not all. Many engineering applications, particularly codes used for structural and fluid-structures analysis, are low scaling and communications-intensive. They require lots of bandwidth at the core and node level. The new Nehalem-based Xeon 5500 series processors are designed to boost per-core bandwidth, which has been greatly needed to advance per-core and per-node performance on applications like these. And for applications that scale beyond a node, Intel's QuickPath Interconnect (QPI), part of the Nehalem microarchitecture, better supports interconnects that provide high system-wide (bisection) bandwidth.
In tandem, ICR and Nehalem promise to be a winning combination for meeting the "ease-of-everything" requirements of new and less-experienced HPC users (not to mention higher-end users). Here's how the symbiosis works: the new Nehalem-based Xeon 5500 processors provide the bandwidth muscle to boost performance substantially, even on low-scaling engineering codes, while ICR delivers the Xeon 5500 and its enhanced bandwidth in a pre-integrated, pre-tested form, making HPC clusters, which are notoriously tough to manage, substantially more tractable.
From an administration standpoint clusters can be wild beasts. ICR is designed to tame them in a way that might make Siegfried and Roy proud. On the processor side of the symbiosis, early benchmark results for the Xeon 5500 series have been impressive. It will be interesting to see the outcomes when Intel has had additional time to benchmark a broader set of HPC applications.