Applications on a Common Cluster Platform Architecture

September 27th, 2010 2:57 pm
Posted by Brock Taylor

Applications drive cluster purchases - or rather, the specific workloads for those applications drive cluster purchases.

Ultimately, it's about solving some problem. The size of the problem can dictate the features or characteristics a cluster needs. It's a logical tendency, then, to take an application and build a cluster from scratch to meet the needs of that specific application. The problem is that this implies a customized cluster every time.

A common cluster platform architecture allows solution designs that are flexible to support a wide spectrum of applications, while still providing the appropriate hooks to adapt to the specific needs of the application workloads.

What does that mean for the broader span of would-be cluster purchasers? It means purchasing decisions converge more toward the quality of the vendor's solution plus the configuration needs of the workloads, so the purchaser doesn't have to understand as much about how the system works under the hood.

The relationship between the cluster solutions vendor, the cluster purchaser, and the application vendor is very important. Whether faster processors, huge amounts of memory, an InfiniBand interconnect, or twice the number of compute nodes will reap benefits for a given workload is a non-trivial question. Expecting cluster purchasers to know the answers themselves, especially those just coming into HPC, is not feasible. Application vendors need to help provide the guidance here, allowing cluster solutions vendors to offer tunable configurations from which purchasers can select what fits their computing needs and budget.

To ease the purchase process, many solutions vendors and applications vendors are using the Intel Cluster Ready architecture to form partnerships. Configuration guides from the applications vendor can point purchasers to suggested solutions based on workload size, and the solutions vendor can then offer systems that meet the needs of that workload size. Together they provide fully specified solutions for the purchaser to select.

Applications built from source are a bit tougher because it is harder to form a relationship with solutions vendors, but the same goal is still achievable. The application source code can provide a build environment for the common architecture, and the source code maintainers can provide guidance on which software tools are needed or desired and how solution components affect workload performance. This in turn allows solutions vendors to match their offerings to the needs of the application.
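As a rough sketch of what that guidance might look like, a source-maintained application could ship a minimal build recipe that assumes only the tools a common cluster platform is expected to provide (here, the standard MPI compiler wrapper `mpicc`); the program name and flags below are illustrative, not taken from any specific application:

```make
# Illustrative Makefile for a hypothetical MPI application ("solver").
# Assumes only the standard tools of a common cluster platform.
CC     = mpicc        # MPI-aware compiler wrapper provided by the cluster stack
CFLAGS = -O2 -Wall    # optimization and warnings; tune per maintainer guidance

solver: solver.c
	$(CC) $(CFLAGS) -o $@ $<

clean:
	rm -f solver
```

A recipe like this, paired with the maintainers' notes on interconnect or memory sensitivity, gives a solutions vendor enough information to map the application onto a certified configuration even without a formal partnership.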

How fast the use of HPC expands will partially depend on how easy it becomes for new users to match solutions and applications. Building from scratch every time will dampen that expansion. New users will need to have guidance from both the cluster solutions vendor and the applications vendor or source code maintainer to make it easier and more attractive to jump into HPC.

What do you think?




Author Info
Brock Taylor

Brock Taylor is an Engineering Manager and Cluster Solutions Architect for volume High Performance Compute clusters in the Software and Services Group at Intel. He has been a part of the Intel® Cluster Ready program from the start, is a co-author of the specification, and launched the first reference implementations of Intel Cluster Ready certified solutions.

Brock and others at Intel are working within the HPC community to enable advances and innovations in scientific computing by lowering the barriers to clustered solutions.

Brock joined Intel in December of 2000, and in addition to HPC clustering, he previously helped launch new processors and chipsets as part of an enterprise validation BIOS team. Brock has a B.S. in Computer Engineering from Rose-Hulman Institute of Technology and an M.Sc. in High Performance Computing from Trinity College Dublin.