IT Budget Management Drives Cluster Management Innovation

May 25th, 2009 8:33 am
Posted by Gary Tyreman

Years ago I subscribed to an email list that has been filling up my personal email account on a daily basis (I honestly forget the source, or what I was thinking at the time). With nothing else to do at airports lately except wait for delayed planes (which seems to have become a regular occurrence), I started reading some of them – have BlackBerry, will read! One from last year that I happened to open trumpeted: “What Bad Economy? IT Spend Will Grow This Year.”

Lately, I have been paying close attention to the economy, watching for signals of any effect on high performance computing spending. The article was based on a 2008 Gartner report that, in principle, flew in the face of the market reality we are seeing.

So I pressed on and looked into the story behind the story…

“Gartner found spending even in times of economic uncertainty is supported by two factors: businesses are investing in improvements to internal processes aimed at reducing costs, along with their own innovations, and that globalization allows IT services providers to mitigate the risk of weakening demand by operating in more markets.”

And there it was. This reflects exactly what we are hearing from customers – it’s all about “improvements to internal processes aimed at reducing costs.”

To start, let’s frame the circumstance. First- and second-generation high performance computing infrastructures were built by IT as a service to engineering, research or science. The systems replaced manual processes and were thought of as a means to an end: more computing power. The construct was “throughput” – that is, simply a way to crunch more data in less time. The economic benefit of these generations, usually “get to market faster,” was fairly straightforward.

Over time, improvements in software modeling made it possible to simulate more business processes, which in turn fueled the need for more memory and compute power. The ability to use COTS components contributed to an explosion in usage, in the size of problems solved and in the number of computers in a cluster (also known as a ‘farm’).

After years of unprecedented growth, HPC has evolved from an engineering tool or asset to a far loftier status: it is the fastest-growing segment of the IT industry in server shipments, accounting for roughly one quarter of all CPU shipments, and it holds an invaluable place in the research, innovation and product development chain of most of the world’s leading companies in virtually every industry.

HPC underpins product innovation

Having become linked to an organization’s value chain, much like the personal computer many years before it, HPC environments are increasingly viewed differently than in the past. No longer solely the pet projects of visionary CxOs, compute clusters have become a strategic element of the product or service and a competitive advantage. High performance computing has developed from an asset into a value contributor, increasing the operating margin of product development through time or efficiency gains and helping grow revenue through product and service innovation.

That would suggest the infrastructure has become as visible as other key elements in the value chain, such as manufacturing. That visibility will invite regular lifecycle-management scrutiny as part of business process improvement projects aimed at aligning IT spending with corporate goals. Moreover, greater efficiency and value will be sought from such a strategic ‘asset’ (much as Dell continuously refines its manufacturing process to improve margins).

Perhaps I should be blunt here. High performance computing is a strategic asset of most (if not all) of the organizations that have deployed it. As such, these organizations have recognized the direct link between investing in the infrastructure and the business rewards (spend more, get more). This is somewhat counter-intuitive to business-systems IT, where the goal is to take cost out of the infrastructure. Thus, processes will need to be developed and improved, and a professional, commercial approach will ultimately need to prevail. And it is these developments that will drive requirements back into the ecosystem that seeks to sell solutions to this community.

Seeking Efficiency

So where will this efficiency come from? With the explosion in the size and sheer number of computers used in a cluster, many organizations have been forced to adapt existing processes or create new ones, including the codifying of workflows, script wrappers and run books. This has only increased the complexity of the environment. Costs can be driven out of clusters through the development of a sustainable growth model that considers size, complexity, pricing paradigm and inclusiveness. Additional reductions can be realized by employing a systems management software stack that considers the environment holistically rather than a single aspect of its use. Often, the server-to-administrator ratio can be increased significantly if routine maintenance of the system can be offloaded to automation.
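As an aside, here is a minimal, purely illustrative sketch of what codifying a run-book step into automation can look like: an automated node health check that drains unhealthy machines instead of relying on an administrator to notice them. The node names, the health test and the drain_node() placeholder are assumptions made for illustration, not a reference to any particular product or scheduler.

    # Purely illustrative: a run-book step ("check each node; take unhealthy
    # ones out of service") codified as a script. Node names, the health test
    # and drain_node() are hypothetical placeholders, not any vendor's API.
    import subprocess

    NODES = [f"node{i:03d}" for i in range(1, 11)]  # hypothetical node names

    def node_is_healthy(node: str) -> bool:
        """Treat an unreachable node as unhealthy (one ping, 2-second timeout)."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", node],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def drain_node(node: str) -> None:
        """Placeholder for the site-specific 'remove from service' step,
        e.g. disabling the node's queue in the batch scheduler."""
        print(f"draining {node} (site-specific scheduler call would go here)")

    if __name__ == "__main__":
        for node in NODES:
            if not node_is_healthy(node):
                drain_node(node)

The point is not the script itself but that, once such checks run automatically across every node, the effort no longer scales with headcount, which is precisely where the server-to-administrator ratio improves.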

Conclusion

Professional management of IT budgets focuses on and prioritizes projects that create shareholder value. HPC projects, as we have described, clearly fall into this category. However, budget management processes will always seek improvements and efficiencies. IT organizations are reacting and have clearly begun to dictate a new set of requirements to their vendors. The success of the Intel Cluster Ready program is a prime example of this.

I find it amusing, if not ironic, that organizations have come to expect innovation and more automation from the very software that manages the computing environment that enables their innovation.

An obvious recommendation would be to look for clear signs of innovation from prospective vendors. As vendors implement solutions at different sites, their products should come to reflect best-in-class ideas about how to manage a cluster. Each site will have insightful methods or ideas that should be incorporated and made available to a broader set of users. This allows everyone to benefit from the ‘network effect’ of improvements in features and functionality across the entire systems management stack.

This ‘network effect’ will drive the most efficiency into the lifecycle management of the cluster. These improvements should allow IT staff to focus on more valuable and strategic projects instead of forcing them to individually learn the same ‘tricks’ and lessons as other organizations every time they push the physical or logical boundary of a system component.

Through this process, HPC infrastructures will increase in value and become simpler, and IT staff will be able to focus on the big picture and on interesting business problems. The days of tinkering and DIY may be on the decline.

Author Info
Gary Tyreman


Gary Tyreman brings more than 20 years of executive software experience to his role as the President and CEO of Univa Corporation. Gary leads corporate development and fundraising activities and is the architect of Univa's data center optimization strategy, which couples the strategic addition of Grid Engine expertise with Univa's innovative and industry-leading integrated cloud computing management products. Gary has established Univa as a top multi-national competitor and has expanded the markets the company serves. Prior to taking the position as CEO, Gary spent three years as Univa's Senior Vice President of Products and Alliances.

At Univa UD, Gary is Vice President and General Manager of the High-Performance Computing Division. In this role he oversees all aspects of the company's HPC business, including strategic planning, engineering, marketing, sales and business development. He also directs the growth of the company's online open source community.

Prior to joining Univa UD in 2008, Gary was Vice President and Business Manager for Platform Computing's HPC division. During nearly five years there, he led the company's business planning, innovation and product management efforts while marshaling a team that developed some of the industry's most popular software.

Tyreman was among the first in the industry to recognize the emerging entry-level user in the HPC space and was responsible for developing a vision for how to simplify running applications off the shelf, a key to unlocking value among organizations new to HPC. He worked with Intel Corp. to develop his innovations, which were taken into account when Intel announced the Intel Cluster Ready program last year, making it easier to design, build, sell, program, acquire and deploy clusters built with Intel components.

Prior to his tenure at Platform Computing, Tyreman held a variety of executive positions in product management and marketing in technology growth companies, including Hummingbird, Delano and Itemus.

Gary is actively involved in the standards community and has held key positions in the X Consortium (X.org) and Open Grid Forum.