Dell Case Study: Stanford University

April 1st, 2009 3:27 pm
Posted by Marcel Van Drunen

[Excerpt] Simulations are among the most demanding computational tasks. At Stanford, models working against large data sets can take weeks to complete. Scientists at FPCE never have enough computing power—more is always better. Thus, acquiring the greatest computing power possible within their budget is key to optimizing the results from the group’s work.




Author Info

Marcel van Drunen joined the Dell EMEA HPC team in 2008 as a business development manager. Previously at Dell, he was a solution consultant specializing in high availability and storage. Before joining Dell, Marcel held a number of roles, including database programmer, analyst, and systems architect and administrator. For the six years before joining Dell in 2006, Marcel worked at Unisys as a high-availability consultant and trainer, where he also ran a lab environment for testing high-availability and clustering solutions. Marcel studied Information Sciences and Mathematics at the University of Delft in the Netherlands. He never got his degree, mainly because his extracurricular activities took too much time. One of these activities was being a politician, specializing in solving world conflicts.