This week in the Crescent City at the mouth of the Mississippi River, many of the country's foremost researchers and scientists are gathering for SC10. This year marks the conference's 23rd annual meeting, and it has evolved into a can't-miss event for much of the scientific community.
In advance of SC10, I had the good fortune of meeting with Bill Hamilton, director of technical computing for Microsoft and resident expert on Microsoft's HPC Server products. We engaged in a wide-ranging discussion covering the commercial adoption of High Performance Computing (HPC), the relationship between HPC and cloud computing, and trends in data-intensive computing. Bill shared some extremely enlightening perspectives, which I plan to share on the Syncsort blog in this post and in others in the days ahead.
HPC has been around since the advent of the computer. One of its key tenets is parallelism: applying computational resources in parallel to analysis, research, and modeling. Historically, securing HPC centers and resources has been prohibitively expensive, putting them within reach of only a very few in the scientific community. Bill told me that there are at least 70 million scientists, engineers, and business analysts who would greatly benefit from access to an HPC environment. Of those 70 million, only 1 million today have direct access (accounting for 80% of HPC utilization), and another 14 million or so pick up the remaining 20%. That leaves 55 million (a staggering figure) without access to an HPC environment, and it represents an incredible opportunity.
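To make the parallelism tenet concrete, here is a minimal sketch (my own illustration, not anything Bill described) of splitting an embarrassingly parallel workload across the cores of a single machine using only Python's standard library. The simulate function is a hypothetical stand-in for a real model run; HPC schedulers scale this same idea from a handful of cores to thousands of nodes.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(params: int) -> float:
    """Hypothetical stand-in for an expensive model run."""
    x = float(params)
    for _ in range(1_000_000):  # burn CPU to mimic real computation
        x = (x * 1.0000001) % 97.0
    return x

if __name__ == "__main__":
    # The core HPC idea: independent tasks run side by side, so
    # wall-clock time shrinks roughly in proportion to the number
    # of workers available.
    inputs = list(range(32))
    with ProcessPoolExecutor() as pool:  # defaults to one worker per core
        results = list(pool.map(simulate, inputs))
    print(f"completed {len(results)} independent runs in parallel")
```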
Microsoft is working with the HPC community to broaden the availability of these resources by bringing HPC environments to today's commodity off-the-shelf clusters, significantly reducing the cost of access and participation. In addition, Bill commented that combining cloud and HPC services will drive another doubling of accessibility through the affordable "pay as you use" cloud model. He drew a smart comparison to the initial doubling effect that occurred with the migration from massively parallel processing (MPP) mainframes to Linux Beowulf clusters more than a decade ago.
It was also interesting to hear Bill discuss how HPC has become a strategic asset. Because of the competitive advantage it offers, you may not hear about HPC from industry as often as you do from the research community. Rest assured, it is now heavily used in financial services, insurance, oil and gas, and manufacturing, to name just a few.
My personal favorite, however, was Bill's explanation of how HPC contributes to digital content creation. He pointed to Pixar's work on Toy Story (a 114,000-frame undertaking in which each frame took more than an hour to render!) as a classic example of using HPC to accelerate film completion. For every frame, the light projection angle used for reflective light intensity and shadow casting had to be computed at every pixel. At an average of more than an hour per frame for this calculation, HPC became the obvious way to parallelize the work and cut the time for this segment of the production process. Executing these computations in parallel also left time for review, editing, and iteration, improving quality. What a great example of accelerating time to market while raising product quality through iteration!
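Rendering is a textbook embarrassingly parallel problem: no frame depends on any other, so a render farm can process them simultaneously. Running the numbers from Bill's example, 114,000 frames at an hour apiece is roughly 13 years of continuous serial rendering; spread across hundreds of machines, the same job shrinks to weeks. Here is a minimal sketch of that division of labor (my own illustration; render_frame and the demo frame count are hypothetical stand-ins, not Pixar's pipeline):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number: int) -> str:
    """Hypothetical stand-in for a renderer that computes lighting
    and shadows for every pixel of one frame."""
    time.sleep(0.001)  # placeholder for an hour-long render
    return f"frame_{frame_number:06d}.png"

if __name__ == "__main__":
    # Because no frame depends on another, total wall-clock time is
    # roughly (frames / workers), not (frames) * (hours per frame).
    # The full Toy Story job would map 114,000 frames the same way.
    with ProcessPoolExecutor(max_workers=8) as farm:
        finished = list(farm.map(render_frame, range(100)))  # demo: 100 frames
    print(f"rendered {len(finished)} frames in parallel")
```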
For those at SC10 in New Orleans this week, I cannot encourage you enough to check out the innovation and advances coming from Microsoft's technical computing group. I think you will be amazed. A special thanks to Bill Hamilton for sharing his time and perspectives, and for allowing me to pass some of them along on the Syncsort blog. Stay tuned for more posts covering my conversation with him and the other interesting things he shared.