Supercomputing for Everyone

IT shops increasingly are being drawn into high performance computing, but this is not the supercomputing of the past, in which research- and science-oriented organizations deployed massively parallel hardware presided over by armies of technocrats and computer geeks. Supercomputing, with its ability to grapple quickly with the most complex problems and extremely large volumes of data, is no longer only for large organizations in scientific and technical fields.

You only have to be stymied by a Monte Carlo simulation or two before you start thinking a supercomputer might not be a bad thing, if only you could get the use of one. The latest generation of high performance computing (HPC) systems promises to put supercomputing capabilities in the hands of even midsize and non-technical organizations. And the cloud adds the ability to harness massive numbers of processors and apply them to a single task.
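To make that concrete, here is a minimal sketch, in plain Python rather than any vendor's HPC stack, of a Monte Carlo estimate of pi fanned out across worker processes. The worker count and sample sizes are illustrative assumptions, but splitting one job across many processors is exactly the pattern a cluster or cloud grid scales up.

import random
from multiprocessing import Pool

def count_hits(n_samples):
    # Count random points that land inside the unit quarter-circle.
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers = 8                   # one task per core; a grid or cloud fans this out much wider
    samples_each = 1_000_000
    with Pool(workers) as pool:
        hits = pool.map(count_hits, [samples_each] * workers)
    print("Estimated pi:", 4.0 * sum(hits) / (workers * samples_each))

On a laptop this spreads the work across a handful of cores; the same embarrassingly parallel structure is what spreads across thousands of processors on a cluster or in the cloud.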

The big consulting firms already are trying to capitalize on the trend. For example, Accenture/Avanade is partnering with Microsoft to help deliver a wide range of advanced capabilities through Microsoft’s Azure cloud. Similarly, Capgemini clearly has been brushing up on supercomputing trends for the future.

Not coincidentally, the technology for HPC-style computing is now coming within reach of conventional businesses with regular IT organizations. HPC is being delivered through compute clusters, compute grids, and increasingly via the cloud. And the compute clusters or grids can be nothing more than loosely connected Windows servers, not much different from the machines running throughout the organization.

The driver for this newfound interest in HPC is not a new mission to Mars or a sudden race to capitalize on the discovery of the Higgs boson. Behind the interest in HPC is data analytics, especially analytics of Big Data, preferably in near real time. This requires the ability to capture, sort, filter, and correlate massive volumes of data to find worthwhile business insights.
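For a sense of what capture, filter, and correlate look like in code, here is a deliberately toy, single-machine Python sketch; the record layout and figures are invented for illustration, and the point of HPC-style analytics is to run these same steps over billions of records distributed across many machines.

from collections import defaultdict

# Hypothetical point-of-sale records; in practice these would stream in at massive volume.
records = [
    {"store": "north", "sku": "A100", "amount": 19.99},
    {"store": "north", "sku": "B200", "amount": 5.49},
    {"store": "south", "sku": "A100", "amount": 24.50},
]

# Filter out low-value transactions, then correlate revenue by store.
threshold = 10.0
revenue_by_store = defaultdict(float)
for rec in records:
    if rec["amount"] >= threshold:
        revenue_by_store[rec["store"]] += rec["amount"]

# Sort the correlated totals so the biggest contributors surface first.
for store, total in sorted(revenue_by_store.items(), key=lambda kv: kv[1], reverse=True):
    print(store, round(total, 2))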

Long-time HPC players like IBM, HP, SGI, and Dell are revamping their offerings for this new take on HPC. They are being joined by a new breed of compute-intensive, analytics-driven, cloud-based HPC players including Amazon’s Cluster Compute Instances, Appistry, and Microsoft’s Project Daytona, beyond whatever Microsoft does with Azure.

Not surprisingly, IBM has taken the lead in bringing what it now calls technical computing solutions within reach by making them complete, affordable, easy to deploy, and sufficiently scalable to accommodate workload growth and business expansion. It also aims to simplify administration through intuitive management tools that free companies to focus on business goals, not high performance computing. In the process, it has ditched the HPC label as too geeky.

IBM is doing this mainly by bringing Platform Computing, a recent acquisition, to the HPC party. The resulting offerings include Platform LSF and Platform Symphony, which enable up to 100% server utilization; Platform Cluster Manager; System x iDataPlex; and System Storage DCS3700 for parallel file management storage, plus offerings for Big Data and cloud computing. Previously, iDataPlex was IBM’s main HPC offering.

With these platforms almost any organization can attack the same complex, multi-dimensional analytic problems that took way too long, or were not even feasible, with the usual corporate systems. The new generation of HPC systems can still handle compute-intensive supercomputing workloads, but they also can handle heavy analytic workloads and Big Data processing fast.

And they do it in ways that don’t require big investments in more technology or the need to recruit a cadre of hardcore compute geeks. Where once supercomputing focused primarily on delivering megaflops (millions of floating point operations per second), petaflops, or even exaflops, now companies are looking to leverage affordable technical computing tools for problems that are less complicated than, say, intergalactic navigation yet still deliver important business results.

Initially, HPC or supercomputing was considered the realm of large government research conducted by secretive agencies and esoteric think tanks. Today, HPC is poised to go mainstream. Companies in financial services, media, telecommunications, and life sciences are adopting HPC for modeling, simulations, and predictive analyses of various types. Financial services firms, for example, want real-time analytics to deliver improved risk management, faster and more accurate credit valuation assessments, multi-dimensional pricing, and actuarial analyses.

While some of the work still has a distinct scientific flavor, like next-generation genomics or 3D computer modeling, other HPC activities seem like conventional business application processing. These include financial data analysis, real-time CRM, social sentiment analysis, data mining of unstructured data, and retail merchandising analysis and planning.

The role of IT will revolve around working with business managers to identify the need and build the business case. Then IT assembles the technology from a range of off-the-shelf choices and captures and manages the data. Welcome to the world of supercomputing for everyone.
