Archive for July, 2012

Reduce Your Storage Technology Debt

The idea of application debt or technology debt is gaining currency. At Capgemini, technology debt is directly related to quality. The consulting firm defines it as “the cost of fixing application quality problems that, if left unfixed, put the business at serious risk.” To Capgemini, not every system clunker adds to the technical debt, only those that are highly likely to cause business disruption. In short, the firm does not count all problems, just the serious ones.

Accenture’s Adam Burden, executive director of the firm’s Cloud Application and Platform Service and a keynoter at Red Hat’s annual user conference in Boston in June, brought up technology debt too. You can watch the video of his presentation here.

Does the same idea apply to storage? You could define storage debt as storage technologies, designs, and processes that over time hinder the efficient delivery of storage services to the point where it impacts business performance. By this definition, a poorly architected storage infrastructure, no matter how well it solved the initial problem, may create a storage debt that eventually will have to be repaid or service levels will suffer.

Another thing about technology debt: there is no completely free lunch. Every IT decision, including storage decisions, even the good ones, eventually adds to the technology debt at some level. The goal is to identify those storage decisions that incur the least debt and avoid the ones that incur the most.

For example, getting locked into a vendor or a technology that has no future obviously will create a serious storage debt. But what about a decision to use SSD to boost IOPS as opposed to a decision to throw more spindles at the IOPS challenge? The same goes for backup-to-disk (B2D): does it create more or less storage debt than tape backup?
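
The SSD-versus-spindles trade-off lends itself to a back-of-the-envelope comparison. The sketch below uses illustrative per-device IOPS figures (assumptions, not vendor specs) to show why "throwing spindles" at a random-I/O target multiplies device count, and with it the debt of power, slots, and management overhead:

```python
# Back-of-the-envelope IOPS comparison: SSDs vs. adding spindles.
# All per-device figures are illustrative assumptions, not vendor specs.
import math

TARGET_IOPS = 50_000

HDD_IOPS = 180      # assumed random IOPS for a 15K RPM drive
SSD_IOPS = 20_000   # assumed random IOPS for an enterprise SSD

hdds_needed = math.ceil(TARGET_IOPS / HDD_IOPS)
ssds_needed = math.ceil(TARGET_IOPS / SSD_IOPS)

print(f"HDDs needed: {hdds_needed}")
print(f"SSDs needed: {ssds_needed}")
```

With these assumed figures, the spindle approach needs a couple hundred drives where a handful of SSDs would do, though which choice incurs less long-term debt still depends on cost per gigabyte, endurance, and vendor lock-in.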

Something else to consider: does storage debt result only from storage hardware, software, and firmware decisions, or should you take into account the skills and labor involved? You might gradually realize you have locked yourself into a staff with obsolete skills. Then what: fire them all, undertake widespread retraining, or quit and leave it for the next manager?

And then there is the cloud. The cloud today must be factored into every technology discussion. How does storage in the cloud impact storage debt? It certainly complicates the calculus.

It’s easy to accumulate storage debt but it’s also possible to lower your organization’s storage debt. Here are some possibilities:

  • Aim for simple storage architectures
  • Standardize on a small set of proven products from solid vendors
  • Virtualize storage
  • Maximize the use of tools with a GUI to simplify management
  • Standardize on products certified in compliance with SMI-S for broader interoperability
  • Selectively leverage cloud storage
  • Use archiving, deduplication, and thin provisioning to minimize the amount of data you store
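
Deduplication, the last item above, pays down storage debt by storing each repeated block only once. The toy sketch below shows the core idea with fixed-size chunks and content hashes; real systems use variable-size chunking and far more robust indexing:

```python
# Toy illustration of block-level deduplication: identical chunks are
# stored once, keyed by content hash, and the file becomes a "recipe"
# of hash references. A sketch only, not a production design.
import hashlib

CHUNK_SIZE = 8  # bytes; tiny, for demonstration only

def dedupe(data: bytes):
    store = {}    # hash -> chunk, each unique chunk stored once
    recipe = []   # ordered hashes needed to rebuild the original data
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)
        recipe.append(h)
    return store, recipe

data = b"ABCDEFGH" * 100 + b"12345678"   # highly redundant input
store, recipe = dedupe(data)
print(f"logical chunks: {len(recipe)}, unique chunks stored: {len(store)}")

# Rebuilding from the recipe confirms nothing was lost
assert b"".join(store[h] for h in recipe) == data
```

On redundant data like this, 101 logical chunks collapse to 2 stored chunks, which is the kind of reduction that makes dedup a debt-reducing choice for backup workloads.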

Sticking with one or two of the leading storage vendors–IBM, EMC, HP, Dell, NetApp, Symantec, Hitachi, StorageTek–is a generally safe bet, but it too can add to your storage debt.

You’re just not going to eliminate storage debt, especially in an environment where the demand for more and different storage increases every year. The best you can strive for is to minimize the storage debt and restrain its growth.



Supercomputing for Everyone

IT shops increasingly are being drawn into high performance computing, but this is not the supercomputing of the past in which research and scientific-oriented organizations deployed massively parallel hardware presided over by armies of technocrats and computer geeks.  Supercomputing, with its ability to grapple with the most complex problems and extremely large volumes of data fast, is no longer only for large organizations in scientific and technical fields.

You only have to be unable to run a Monte Carlo simulation or two before you start thinking a supercomputer might not be a bad thing, if only you could get the use of one. The latest generation of high performance computing (HPC) systems promises to put supercomputing capabilities in the hands of even midsize and non-technical organizations. And the cloud adds the ability to harness massive numbers of processors and apply them to a single task.
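
A Monte Carlo simulation is a good example of why such workloads devour compute. The minimal sketch below estimates pi by random sampling; accuracy improves only with the square root of the sample count, so useful precision demands enormous numbers of samples, and because each sample is independent the work spreads naturally across cluster or cloud nodes:

```python
# Minimal Monte Carlo example: estimate pi by sampling random points in
# the unit square and counting how many land inside the quarter circle.
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(100_000))
```

Each extra digit of accuracy multiplies the sample count a hundredfold, which is exactly the gap that cluster, grid, or cloud HPC capacity fills.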

The big consulting firms already are trying to capitalize on the trend. For example, Accenture/Avanade is partnering with Microsoft to help deliver a wide range of advanced capabilities through Microsoft’s Azure cloud. Similarly, Capgemini clearly has been brushing up on supercomputing trends for the future.

Not coincidentally, the technology to perform HPC-style computing now is coming within the reach of conventional businesses with regular IT organizations. HPC is being delivered through compute clusters, compute grids, and increasingly via the cloud. And the compute clusters or grids can be nothing more than loosely connected Windows servers, not much different from the machines running throughout the organization.

The driver for this new-found interest in HPC is not a new mission to Mars or a sudden race to capitalize on the discovery of the Higgs boson. Behind the interest in HPC is data analytics, especially analytics of Big Data, preferably in near real time.  This requires the ability to capture, sort, filter, and correlate massive volumes of data to find worthwhile business insights.
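
That capture/sort/filter/correlate pipeline can be shown in miniature. The sketch below groups raw event records by key and aggregates them, the same shape of work that Big Data analytics distributes across an HPC cluster; the records and field names are made up for illustration:

```python
# A tiny filter/correlate/sort pipeline over illustrative sales events.
# The same shape of work, at massive scale, drives HPC-style analytics.
from collections import defaultdict

events = [
    {"region": "east", "sale": 120.0},
    {"region": "west", "sale": 75.5},
    {"region": "east", "sale": 30.0},
    {"region": "west", "sale": -10.0},  # bad record, filtered out below
]

totals = defaultdict(float)
for e in events:
    if e["sale"] > 0:                      # filter out bad records
        totals[e["region"]] += e["sale"]   # correlate/aggregate by key

for region in sorted(totals):              # sort for stable output
    print(region, totals[region])
```

At Big Data scale the event list becomes billions of records partitioned across nodes, but the per-node logic stays this simple, which is why loosely coupled clusters of ordinary servers suffice.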

Long-time HPC players like IBM, HP, SGI, and Dell are revamping their offerings for this new take on HPC. They are being joined by a new breed of compute-intensive, analytics-driven, cloud-based HPC players, including Amazon’s Cluster Compute Instances, Appistry, and Microsoft’s Project Daytona, beyond whatever it does with Azure.

Not surprisingly, IBM has taken the lead in bringing what it now calls technical computing solutions within reach by making them complete, affordable, easy to deploy, and sufficiently scalable to accommodate workload growth and business expansion. It also aims to simplify administration through intuitive management tools that free companies to focus on business goals, not high performance computing. In the process, it has ditched the HPC label as too geeky.

IBM is doing this mainly by bringing Platform Computing, a recent acquisition, to the HPC party. The offerings include Platform LSF and Platform Symphony, which enable up to 100% server utilization; Platform Cluster Manager; System x iDataPlex; and System Storage DCS3700 for parallel file management storage, plus offerings for Big Data and cloud computing. Previously, iDataPlex was IBM’s main HPC offering.

With these platforms almost any organization can attack the same complex, multi-dimensional analytic problems that took way too long, or were not feasible at all, with the usual corporate systems. The new generation of HPC systems can still handle compute-intensive supercomputing workloads, but they can also handle heavy analytic workloads and Big Data processing fast.

And they do it in ways that don’t require big investments in more technology or the recruitment of a cadre of hardcore compute geeks. Where once supercomputing focused primarily on delivering megaflops (millions of floating point operations per second), petaflops, or even exaflops, now companies are looking to leverage affordable technical computing tools for problems that are less complicated than, say, intergalactic navigation yet still deliver important business results.

Initially, HPC or supercomputing was considered the realm of large-scale government research conducted by secretive agencies and esoteric think tanks. Today, HPC is poised to go mainstream. Now companies in financial services, media, telecommunications, and life sciences are adopting HPC for modeling, simulations, and predictive analyses of various types. Financial services firms, for example, want real-time analytics to deliver improved risk management, faster and more accurate credit valuation assessments, multi-dimensional pricing, and actuarial analyses.

While some of the work still has a distinct scientific flavor, like next-generation genomics or 3D computer modeling, other HPC activities seem like conventional business application processing. These include financial data analysis, real-time CRM, social sentiment analysis, data mining of unstructured data, and retail merchandising analysis and planning.

The role of IT will revolve around working with business managers to identify the need and build the business case. Then IT assembles the technology from a range of off-the-shelf choices and captures and manages the data. Welcome to the world of supercomputing for everyone.
