Archive for January, 2013

The Use Case and Business Case for SSD

PwC is bullish on mobile, projecting a 35% CAGR through 2015, driven in large part by solid state disk (SSD) technology.  The PwC researchers now see it extending beyond mobile to other types of computing.

Deloitte takes SSD enthusiasm even further: By the end of 2012, SSD will likely store data in 90% of mobile devices (smartphones, tablets, MP3 players), up from just 20% in 2006, the consulting firm predicts. More surprising, Deloitte expects up to 15% of laptops and netbooks to rely on SSDs, four times more than in 2010. Even in the data center, SSDs could rise to 10%.

The growing penetration of SSD into enterprise data centers presents a challenge: identifying the best use cases and figuring out how SSD fits into the storage strategy. Even with recent price-performance improvements, SSD remains significantly more expensive than hard disk drive (HDD) storage on a cost-per-gigabyte basis. Any adoption will therefore require building a compelling business case for SSD.

“The increasing use of flash in enterprise solutions, explosive growth of mobile client devices, and lower SSD pricing is creating a perfect storm for increased SSD shipments and revenue over our forecast,” according to IDC’s 2012 market report.  The researchers expect SSD shipments to increase at a compound annual growth rate (CAGR) of 51.5% from 2010 to 2015.
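IDC's 51.5% CAGR figure is easier to appreciate once you compound it over the forecast window. A quick back-of-envelope check:

```python
# SSD shipment growth implied by IDC's forecast: a 51.5% CAGR compounded
# over the five-year span 2010-2015 works out to roughly an 8x increase.
cagr = 0.515
years = 5
multiple = (1 + cagr) ** years
print(f"{multiple:.1f}x")  # about 8.0x shipment growth
```

In other words, the forecast implies shipments roughly octupling over the period.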

The initial enterprise use cases revolve around workloads that require extremely low latency and workloads that demand fast performance or high I/O throughput. SSD also can be useful in data centers scrambling to reduce energy consumption or cramped for rack space; a small amount of SSD can replace a large amount of HDD in a rack.

For example, Penn State turned to SSD to speed up nightly backup. Its solution: a flash array from Texas Memory Systems (since acquired by IBM) instead of increasing the number of HDD spindles. It paid off in a 6x improvement in nightly backup performance, with two 1U flash arrays replacing 200 15K disks and reducing power consumption by 90% in the process.
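The 90% power-reduction figure is plausible on a back-of-envelope basis. The wattages below are assumptions for illustration, not Penn State's actual numbers:

```python
# Back-of-envelope power comparison. Wattages are assumptions:
# ~15 W per 15K HDD under load, ~150 W per 1U flash array.
hdd_watts = 15
array_watts = 150

before = 200 * hdd_watts   # 200 spinning disks -> 3,000 W
after = 2 * array_watts    # two 1U flash arrays -> 300 W
savings = 1 - after / before
print(f"{savings:.0%}")    # 90% reduction, consistent with the reported figure
```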

Even with SSD prices dropping below $1 per gigabyte, SSD remains considerably more expensive than HDD on a cost-per-gigabyte basis. Cost per gigabyte, however, is not the only cost metric that matters to enterprise data centers.

SSD brings at least two cost advantages to the party that you can use to build the business case: dramatically lower cost per unit of I/O performance, which is particularly apparent when you look at the cost per I/O operation per second (IOPS), and considerably lower data center energy and space requirements.

The way you boost IOPS performance with HDD is to aggregate hundreds, or more likely thousands, of the fastest spinning HDDs. Even at the low HDD cost per gigabyte, that cost adds up.

Also, few data centers these days have cheap rack space to spare, so more rack space must be found, adding to the data center real estate cost. All those spinning disks also consume electricity, raising energy costs. So add to the cost of the HDDs themselves the cost of data center space and energy. On a cost/IOPS basis, the few thousand HDDs it would take to generate the IOPS of just a few SSDs are no bargain. SSD storage, by comparison, uses as much as 90% less energy than HDD; figured on a cost/IOPS basis, SSD energy consumption is negligible. This is the foundation of your business case.
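The cost/IOPS argument can be sketched with simple arithmetic. All prices and per-drive IOPS figures below are assumptions chosen for illustration, not vendor quotes:

```python
# Illustrative cost/IOPS comparison. Prices and IOPS figures are
# assumptions for the sake of the arithmetic, not vendor quotes.
def cost_per_iops(unit_cost, iops_per_unit, units):
    """Total purchase cost divided by aggregate IOPS delivered."""
    return (unit_cost * units) / (iops_per_unit * units)

# HDD farm: 2,000 15K drives at $250 each, ~180 IOPS apiece
hdd = cost_per_iops(unit_cost=250, iops_per_unit=180, units=2000)

# SSD: 8 enterprise drives at $2,500 each, ~50,000 IOPS apiece
ssd = cost_per_iops(unit_cost=2500, iops_per_unit=50000, units=8)

print(f"HDD: ${hdd:.2f}/IOPS  SSD: ${ssd:.2f}/IOPS")
# The HDD farm costs over a dollar per IOPS; the SSDs, a few cents --
# before even counting the rack space and electricity for 2,000 spindles.
```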

This is not to say that SSD has no drawbacks. SSD life expectancy can be a concern. In short, SSDs can wear out.

The SSD industry has come up with several solutions to the wear-out problem. The most widely adopted is wear leveling (sometimes called load leveling), which distributes writes across the cells to minimize the wear on any one cell. Wear leveling can effectively stretch the useful life of an SSD by years.
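A minimal sketch of the idea, assuming a controller that simply steers each write to the least-worn cell. Real SSD controllers are far more sophisticated (dynamic vs. static leveling, over-provisioning, garbage collection); this only illustrates the core principle of spreading write cycles evenly:

```python
import heapq

class WearLeveler:
    """Toy leveling allocator: always write to the least-worn cell."""

    def __init__(self, num_cells):
        # Min-heap of (erase_count, cell_id) pairs
        self.cells = [(0, i) for i in range(num_cells)]
        heapq.heapify(self.cells)

    def write(self):
        # Pick the cell with the fewest erase cycles, record the wear
        count, cell = heapq.heappop(self.cells)
        heapq.heappush(self.cells, (count + 1, cell))
        return cell  # the cell chosen for this write

lev = WearLeveler(num_cells=4)
writes = [lev.write() for _ in range(8)]
# Every cell absorbs exactly 2 of the 8 writes; no cell wears out early.
```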

It is unlikely that enterprise data centers will completely replace HDD storage with SSD anytime soon. In the meantime, polish your business case for SSD. You’ll want it sooner or later.




Achieving the Private Cloud Business Payoff Fast

Nationwide Insurance eliminated both capital and operational expenditures through a private cloud and expects to save about $15 million over three years. In addition, it expects the more compact and efficient private cloud landscape to mean lower costs in the future.

The City of Honolulu turned to a private cloud and reduced application deployment time from one week to only hours. It also reduced the licensing cost of one database by 68%. Better still, a new property tax appraisal system resulted in $1.4 million of increased tax revenue in just three months.

The private cloud market, especially among larger enterprises, is strong and is expected to show a CAGR of 21.5% through 2015, according to one market research report. Another report, from Renub Research, quotes analysts saying security is a big concern for enterprises that may be considering the public cloud. For such organizations, the private cloud represents an alternative with a tighter security model, one that enables their IT managers to control the building, deployment, and management of those privately owned, internal clouds.

Nationwide and Honolulu each built their private clouds on the IBM mainframe. Since its introduction last August, IBM has aimed the zEC12 at cloud use cases, especially private clouds. The zEC12’s massive virtualization capabilities make it possible to handle private cloud environments consisting of thousands of distributed systems running Linux on zEC12.

One zEC12, notes IBM, can encompass the capacity of an entire multi-platform data center in a single system. The newest z also enables organizations to run conventional IT workloads and private cloud applications on one system.  Furthermore, if you are looking at a zEC12 coupled with the zBX (extension cabinet) you can have a multi-platform private cloud running Linux, Windows, and AIX workloads.  On a somewhat smaller scale, you can build a multi-platform private cloud using the IBM PureSystems machines.

Organizations everywhere are adopting private clouds.  The Open Data Center Alliance reports faster private cloud adoption than originally predicted. Over half its survey respondents will be running more than 40% of their IT operations in private clouds by 2015.

Mainframes make a particularly good private cloud choice. Nationwide, the insurance company, initially planned to consolidate 3,000 distributed servers onto Linux virtual servers running on several z mainframes, creating a multi-platform private mainframe cloud optimized for its different workloads. The goal was to improve efficiency.

The key benefit: higher utilization and better economies of scale, effectively making the mainframes into a unified private cloud—a single set of resources, managed with the same tools but optimized for a variety of workloads. This eliminated both capital and operational expenditures and is expected to save about $15 million over three years. The more compact and efficient zEnterprise landscape also means lower costs in the future. Specifically, Nationwide is realizing an 80% reduction in power, cooling, and floor space despite an application workload that is growing 30% annually, with practically all of that growth handled by provisioning new virtual servers on the existing mainframe footprint.

The City and County of Honolulu needed to increase government transparency by providing useful, timely data to its citizens. The goal was to boost citizen involvement, improve delivery of services, and increase the efficiency of city operations.

Honolulu built its cloud using an IFL engine running Linux on the city’s z10 EC machine. Using Linux and IBM z/VM, the city created a customized cloud environment. This provided a scalable self-service platform on which city employees could develop open source applications, and it empowered the general public to create and deploy citizen-centric applications.

The results: reduction in application deployment time from one week to only hours and 68% lower licensing costs for one database. The resulting new property tax appraisal system increased tax revenue by $1.4 million in just three months.

You can build a similar multi-platform private cloud with IBM PureSystems. In either case the machines arrive ready for private cloud computing. Alternatively, you can piece together x86 servers and components and do it yourself, which entails a lot more work, time, and risk.



New Products Reduce Soaring Storage Costs

The latest EMC-sponsored IDC Digital Universe study projects that the digital universe will reach 40 zettabytes (ZB) by 2020, a 50-fold growth from the beginning of 2010. Do you wonder why your storage budget keeps increasing? The amount of data that requires protection—backup of some sort—is growing even faster than the digital universe itself. This clearly is not good for the organization’s storage budget.
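It is worth translating that 50-fold figure into an annual rate, because it explains why storage budgets feel relentless:

```python
# A 50-fold increase between 2010 and 2020 implies this compound
# annual growth rate for the digital universe:
growth_multiple = 50
years = 10
cagr = growth_multiple ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # roughly 47.9% per year
```

Nearly 48% growth per year, every year, for a decade is what the budget is up against.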

Worse yet, from a budget standpoint, the investment in IT hardware, software, services, telecommunications, and staff that could be considered the infrastructure of the digital universe will grow by 40% between 2012 and 2020. Investment in storage management, security, big data, and cloud computing will grow considerably faster.

Last July BottomlineIT partially addressed this issue with a piece on reducing your storage debt, here. Recent products from leading storage players promise to help you do it more easily.

Let’s start with EMC, whose most recent storage offering is the VMAX 40K Enterprise Storage System. Enterprise-class, it promises to deliver up to triple the performance and more than twice the usable capacity of any other offering in the industry; at least, that was the case seven months ago. But things change fast.

With the VMAX comes an enhanced storage tool that simplifies and streamlines storage management, enabling fewer administrators to handle more storage. EMC also brings a revamped storage tiering tool, making it easier to move data to less costly and lower performing storage when appropriate. This allows you to conserve your most costly storage for the data most urgently requiring it.

HP, which has been struggling in general through a number of self-inflicted wounds, continues to offer robust storage products. Recognizing that today’s storage challenges—vastly more data, different types of data, and more and different needs for the data—require new approaches, HP revamped its Converged Storage architecture. According to an Evaluator Group study, many organizations use only 30% of their physical disk capacity, effectively wasting the rest while forcing their admins to wrestle with multiple disparate storage products.

The newest HP storage products address this issue for midsize companies. They include the HP 3PAR StoreServ 7000, which offers large-enterprise-class storage availability and quality-of-service features at a midrange price point, and the HP StoreAll, a scalable platform for object and file data access that provides a simplified environment for big data retention and cloud storage while reducing the need for additional administrators or hardware. Finally, HP introduced the StoreAll Express Query, a special data appliance that lets organizations run search queries orders of magnitude faster than previous file system search methods, expediting informed decision-making based on the most current data.

IBM revamped its storage line too for the same reasons.  Its sleekest offering, especially for midsize companies, is the Storwize V7000 Unified, which handles block and file storage.  It also comes as a blade for IBM’s hybrid (mixed platforms) PureSystems line, the Storwize Flex V7000. Either way it includes IBM’s Real-Time Compression (RtC).

RtC alone can save considerable money by reducing the amount of storage capacity an organization needs to buy, by delaying the need to acquire more storage as the business grows, and by speeding the performance of storage-related functions. While other vendors offer compression, none can do what RtC does: compress active (production) data with no impact on application performance. This is an unmatched and valuable achievement.
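IBM's RtC is proprietary, so the sketch below uses the standard zlib library purely to illustrate the general principle of why compressing data before it lands on disk shrinks the capacity you have to buy. The sample record is invented for illustration:

```python
import zlib

# Generic compression illustration (not IBM's RtC algorithm).
# The record below is an invented example of repetitive structured data,
# the kind databases and logs often hold.
record = b'{"customer_id": 1042, "status": "active", "region": "us-east"}'
data = record * 10_000

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)
# Highly repetitive data compresses many times over, so far less
# physical capacity has to be purchased to store it.
```

The hard part, and RtC's claimed differentiator, is doing this on active production data without hurting application performance.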

On top of that, the V7000 applies built-in expertise to simplify storage management. It enables an administrator who is not skilled in storage to perform almost all storage tasks quickly, easily, and efficiently. Fewer, less-skilled administrators can handle increasingly complex storage workloads and perform sophisticated storage tasks flawlessly. This substantially reduces the large labor cost associated with storage.

NetApp also is addressing the same storage issues for midsize companies through its NetApp FAS3200 Series. With a new processor and memory architecture it promises up to 80% more performance, 100% more capacity, non-disruptive operations, and industry-leading storage efficiency.

Data keeps growing, and you can’t NOT store it. New storage products enable you to maximize storage utilization, optimize the business value from data, and minimize labor costs.

