Where Have All the Enterprise IT Hardware Vendors Gone?

Remember that song asking where all the flowers had gone? In a few years you might be asking the same of many of today’s enterprise hardware vendors.  The answer is important as you plan your data center 3-5 years out.  Where will you get your servers from and at what cost? Will you even need servers in your data center?  And what will they look like, maybe massive collections of ARM processors?

As reported in The Register (Amazon cloud threatens the entire IT ecosystem): Amazon’s cloud poses a major threat to most of the traditional IT ecosystem, a team of 25 Morgan Stanley analysts writes in a recently released report, Amazon Web Services: Making Waves in the IT Pond. The Morgan Stanley researchers cite Brocade, NetApp, QLogic, EMC and VMware as facing the greatest challenges from the growth of AWS. The threat takes the form of AWS’s exceedingly low cost per virtual machine instance.

Beyond the price threat, the vendors are scrambling to respond to the challenges of cloud, mobile, and big data/analytics. Even Intel, the leading chip maker, just introduced the 4th generation Intel® Core™ processor family to address these challenges.  The new chip promises optimized experiences personalized for end-users’ specific needs and offers double the battery life and breakthrough graphics targeted to new low cost devices such as mobile tablets and all-in-one systems.

The Wall Street Journal online covered related ground from a different perspective when it wrote: PC makers unveiled a range of unconventional devices on the eve of Asia’s biggest computer trade show as they seek to revive the flagging industry and stay relevant amid stiff competition. Driven by the cloud and the explosion of mobile devices in a variety of forms, the enterprise IT industry doesn’t seem to know what the next device should even be.

Readers once chastised this blogger for suggesting that their next PC might be a mobile phone. Then came smartphones, quickly followed by tablets. Today PC sales are dropping fast, according to IDC.

The next rev of your data center may be based on ARM processors (tiny, stingy with power, cheap, cool, and remarkably fast), essentially mobile phone chips. They could be ganged together in large quantities to deliver mainframe-like power, scalability, and reliability at a fraction of the cost.

IBM has shifted its focus and is targeting cloud computing, mobile, and big data/analytics, even directing its acquisitions toward these areas, as witnessed by yesterday’s SoftLayer acquisition. HP, Oracle, and most of the other vendors are pursuing variations of the same strategy. Oracle, for example, acquired Tekelec, a smart device signaling company.

But as the Morgan Stanley analysts noted, it really is Amazon using its cloud scale to savage the traditional enterprise IT vendor hardware strategies and it is no secret why:

  • No upfront investment
  • Pay for Only What You Use (with a caveat or two)
  • Price Transparency
  • Faster Time to Market
  • Near-infinite Scalability and Global Reach

And the more AWS grows, the more its prices drop due to the efficiency of cloud scaling.  It is not clear how the enterprise IT vendors will respond.

What will your management say when they get a whiff of AWS pricing? An extra large, high memory SQL Server database instance lists for $0.74 per hour (check the fine print). What does your Oracle database cost you per hour running on your on-premises enterprise server? That’s what the traditional enterprise IT vendors are facing.
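
To put that hourly rate in perspective, here is a rough back-of-the-envelope comparison. The $0.74 per hour is the AWS list price cited above; every on-premises figure is an illustrative assumption, not a quote, so plug in your own hardware, license, and labor numbers:

```python
# Rough cost comparison: AWS hourly instance vs. an on-premises database server.
# The AWS rate is the list price cited above; every on-premises figure below is
# an illustrative assumption -- substitute your own costs.

AWS_RATE_PER_HOUR = 0.74          # extra large, high memory SQL Server instance (list price)
HOURS_PER_MONTH = 730             # average hours in a month

aws_monthly = AWS_RATE_PER_HOUR * HOURS_PER_MONTH

# Hypothetical on-premises server amortized over 3 years (assumed numbers)
server_and_storage = 50_000       # hardware purchase
db_license_and_support = 90_000   # database licensing and support over the period
admin_labor = 60_000              # share of staff time over the period
months = 36

onprem_monthly = (server_and_storage + db_license_and_support + admin_labor) / months

print(f"AWS instance:  ${aws_monthly:,.0f} per month")
print(f"On-premises:   ${onprem_monthly:,.0f} per month (assumed costs)")
```

A real comparison would factor in utilization, reserved pricing, data transfer, and the fine print noted above; the point is simply that the sticker shock runs in one direction.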

Can Flash Replace Hard Disk for Enterprise Storage?

Earlier this month IBM announced a strategic initiative, the IBM FlashSystem, to drive Flash technology deeper into the enterprise. The IBM FlashSystem is a line of all-Flash storage appliances based on technology IBM acquired from Texas Memory Systems.

IBM’s intent over time is to replace the hard disk drive (HDD) with flash for enterprise storage. Flash can speed the response of servers and storage systems to data requests from milliseconds to microseconds, an improvement of orders of magnitude. And because it is all electronic—nothing mechanical involved—and can be delivered cost-efficiently even at petabyte scale, it can remake data center economics, especially for transaction-intensive and IOPS-intensive situations.

For example, the IBM FlashSystem 820 is the size of a pizza box but 20x faster than spinning hard drives and can store up to 24 TB of data. An entry-level FlashSystem 820 with 10 TB usable (RAID 5) carries an approximate street price of $150K, or $15 per gigabyte. At the high end, you can assemble a 1 PB FlashSystem that fits in one rack and delivers 22 million I/Os per second (IOPS). You would need 630 racks of high capacity hard disk drives, or 315 racks of performance-optimized disk, to generate an equal amount of IOPS.
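
The arithmetic behind those comparisons can be checked directly from the figures quoted above; a quick sketch:

```python
# Work through the numbers cited above (all figures come from the text itself).

street_price = 150_000            # entry FlashSystem 820, approximate street price
usable_tb = 10                    # usable capacity, RAID 5
usable_gb = usable_tb * 1_000
print(f"Flash cost: ${street_price / usable_gb:.0f} per usable GB")   # ~$15/GB

flash_iops = 22_000_000           # 1 PB FlashSystem in a single rack
capacity_hdd_racks = 630          # racks of high-capacity HDD for equal IOPS
performance_hdd_racks = 315       # racks of performance-optimized disk

print(f"Implied IOPS per capacity-HDD rack:    {flash_iops / capacity_hdd_racks:,.0f}")
print(f"Implied IOPS per performance-HDD rack: {flash_iops / performance_hdd_racks:,.0f}")
```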

For decades storage economics has been driven by the falling cost per unit of storage, and storage users have benefited from a remarkable ride down the cost curve thanks to Moore’s law. The cost per gigabyte for hard disk drives (HDD) has dropped steadily, year after year. You can now buy a slow, USB-connected 1 TB consumer-grade disk drive for under $89!

With low cost per gigabyte storage, storage managers could buy cheap gigabytes, which enabled backup to disk and long-term disk archiving. Yes, tape is even cheaper on a cost per gigabyte basis but it is slow and cumbersome and prone to failure. Today HDD rules.

Silicon-based memory, however, has been riding the Moore’s Law cost slope too. In the last decade memory has emerged as a new storage medium through memory-based storage in the form of RAM, DRAM, cache, flash, and solid state disk (SSD) technology. Initially prohibitively expensive for mass storage, flash has benefited from the magic of Moore’s law combined with other technical advances and mass-market efficiencies, making it something to think about seriously for enterprise production storage.

The IBM FlashSystem changes data center economics. One cloud provider reported deploying 5 TB in 3.5 inches of rack space, rather than 1,300 hard disks, to achieve 400K IOPS, and it did so at one-tenth the cost. Overall, Wikibon reports that an all-flash approach lowers total system costs by 30%; that’s $4.9 million for all flash compared to $7.1 million for hard disk. Specifically, it reduced software license costs by 38%, required 17% fewer servers, and lowered environmental costs by 74% and operational support costs by 35%. At the same time it boosted storage utilization by 50% while reducing maintenance and simplifying management, with corresponding labor savings. Combine flash with compression, deduplication, and thin provisioning and the economics look even better.
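
The Wikibon comparison is easy to sanity-check from the dollar figures alone; a minimal sketch:

```python
# Sanity-check the Wikibon comparison cited above.
all_flash_tco = 4_900_000   # total system cost, all-flash configuration
all_hdd_tco   = 7_100_000   # total system cost, hard disk configuration

savings = 1 - all_flash_tco / all_hdd_tco
print(f"All-flash total cost is {savings:.0%} lower than hard disk")   # roughly 30%
```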

For data center managers, this runs counter to everything they learned about the cost of storage. Traditional storage economics starts with the cost of hard disk storage being substantially less than the cost of SSD or flash on a $/GB basis. Organizations could justify SSD only by using it in small amounts to tap its sizeable cost/IOPS advantage for IOPS-intensive workloads.

Any HDD price/performance advantage is coming to an end. As reported in PC World, Steve Mills, IBM Senior Vice President, noted: Right now, generic hard drives cost about $2 per gigabyte. An enterprise hard drive will cost about $4 per gigabyte, and a high-performance hard drive will run about $6 per gigabyte. If an organization stripes its data across more disks for better performance, the cost goes up to about $10 per gigabyte. In some cases, where performance is critical, hard-drive costs can skyrocket to $30 or $50 per gigabyte.

From a full systems perspective (TCO for storage) Flash looks increasingly competitive. Said Ambuj Goyal, General Manager, Systems Storage, IBM Systems & Technology Group: “The economics and performance of Flash are at a point where the technology can have a revolutionary impact on enterprises, especially for transaction-intensive applications.” But this actually goes beyond just transactions. Also look at big data analytics workloads, technical computing, and any other IOPS-intensive work.

Almost every major enterprise storage vendor—EMC, NetApp, HP, Dell, Oracle/Sun—is adding SSD to their storage offerings. It is time to start rethinking your view of storage economics when flash can replace HDD and deliver better performance, utilization, and reliability even while reducing server software licensing costs and energy bills.

New Products Reduce Soaring Storage Costs

The latest EMC-sponsored IDC Digital Universe study projects that the digital universe will reach 40 zettabytes (ZB) by 2020, a 50-fold growth from the beginning of 2010! Do you wonder why your storage budget keeps increasing? And the amount of data that requires protection—backup of some sort—is growing faster than the digital universe itself. This clearly is not good for the organization’s storage budget.
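
That 50-fold projection implies a striking compound growth rate; a quick back-of-the-envelope check, assuming the growth is spread evenly over the ten years from 2010 to 2020:

```python
# Implied compound annual growth rate of a 50-fold increase over ten years.
growth_factor = 50
years = 10
cagr = growth_factor ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.0%} per year")   # roughly 48%
```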

Worse yet, from a budget standpoint, the investment in IT hardware, software, services, telecommunications, and staff that could be considered the infrastructure of the digital universe will grow by 40% between 2012 and 2020. Investment in storage management, security, big data, and cloud computing will grow considerably faster.

Last July BottomlineIT partially addressed this issue with a piece on reducing your storage debt, here. Recent products from leading storage players promise to help you do it more easily.

Let’s start with EMC, whose most recent storage offering is the VMAX 40K Enterprise Storage System. Enterprise-class, it promises to deliver up to triple the performance and more than twice the usable capacity of any other offering in the industry, at least that was the case seven months ago. But things change fast.

With the VMAX comes an enhanced storage tool that simplifies and streamlines storage management, enabling fewer administrators to handle more storage. EMC also brings a revamped storage tiering tool, making it easier to move data to less costly and lower performing storage when appropriate. This allows you to conserve your most costly storage for the data most urgently requiring it.

HP, which has been struggling in general through a number of self-inflicted wounds, continues to offer robust storage products. Recognizing that today’s storage challenges—vastly more data, different types of data, and more and different needs for the data—require new approaches, HP revamped its Converged Storage architecture. According to an Evaluator Group study, many organizations use only 30% of their physical disk capacity, effectively wasting the rest while forcing their admins to wrestle with multiple disparate storage products.

The newest HP storage products address this issue for midsize companies. They include the HP 3PAR StoreServ 7000, which offers large enterprise-class storage availability and quality-of-service features at a midrange price point, and HP StoreAll, a scalable platform for object and file data access that provides a simplified environment for big data retention and cloud storage while reducing the need for additional administrators or hardware. Finally, HP introduced the HP StoreAll Express Query, a special data appliance that allows organizations to conduct search queries orders of magnitude faster than previous file system search methods. This expedites informed decision-making based on the most current data.

IBM revamped its storage line too, for the same reasons. Its sleekest offering, especially for midsize companies, is the Storwize V7000 Unified, which handles both block and file storage. It also comes as a blade for IBM’s hybrid (mixed platform) PureSystems line, the Storwize Flex V7000. Either way it includes IBM’s Real-Time Compression (RtC).

RtC alone can save considerable money by reducing the amount of storage capacity an organization needs to buy, by delaying the need to acquire more storage as the business grows, and by speeding the performance of storage-related functions. While other vendors offer compression, none can do what RtC does: it compresses active (production) data with no impact on application performance. This is an unmatched and valuable achievement.

On top of that, the V7000 applies built-in expertise to simplify storage management. It enables an administrator who is not skilled in storage to perform almost all storage tasks quickly, easily, and efficiently. Fewer, less specialized administrators can handle increasingly complex storage workloads and perform sophisticated storage tasks flawlessly. This substantially reduces the large labor cost associated with storage.

NetApp also is addressing the same storage issues for midsize companies through its NetApp FAS3200 Series. With a new processor and memory architecture it promises up to 80% more performance, 100% more capacity, non-disruptive operations, and industry-leading storage efficiency.

Data keeps growing, and you can’t NOT store it. New storage products enable you to maximize storage utilization, optimize the business value from data, and minimize labor costs.

Speed Time to Big Data with Appliances

Hadoop will be coming to enterprise data centers soon as the big data bandwagon picks up steam. Speed of deployment is crucial: how fast can you deploy Hadoop and deliver business value?

Big data refers to running analytics against large volumes of unstructured data of all sorts to get closer to the customer, combat fraud, mine new opportunities, and more. Published reports have companies spending $4.3 billion on big data technologies by the end of 2012. But big data begets more big data, triggering even more spending, estimated by Gartner to hit $34 billion for 2013 and over a 5-year period to reach as much as $232 billion.

Most enterprises deploy Hadoop on large farms of commodity Intel servers. But that doesn’t have to be the case. Any server capable of running Java and Linux can handle Hadoop. The mainframe, for instance, should make an ideal Hadoop host because of the sheer scalability of the machine. Same with IBM’s Power line or the big servers from Oracle/Sun and HP, including HP’s new top of the line Itanium server.

At its core, Hadoop is a Linux-based Java program and is usually deployed on x86-based systems. The Hadoop community has effectively disguised Hadoop to speed adoption by mainstream IT through tools like Sqoop, a tool for importing data from relational databases into Hadoop, and Hive, which enables you to query the data using a SQL-like language called HiveQL. Pig is a high-level platform for creating the MapReduce programs used with Hadoop. So any competent data center IT group could embark on Hadoop big data initiatives.
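
Under those friendlier layers, the work still boils down to MapReduce jobs. Here is a minimal sketch of the model: a word-count mapper and reducer written as plain Python of the kind that can be run through Hadoop Streaming. The file name and the local pipeline shown in the comment are illustrative assumptions, not part of any particular distribution’s documentation:

```python
#!/usr/bin/env python
# Minimal MapReduce word count, testable locally as a Unix pipe or runnable via
# Hadoop Streaming. Illustrative sketch only; file names and job options are assumptions.
#
#   cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
#
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    # Input arrives sorted by key, so identical words are adjacent.
    current, count = None, 0
    for line in sys.stdin:
        if not line.strip():
            continue
        word, value = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

Tools like Hive and Pig exist precisely so that most users never write this by hand; a HiveQL query compiles down to jobs of this shape behind the scenes.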

Big data analytics, however, doesn’t even require Hadoop.  Alternatives like Hortonworks Data Platform (HDP), MapR, IBM GPFS-SNC (Shared Nothing Cluster), Lustre, HPCC Systems, Backtype Storm (acquired by Twitter), and three from Microsoft (Azure Table, Project Daytona, LINQ) all promise big data analytics capabilities.

Appliances are shaping up as an increasingly popular way to get big data deployed fast. Appliances trade flexibility for speed and ease of deployment. By packaging pre-configured, integrated hardware and software, they are ready to run right out of the box. The appliance typically comes with built-in analytics software that effectively masks big data complexity.

For enterprise data centers, the three primary big data appliance players are:

  • IBM—PureData, the newest member of its PureSystems family of expert systems. PureData is delivered as an appliance that promises to let organizations quickly analyze petabytes of data and then intelligently apply those insights in addressing business issues across their organization. The machines come in three workload-specific models optimized for transactional, operational, or big data analytics workloads.
  • Oracle—the Oracle Big Data Appliance is an engineered system optimized for acquiring, organizing, and loading unstructured data into Oracle Database 11g. It combines optimized hardware components with new software to deliver a big data solution. It incorporates Cloudera’s Apache Hadoop with Cloudera Manager. A set of connectors also are available to help with the integration of data.
  • EMC—the Greenplum modular data computing appliance includes Greenplum Database for structured data, Greenplum HD for unstructured data, and DIA Modules for Greenplum partner applications such as business intelligence (BI) and extract, transform, and load (ETL) applications configured into one appliance cluster via a high-speed, high-performance, low-latency interconnect.

 And there are more. HP offers HP AppSystem for Apache Hadoop, an enterprise-ready appliance that simplifies and speeds deployment while optimizing performance and analysis of extreme scale-out Hadoop workloads. NetApp offers an enterprise-class Hadoop appliance that may be the best bargain given NetApp’s inclusive storage pricing approach.

As much as enterprise data centers loathe deploying appliances, if you are under pressure to get on the big data bandwagon fast and start showing business value almost immediately, appliances will be your best bet. And there are plenty to choose from.

Reduce Your Storage Technology Debt

The idea of application debt or technology debt is gaining currency. At Capgemini, technology debt is directly related to quality. The consulting firm defines it as “the cost of fixing application quality problems that, if left unfixed, put the business at serious risk.” To Capgemini not every system clunker adds to the technical debt, only those that are highly likely to cause business disruption. In short, the firm does not include all problems, just the serious ones.

Accenture’s Adam Burden, executive director of the firm’s Cloud Application and Platform Service and a keynoter at Red Hat’s annual user conference in Boston in June, brought up technology debt too. You can watch the video of his presentation here.

Does the same idea apply to storage? You could define storage debt as storage technologies, designs, and processes that over time hinder the efficient delivery of storage services to the point where it impacts business performance. Using this definition, a poorly architected storage infrastructure, no matter how well it solved the initial problem, may create a storage debt that eventually will have to be repaid or service levels will suffer.

Another thing about technology debt: there is no completely free lunch. Every IT decision, including storage decisions, even the good ones, eventually adds to the technology debt at some level. The goal is to identify those storage decisions that incur the least debt and avoid the ones that incur the most.

For example, getting locked into a vendor or a technology that has no future obviously will create a serious storage debt. But what about a decision to use SSD to boost IOPS as opposed to a decision to throw more spindles at the IOPS challenge? The same goes for backup to disk (B2D): does it create more or less storage debt than tape backup?

Something else to consider: does storage debt result only from storage hardware, software, and firmware decisions, or should you take into account the skills and labor involved? You might gradually realize you have locked yourself into a staff with obsolete skills. Then what: fire them all, undertake widespread retraining, quit and leave it for the next manager?

And then there is the cloud. The cloud today must be factored into every technology discussion. How does storage in the cloud impact storage debt? It certainly complicates the calculus.

It’s easy to accumulate storage debt but it’s also possible to lower your organization’s storage debt. Here are some possibilities:

  • Aim for simple storage architectures
  • Standardize on a small set of proven products from solid vendors
  • Virtualize storage
  • Maximize the use of tools with a GUI to simplify management
  • Standardize on products certified in compliance with SMI-S for broader interoperability
  • Selectively leverage cloud storage
  • Use archiving, deduplication, and thin provisioning to minimize the amount of data you store (see the sketch below)
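
To illustrate the last item, here is a minimal sketch of how deduplication cuts stored capacity, assuming fixed-size 4 KB chunks and a simple hash index. Real products use far more sophisticated, content-defined chunking, so treat the numbers as purely illustrative:

```python
import hashlib
import os

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Estimate the dedup ratio of a byte stream using fixed-size chunks.

    Illustrative only: production deduplication uses variable-size,
    content-defined chunking and persistent indexes.
    """
    seen = set()
    total = 0
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        seen.add(hashlib.sha256(chunk).hexdigest())
        total += 1
    return total / len(seen) if seen else 1.0

# Ten identical copies of the same 4 MiB "backup image" dedupe roughly 10:1.
block = os.urandom(4 * 1024 * 1024)
sample = block * 10
print(f"Estimated dedup ratio: {dedup_ratio(sample):.1f}:1")
```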

Sticking with one or two of the leading storage vendors–IBM, EMC, HP, Dell, NetApp, Symantec, Hitachi, StorageTek–is a generally safe bet, but it too can add to your storage debt.

You’re just not going to eliminate storage debt, especially not in an environment where the demand for more and different storage increases every year. The best you can strive for is to minimize the storage debt and restrain its growth.

HP and Dell Lead Latest Rush to Cloud Services

HP last week made its first public cloud services available as a public beta. This advances the company’s Converged Cloud portfolio as HP delivers an open source-based public cloud infrastructure designed to enable developers, independent software vendors (ISVs), and enterprises of all sizes to build the next generation of web applications.

These services, HP Cloud Compute, Storage, and HP Cloud Content Delivery Network, will now be offered through a pay-as-you-go model. Designed with OpenStack technology, the open source-based architecture avoids vendor lock-in, improves developer productivity, features a full stack of easy-to-use tools for faster time to code, provides access to a rich partner ecosystem, and is backed by personalized customer support.

Last week Dell also joined the cloud services rush with an SAP cloud services offering. Although Dell has been in the services business at least since its acquisition of Perot Systems a few years back, services for SAP and the cloud indeed are new, explained Burk Buechler, Dell’s service portfolio director.

Dell offers two cloud SAP services. The first is the Dell Cloud Starter Kit for SAP Solutions, which helps organizations get started on their cloud journey quickly by providing 60-day access to Dell’s secure cloud environment with compute power equivalent to 8,000 SAP Application Performance Standard (SAPS) units, coupled with Dell’s consulting, application, and infrastructure services in support of SAP solutions.

The second is the Dell Cloud Development Kit for SAP Solutions, which provides access to 32,000 SAPS of virtual computing capacity to deploy development environments for more advanced customers who need a rich development landscape for running SAP applications. This provides a comprehensive developer environment with additional capabilities for application modernization and features industry-leading Dell Boomi technology for rapid cross-application integration.

Of the latest two initiatives, HP’s is the larger. Nearly 40 companies have announced their support for HP Cloud Services, from Platform-as-a-Service (PaaS) partners to storage, management and database providers. The rich partner ecosystem provides customers with rapid access to an expansive suite of integrated cloud solutions that offer new ways to become more agile and efficient. The partner network also provides a set of tools, best practices and support to help maximize productivity on the cloud. This ecosystem of partners is a step along the path to an HP Cloud Services Marketplace, where customers will be able to access HP Cloud Services and partner solutions through a single account.

Of course, there are many other players in this market. IBM staked out cloud services early with a variety of IBM SmartCloud offerings. Other major players include Oracle, Rackspace, Amazon’s Elastic Compute Cloud (EC2), EMC, Red Hat, Cisco, NetApp, and Microsoft. It is probably safe to say that eventually every major IT vendor will offer cloud services capabilities. And those that don’t will have partnerships and alliances with those who do.

Going forward, every organization will include a cloud component as some part of its IT environment. For some, it will represent a major component; for others, cloud usage will vary as business and IT needs change. There will be no shortage of options, something to fit every need.

Low-Cost Fast Path to Private Cloud

The private cloud market—built around a set of virtualized IT resources behind the organization’s firewall—is growing rapidly. Private cloud vendors have been citing the latest Forrester prediction that the private cloud market will grow to more than $15 billion in 2020. Looking at a closer horizon, IDC estimates the private cloud market will grow to $5.8 billion by 2015.

The appeal of the private cloud comes from its residing on premises and its ability to leverage existing IT resources wherever possible. Most importantly, the private cloud addresses the concerns of business executives about cloud security and control.

The promise of private clouds is straightforward: more flexibility and agility from their systems, lower total costs, higher utilization of the hardware, and better utilization of the IT staff. In short, organizations want all the benefits of public cloud computing along with the security of keeping it private behind the enterprise firewall.

Private clouds can do this by delivering IT as a service and freeing up IT manpower through self-service automation. The private cloud sounds simple. Private clouds don’t, however, come that easily; they require sophisticated virtualization and automation. “Up-front costs are real, and choosing the right vendor to manage or deploy an environment is equally important,” says senior IDC analyst Katie Broderick.

IBM, however, may change the private cloud financial equation with its newest SmartCloud Entry offering based on IBM System x (x86 servers) and VMware.  The starting price is surprisingly low, under $60,000.

The IBM SmartCloud Entry starts with a flexible, modular design that can be installed quickly. It also handles integrated management; automated provisioning through a service request catalog, approvals, metering, and billing; and does it all through a consolidated management console, a single pane of glass. The result: the delivery of standardized IT services on the fly and at lower cost through automation. A business person, according to IBM, can self-provision services through SmartCloud Entry in four mouse clicks, something even a VP can handle.

The prerequisite for any private cloud is virtualized systems. Start by consolidating and virtualizing servers, storage, and networking to reduce operating and capital expenses and streamline systems management. Virtualization is essential to achieve the flexibility and efficiency organizations want from their private cloud, and it is the first step toward IBM’s SmartCloud Entry or any other private cloud.

From there you improve speed and business agility through SmartCloud Entry capabilities like automated service deployment, portal-based self-service provisioning, and simplified administration. In short, you create master images of the desired software, convert the images for use with inexpensive tools like the open source KVM hypervisor, and track the images to ensure compliance and minimize security risks. You can then gain efficiency by reducing both the number of images and the storage required for them. Finally, you deploy the software images on request through end-user self-service, combined with virtual machine isolation capabilities and project-level user access controls for security.
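
As one concrete illustration of the image-preparation step, a master image built in another hypervisor’s format can be converted for KVM with the standard qemu-img utility. The sketch below simply wraps that call in Python; the file names are placeholders, and SmartCloud Entry may well handle this step for you:

```python
import subprocess

def convert_master_image(src_vmdk: str, dst_qcow2: str) -> None:
    """Convert a VMware-format master image to qcow2 for use with KVM.

    Illustrative sketch: file names are placeholders and error handling is
    minimal. qemu-img ships with the standard QEMU/KVM packages.
    """
    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2", src_vmdk, dst_qcow2],
        check=True,
    )

if __name__ == "__main__":
    convert_master_image("golden-master.vmdk", "golden-master.qcow2")
```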

By doing this—deploying and maintaining the application images, delegating and automating the provisioning, standardizing deployment, and simplifying administration—the organization can cut the time to deliver IT capabilities through a private cloud from months to 2-3 days, and in some cases to just hours. This is what enables business agility—the ability to respond to changes fast—with reduced costs through a more efficient operation.

At $60K, the IBM x86 SmartCloud Entry offering is a good place to start, although IBM has private cloud offerings for Linux and Power Systems as well. But all major IT vendors are targeting private clouds, though few can deliver as much of the stack as IBM. Microsoft offers a number of private cloud solutions here. HP provides a private cloud solution for Oracle, here, while Oracle has an advanced cluster file system for private cloud storage here. NetApp, primarily a storage vendor, has partnered with others to deliver a variety of NetApp private cloud solutions for VMware, Hyper-V, SAP, and more.
