Posts Tagged HP

Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers compared to the public cloud, and at a somewhat higher number of VMs compared to x86 machines. View the IBM z TCO presentation here.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in terms of software, labor, and power. Overall, the TCO examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud System on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO at $1 million, compared to $1.6 million for x86 systems and $3.9 million for the public cloud.
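For readers who want to see where those percentages come from, here is a minimal Python sketch using only the 3-year TCO totals quoted above; the arithmetic is simply relative savings, nothing more:

```python
# Minimal sketch: reproduce the savings percentages implied by the stated
# 3-year TCO figures (398-workload and 48-workload cases from the IBM analysis).

def savings(z_cost, other_cost):
    """Percent by which the z option undercuts the alternative."""
    return (1 - z_cost / other_cost) * 100

# 3-year TCO in millions of dollars, as quoted above
tco_398 = {"public cloud": 37.0, "x86 cloud": 18.3, "Cloud System on z": 9.4}
tco_48 = {"public cloud": 3.9, "x86 cloud": 1.6, "Cloud System on z": 1.0}

for label, tco in (("398 workloads", tco_398), ("48 workloads", tco_48)):
    z = tco["Cloud System on z"]
    for platform in ("x86 cloud", "public cloud"):
        print(f"{label}: z vs {platform}: {savings(z, tco[platform]):.0f}% lower")

# For the 398-workload case this works out to roughly 49% lower than the x86
# cloud and 75% lower than the public cloud, matching the range quoted above.
```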

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices or a different mix of high-mid-low I/O workloads your results will be different but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there, hanging around the bloggers' lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog


Fueled by SMAC Tech M&A Activity to Heat Up

Corporate professional services firm BDO USA polled approximately 100 executives of U.S. tech outfits for its 2014 Technology Outlook Survey and found them firm in the belief that mergers and acquisitions in tech would either stay at the same rate as last year (40%) or increase over last year (43%). And this isn't a recent phenomenon.

M&A has been widely adopted across a range of technology segments not only as the vehicle to drive growth but, more importantly, as the way to remain at the leading edge in a rapidly changing business and technology environment spurred by cloud and mobile computing. And fueling this M&A wave is SMAC (Social, Mobile, Analytics, Cloud).

SMAC appears to be triggering a scramble among large, established blue chip companies like IBM, EMC, HP, Oracle, and more to acquire almost any promising upstart out there. Their fear: becoming irrelevant, especially among the young, most highly sought demographics.  SMAC has become the code word (code acronym, anyway) for the future.

EMC, for example, has evolved from a leading storage infrastructure player to a broad-based technology giant driven by 70 acquisitions over the past 10 years. Since this past August IBM has been involved in a variety of acquisitions amounting to billions of dollars. These acquisitions touch on everything from mobile networks for big data analytics and mobile device management to cloud services integration.

Google, however, probably should be considered the poster child for technology M&A. According to published reports, Google has been acquiring, on average, more than one company per week since 2010. The giant search engine and services company's biggest acquisition to date has been the purchase of Motorola Mobility, a mobile device (hardware) manufacturing company, for $12.5 billion. The company also purchased the Israeli startup Waze in June 2013 for almost $1 billion. Waze, a GPS-based navigation application for mobile phones, has brought Google a strong position in the mobile navigation business, even besting Apple's own navigation app on the iPhone.

Top management has embraced SMAC-driven M&A as the fastest, easiest, and cheapest way to achieve strategic advantage through new capabilities and the talent that developed those capabilities. Sure, the companies could recruit and build those capabilities on their own, but it could take years to bring a given feature to market that way, and by then, in today's fast-moving competitive markets, the company would be doomed to forever playing catch-up.

Even with the billion-dollar and multi-billion-dollar price tags some of these upstarts are commanding, strategic acquisitions like Waze, IBM's SoftLayer, or EMC's XtremIO have the potential to be game changers. That's the hope, of course. But it can be risky, although risk can be managed.

And the best way to manage SMAC merger risk is to have a flexible IT platform that can quickly absorb those acquisitions and integrate and share their information, along with, of course, a coherent strategy for leveraging the new acquisitions. What you need to avoid is ending up with a bunch of SMAC piece parts that don't fit together.


Where Have All the Enterprise IT Hardware Vendors Gone?

Remember that song asking where all the flowers had gone? In a few years you might be asking the same of many of today’s enterprise hardware vendors.  The answer is important as you plan your data center 3-5 years out.  Where will you get your servers from and at what cost? Will you even need servers in your data center?  And what will they look like, maybe massive collections of ARM processors?

As reported in The Register (Amazon cloud threatens the entire IT ecosystem): Amazon's cloud poses a major threat to most of the traditional IT ecosystem, a team of 25 Morgan Stanley analysts writes in a recently released report, Amazon Web Services: Making Waves in the IT Pond. The Morgan Stanley researchers cite Brocade, NetApp, QLogic, EMC, and VMware as facing the greatest challenges from the growth of AWS. The threat takes the form of AWS's exceedingly low cost per virtual machine instance.

Beyond the price threat, the vendors are scrambling to respond to the challenges of cloud, mobile, and big data/analytics. Even Intel, the leading chip maker, just introduced the 4th generation Intel® Core™ processor family to address these challenges.  The new chip promises optimized experiences personalized for end-users’ specific needs and offers double the battery life and breakthrough graphics targeted to new low cost devices such as mobile tablets and all-in-one systems.

The Wall Street Journal online covered related ground from a different perspective when it wrote: PC makers unveiled a range of unconventional devices on the eve of Asia’s biggest computer trade show as they seek to revive (the) flagging industry and stay relevant amid stiff competition. Driven by the cloud and the explosion of mobile devices in a variety of forms the enterprise IT industry doesn’t seem to know what the next device should even be.

Readers once chastised this blogger for suggesting that their next PC might be a mobile phone. Then came smartphones, quickly followed by tablets. Today PC sales are dropping fast, according to IDC.

The next rev of your data center may be based on ARM processors (tiny, stingy with power, cheap, cool, and remarkably fast), essentially mobile phone chips. They could be ganged together in large quantities to deliver mainframe-like power, scalability, and reliability at a fraction of the cost.

IBM has shifted its focus and is targeting cloud computing, mobile, and big data/analytics, even directing its acquisitions toward these areas, as witnessed by yesterday's SoftLayer acquisition. HP, Oracle, and most of the other vendors are pursuing variations of the same strategy. Oracle, for example, acquired Tekelec, a smart device signaling company.

But as the Morgan Stanley analysts noted, it really is Amazon using its cloud scale to savage the traditional enterprise IT vendor hardware strategies and it is no secret why:

  • No upfront investment
  • Pay for only what you use (with a caveat or two)
  • Price transparency
  • Faster time to market
  • Near-infinite scalability and global reach

And the more AWS grows, the more its prices drop due to the efficiency of cloud scaling.  It is not clear how the enterprise IT vendors will respond.

What will your management say when they get a whiff of AWS pricing? An extra large, high-memory SQL Server database instance lists for $0.74 per hour (check the fine print). What does your Oracle database cost you per hour running on your on-premise enterprise server? That's what the traditional enterprise IT vendors are facing.
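As a rough illustration of the comparison management will make, here is a small sketch that puts the quoted AWS list price next to a hypothetical on-premise database server. The on-premise figures are placeholder assumptions, not numbers from the Morgan Stanley report; plug in your own:

```python
# Rough sketch of the cost-per-hour comparison management is likely to make.
# The AWS rate is the list price quoted above; every on-premise figure is a
# hypothetical placeholder to be replaced with your own costs.

HOURS_PER_YEAR = 24 * 365

aws_rate = 0.74  # $/hour, extra large high-memory SQL Server instance (list price)

# Hypothetical on-premise database server, amortized over 3 years
server_cost = 25_000   # hardware purchase (assumed)
db_license = 47_500    # database license plus 3 years of support (assumed)
annual_ops = 6_000     # power, space, admin share per year (assumed)
years = 3

on_prem_rate = (server_cost + db_license + annual_ops * years) / (years * HOURS_PER_YEAR)

print(f"AWS instance:      ${aws_rate:.2f}/hour")
print(f"On-premise (est.): ${on_prem_rate:.2f}/hour")

# The point is not the exact numbers but that the cloud figure sits on a public
# price sheet while the on-premise figure must be assembled from many buckets.
```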


Can Flash Replace Hard Disk for Enterprise Storage?

Earlier this month IBM announced a strategic initiative, the IBM FlashSystem, to drive Flash technology deeper into the enterprise. The IBM FlashSystem is a line of all-Flash storage appliances based on technology IBM acquired from Texas Memory Systems.

IBM’s intent over time is to replace hard disk drive (HDD) for enterprise storage with flash. Flash can speed the response of servers and storage systems to data requests from milliseconds to microseconds – an order of magnitude improvement. And because it is all electronic—nothing mechanical involved—and being delivered cost-efficiently at even petabyte scale, it can remake data center economics, especially for transaction-intensive and IOPS-intensive situations.

For example, the IBM FlashSystem 820 is the size of a pizza box but 20x faster than spinning hard drives and can store up to 24 TB of data. An entry-level 820 (10 TB usable, RAID 5) carries an approximate street price of $150K, or roughly $15 per gigabyte. At the high end, you can assemble a 1 PB FlashSystem that fits in one rack and delivers 22 million I/Os per second (IOPS). You would need 630 racks of high capacity hard disk drives or 315 racks of performance optimized disk to generate an equal amount of IOPS.
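For the curious, the rack math implied by those figures works out as follows; a back-of-the-envelope sketch using only the numbers quoted above:

```python
# Back-of-the-envelope check on the rack comparison: if one rack of FlashSystem
# delivers 22 million IOPS and 630 racks of high-capacity HDD (or 315 racks of
# performance-optimized disk) are needed to match it, the implied per-rack IOPS
# of the disk options falls out directly.

flash_iops_per_rack = 22_000_000
racks_needed = {"FlashSystem": 1, "performance-optimized HDD": 315, "high-capacity HDD": 630}

for media, rack_count in racks_needed.items():
    per_rack = flash_iops_per_rack / rack_count
    print(f"{media}: ~{per_rack:,.0f} IOPS per rack ({rack_count} racks to reach 22M IOPS)")

# High-capacity disk works out to roughly 35K IOPS per rack and performance
# disk to roughly 70K -- the gap that drives the floor-space comparison.
```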

For decades storage economics has been driven by the falling cost per unit of storage, and storage users have benefited from a remarkable ride down the cost curve thanks to Moore's Law. The cost per gigabyte for hard disk drives (HDD) has dropped steadily, year after year. You now can buy a slow, USB-connected 1TB consumer-grade disk drive for under $89!

With low cost per gigabyte storage, storage managers could buy cheap gigabytes, which enabled backup to disk and long-term disk archiving. Yes, tape is even cheaper on a cost per gigabyte basis but it is slow and cumbersome and prone to failure. Today HDD rules.

Silicon-based memory, however, has been riding the Moore's Law cost slope too. In the last decade memory has emerged as a new storage medium in the form of RAM, DRAM, cache, flash, and solid state disk (SSD) technology. Although initially prohibitively expensive for mass storage, the magic of Moore's Law combined with other technical advances and mass-market efficiencies has made flash something to think about seriously for enterprise production storage.

The IBM FlashSystem changes data center economics. One cloud provider reported deploying 5TB in 3.5 inches of rack space instead of 1,300 hard disks to achieve 400K IOPS, and it did so at one-tenth the cost. Overall, Wikibon reports an all-flash approach will lower total system costs by 30%; that's $4.9 million for all flash compared to $7.1 million for hard disk. Specifically, it reduced software license costs 38%, required 17% fewer servers, and lowered environmental costs by 74% and operational support costs by 35%. At the same time it boosted storage utilization by 50% while reducing maintenance and simplifying management, with corresponding labor savings. Combine flash with compression, deduplication, and thin provisioning and the economics look even better.

For data center managers, this runs counter to everything they learned about the cost of storage. Traditional storage economics starts with the cost of hard disk storage being substantially less than the cost of SSD or Flash on a $/GB basis. Organizations could justify SSD only by using it in small amounts to tap its sizeable cost/IOPS advantage for IOPS-intensive workloads.

Any HDD price/performance advantage is coming to an end. As reported in PC World, Steve Mills, IBM Senior Vice President, noted: Right now, generic hard drives cost about $2 per gigabyte. An enterprise hard drive will cost about $4 per gigabyte, and a high-performance hard drive will run about $6 per gigabyte. If an organization stripes its data across more disks for better performance, the cost goes up to about $10 per gigabyte. In some cases, where performance is critical, hard-drive costs can skyrocket to $30 or $50 per gigabyte.
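Here is a quick sketch applying those per-gigabyte tiers to a hypothetical 50 TB requirement, with the roughly $15 per gigabyte FlashSystem street price mentioned earlier included for comparison. The capacity figure is an assumption chosen purely for illustration:

```python
# Mills' per-gigabyte tiers applied to a hypothetical capacity requirement,
# alongside the ~$15/GB FlashSystem street price cited earlier in this post.

GB_PER_TB = 1000          # decimal terabytes, as storage vendors quote them
capacity_tb = 50          # hypothetical requirement (assumed)

cost_per_gb = {
    "generic HDD": 2,
    "enterprise HDD": 4,
    "high-performance HDD": 6,
    "striped for performance": 10,
    "performance-critical HDD config": 30,   # low end of the $30-$50 range
    "FlashSystem (street price)": 15,
}

for tier, dollars in cost_per_gb.items():
    total = dollars * capacity_tb * GB_PER_TB
    print(f"{tier}: ${total:,.0f} for {capacity_tb} TB")

# Once a disk configuration is tuned for IOPS, its per-gigabyte cost overlaps
# the flash figure -- the crossover this post describes.
```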

From a full systems perspective (TCO for storage) Flash looks increasingly competitive. Said Ambuj Goyal, General Manager, Systems Storage, IBM Systems & Technology Group: “The economics and performance of Flash are at a point where the technology can have a revolutionary impact on enterprises, especially for transaction-intensive applications.” But this actually goes beyond just transactions. Also look at big data analytics workloads, technical computing, and any other IOPS-intensive work.

Almost every major enterprise storage vendor—EMC, NetApp, HP, Dell, Oracle/Sun—is adding SSD to its storage offerings. It is time to start rethinking your view of storage economics when flash can replace HDD and deliver better performance, utilization, and reliability even while reducing server software licensing costs and energy bills.


Mainframe Workload Economics

IBM never claims that every workload is suitable for the zEnterprise. The company prefers to talk about platform issues in terms of fit-for-purpose or tuned-to-the-task. With the advent of hybrid computing, the low-cost z114, and now the expected low-cost version of the zEC12 later this year, however, you could make a case that any workload benefiting from the reliability, security, and efficiency of the zEnterprise mainframe is fair game.

John Shedletsky, VP, IBM Competitive Project Office, did not try to make that case. To the contrary, earlier this week he presented the business case for five workloads that are optimal economically and technically on the zEnterprise. They are: transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform. None of these should be a surprise; with the possible exception of analytics and consolidation-on-one-platform, they represent traditional mainframe workloads. BottomlineIT covered Shedletsky's mainframe cost/workload analysis last year here.

This comes at a time when IBM has started making a lot of noise about new and different workloads on the zEnterprise. Doug Balog, head of IBM System z mainframe group, for example, was quoted widely in the press earlier this month talking about bringing mobile computing workloads to the z. Says Balog in Midsize Insider: “I see there’s a trend in the market we haven’t directly connected to z yet, and that’s this mobile-social platform.”

Actually, this isn’t even all that new either. BottomlineIT’s sister blog, DancingDinosaur, was writing about organizations using SOA to connect CICS apps running on the z to users with mobile devices a few years ago here.

What Shedletsky really demonstrated this week was the cost-efficiency of the zEC12. In one example he compared a single workload, app production/dev/test, running on a 16x 32-way HP Superdome and an 8x 48-way Superdome against a 41-way zEC12. The zEC12 delivered the best price/performance by far: $111 million (5-year TCA) for the zEC12 vs. $176 million (5-year TCA) for the two Superdomes.

In another comparison, three Oracle database workloads (Oracle Enterprise Edition, Oracle RAC, 4 server nodes per cluster) supporting 18K transactions/sec running on 12 HP DL580 servers (192 cores) priced out at $13.2 million (3-year TCA). The same workloads running on a zEC12 as 3 Oracle RAC clusters (4 nodes per cluster, each a Linux on z guest) with 27 IFLs priced out at $5.7 million (3-year TCA). The zEC12 came in at less than half the cost.

With analytics such a hot topic these days, Shedletsky also presented a comparison of the zEnterprise Analytics System 9700 (zEC12, DB2 v10, z/OS, 1 general processor, 1 zIIP) plus an IDAA against a current Teradata machine. The result: the Teradata cost $330K per query per hour compared to $10K per query per hour on the z. Workload time for the Teradata was 1,591 seconds, or 9.05 queries per hour, compared to 60.98 seconds and 236 queries per hour on the zEC12. The Teradata total cost was $2.9 million compared to $2.3 million for the zEC12.
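A quick sketch shows how those cost-per-query-throughput figures fall out of the totals and query rates quoted above:

```python
# Sketch of the derivation: total solution cost divided by sustained queries
# per hour, using the figures quoted for the Teradata machine and the
# zEnterprise Analytics System 9700.

systems = {
    # name: (total cost in dollars, queries per hour)
    "Teradata": (2_900_000, 9.05),
    "zEnterprise Analytics System 9700": (2_300_000, 236),
}

for name, (cost, qph) in systems.items():
    print(f"{name}: ${cost / qph:,.0f} per query-per-hour of capacity")

# Teradata works out to roughly $320K and the zEC12 to roughly $10K per
# query-per-hour, in line with the $330K vs. $10K comparison above.
```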

None of these are what you would consider new workloads, and Shedletsky has yet to apply his cost analysis to mobile or social business workloads. However, the results shouldn’t be much different. Mobile applications, particularly mobile banking and other mobile transaction-oriented applications, will play right into the zEC12 strengths, especially when they are accessing CICS on the back end.

While transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform remain the sweet spot for the zEC12, Balog can continue to make his case for mobile and social business on the z. Maybe in the next set of Shedletsky comparative analyses we’ll see some of those workloads come up.

For social business the use cases aren’t quite clear yet. One use case that is emerging, however, is social business big data analytics. Now you can apply the zEC12 to the analytics processing part at least and the efficiencies should be similar.


New Products Reduce Soaring Storage Costs

The latest EMC-sponsored IDC Digital Universe study projects that the digital universe will reach 40 zettabytes (ZB) by 2020, a 50-fold growth from the beginning of 2010! Do you wonder why your storage budget keeps increasing? And the amount of data that requires protection—backup of some sort—is growing faster than the digital universe itself. This clearly is not good for the organization's storage budget.
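The arithmetic behind that projection is straightforward; a quick sketch using only the figures quoted above:

```python
# The arithmetic behind the IDC projection: 40 ZB in 2020 from a 50-fold
# increase implies roughly 0.8 ZB at the beginning of 2010 and an annual
# growth rate in the high-forties percent.

zb_2020 = 40.0
growth_factor = 50.0
years = 10  # beginning of 2010 through 2020

zb_2010 = zb_2020 / growth_factor
cagr = growth_factor ** (1 / years) - 1
print(f"Implied 2010 size: {zb_2010:.1f} ZB, implied annual growth: {cagr:.0%}")  # ~48%
```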

Worse yet, from a budget standpoint, the investment in IT hardware, software, services, telecommunications, and staff that could be considered the infrastructure of the digital universe will grow by 40% between 2012 and 2020. Investment in storage management, security, big data, and cloud computing will grow considerably faster.

Last July BottomlineIT partially addressed this issue with a piece on reducing your storage debt, here. Recent products from leading storage players promise to help you do it more easily.

Let's start with EMC, whose most recent storage offering is the VMAX 40K Enterprise Storage System. Enterprise-class, it promises to deliver up to triple the performance and more than twice the usable capacity of any other offering in the industry, at least that was the case seven months ago. But things change fast.

With the VMAX comes an enhanced storage tool that simplifies and streamlines storage management, enabling fewer administrators to handle more storage. EMC also brings a revamped storage tiering tool, making it easier to move data to less costly and lower performing storage when appropriate. This allows you to conserve your most costly storage for the data most urgently requiring it.

HP, which has been struggling in general through a number of self-inflicted wounds, continues to offer robust storage products. Recognizing that today's storage challenges—vastly more data, different types of data, and more and different needs for the data—require new approaches, HP revamped its Converged Storage architecture. According to an Evaluator Group study, many organizations use only 30% of their physical disk capacity, effectively wasting the rest while forcing their admins to wrestle with multiple disparate storage products.

The newest HP storage products address this issue for midsize companies. They include the HP 3PAR StoreServ 7000, which offers large enterprise-class storage availability and quality-of-service features at a midrange price point, and HP StoreAll, a scalable platform for object and file data access that provides a simplified environment for big data retention and cloud storage while reducing the need for additional administrators or hardware. Finally, HP introduced the StoreAll Express Query, a special data appliance that allows organizations to run search queries orders of magnitude faster than previous file system search methods. This expedites informed decision-making based on the most current data.

IBM revamped its storage line too for the same reasons.  Its sleekest offering, especially for midsize companies, is the Storwize V7000 Unified, which handles block and file storage.  It also comes as a blade for IBM’s hybrid (mixed platforms) PureSystems line, the Storwize Flex V7000. Either way it includes IBM’s Real-Time Compression (RtC).

RtC alone can save considerable money by reducing the amount of storage capacity an organization needs to buy, by delaying the need to acquire more storage as the business grows, and by speeding the performance of storage-related functions. While other vendors offer compression, none can do what RtC does: it compresses active (production) data with no impact on application performance. This is an unmatched and valuable achievement.
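As a purely illustrative sketch of how compression translates into deferred capacity purchases, the following uses a hypothetical 2:1 ratio and an assumed cost per terabyte; neither number is an IBM claim, so substitute figures from your own environment:

```python
# Illustrative only: how a compression ratio translates into deferred capacity
# spend. The 2:1 ratio and the cost per terabyte are hypothetical placeholders,
# not IBM figures; actual results depend on the data being compressed.

raw_data_tb = 100          # data the applications need to store (assumed)
compression_ratio = 2.0    # hypothetical ratio
cost_per_tb = 2_000        # assumed acquisition cost per usable TB

physical_tb_needed = raw_data_tb / compression_ratio
deferred_spend = (raw_data_tb - physical_tb_needed) * cost_per_tb

print(f"Physical capacity to buy: {physical_tb_needed:.0f} TB "
      f"(instead of {raw_data_tb} TB), deferring ${deferred_spend:,.0f} in spend")
```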

On top of that the V7000 applies built-in expertise to simplify storage management. It enables an administrator who is not skilled in storage to perform almost all storage tasks quickly, easily, and efficiently. Fewer lesser-skilled administrators can handle increasingly complex storage workloads and perform sophisticated storage tasks flawlessly.  This substantially reduces the large labor cost associated with storage.

NetApp also is addressing the same storage issues for midsize companies through its NetApp FAS3200 Series. With a new processor and memory architecture it promises up to 80% more performance, 100% more capacity, non-disruptive operations, and industry-leading storage efficiency.

Data keeps growing, and you can’t NOT store it. New storage products enable you to maximize storage utilization, optimize the business value from data, and minimize labor costs.


PaaS Gains Cloud Momentum

Guess you could say Gartner is bullish on Platform-as-a-Service (PaaS). The research firm declares: PaaS is a fast-growing core layer of the cloud computing architecture, but the market for PaaS offerings is changing rapidly.

The other layers include Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS) but before the industry build-out of cloud computing is finished (if ever), expect to see many more X-as-a-Service offerings. Already you can find Backup-as-a-Service (BaaS). Symantec, for instance, offers BaaS to service providers, who will turn around and offer it to their clients.

But the big cloud action is around PaaS. Late in November Red Hat introduced OpenShift Enterprise, an enterprise-ready PaaS product designed to be run as a private, public or hybrid cloud. OpenShift, an open source product, enables organizations to streamline and standardize developer workflows, effectively speeding the delivery of new software to the business.

Previously cloud strategies focused on SaaS, in which organizations access and run software from the cloud. Salesforce.com is probably the most familiar SaaS provider. There also has been strong interest in IaaS, through which organizations augment or even replace their in-house server and storage infrastructure with compute and storage resources from a cloud provider. Here Amazon Web Services is the best known player although it faces considerable competition that is driving down IaaS resource costs to pennies per instance.

PaaS, essentially, is an app dev/deployment and middleware play. It provides a platform (hence the name) to be used by developers in building and deploying applications to the cloud. OpenShift Enterprise does exactly that by giving developers access to a cloud-based application platform on which they can build applications to run in a cloud environment. It automates much of the provisioning and systems management of the application platform stack in a way that frees the IT team to focus on building and deploying new application functionality and not on platform housekeeping and support services. Instead, the PaaS tool takes care of it.

OpenShift Enterprise, for instance, delivers a scalable and fully configured application development, testing and hosting environment. In addition, it uses Security-Enhanced Linux (SELinux) for reliable security and multi-tenancy. It also is built on the full Red Hat open source technology stack including Red Hat Enterprise Linux, JBoss Enterprise Application Platform, and OpenShift Origin, the initial free open source PaaS offering. JBoss Enterprise Application Platform 6, a middleware tool, gives OpenShift Enterprise a Java EE 6-certified on-premise PaaS capability.  As a multi-language PaaS product, OpenShift Enterprise supports Java, Ruby, Python, PHP, and Perl. It also includes what it calls a cartridge capability to enable organizations to include their own middleware service plug-ins as Red Hat cartridges.

Conventional physical app dev is a cumbersome process entailing as many as 20 steps from idea to deployment. Make it a virtual process and you can cut the number of steps down to 14, a small improvement. As Red Hat sees it, the combination of virtualization and PaaS can cut that number of steps to six: idea, budget, code, test, launch, and scale. PaaS, in effect, shifts app dev from a craft undertaking to an automated, cloud-ready assembly line. As such, it enables faster time to market and saves money.

Although Red Hat is well along in the PaaS market and the leader in open source PaaS, other vendors already are jumping in and more will be joining them. IBM has SmartCloud Application Services as its PaaS offering. Oracle offers a PaaS product as part of the Oracle Cloud Platform. EMC offers PaaS consulting and education but not a specific technology product. When HP identifies PaaS solutions it directs you to its partners. A recent list of the top 20 PaaS vendors identifies mainly smaller players, with CA, Google, Microsoft, and Salesforce.com being the exceptions.

A recent study by IDC projects the public cloud services market to hit $98 billion by 2016. The PaaS segment, the fastest growing part, will reach about $10 billion, up from barely $1 billion in 2009. There is a lot of action in the PaaS segment, but if you are looking for the winners, according to IDC, focus on PaaS vendors that provide a comprehensive, consistent, and cost effective platform across all cloud segments (public, private, hybrid). Red Hat OpenShift clearly is one; IBM SmartCloud Application Services and Microsoft Azure certainly will make the cut. Expect others.
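For context, the growth rate implied by those IDC numbers is easy to work out; a quick sketch using just the figures quoted above:

```python
# Implied growth rate behind the IDC projection cited above: PaaS revenue
# growing from roughly $1 billion in 2009 to roughly $10 billion in 2016.

start_year, end_year = 2009, 2016
start_rev, end_rev = 1.0, 10.0  # billions of dollars, figures quoted above

cagr = (end_rev / start_rev) ** (1 / (end_year - start_year)) - 1
print(f"Implied compound annual growth rate: {cagr:.0%}")  # roughly 39%
```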

