Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers compared to the public cloud, and at a somewhat higher number of VMs compared to x86 machines. View the IBM z TCO presentation here.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse workloads (a range of low, medium, and high I/O). In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware, but it more than made up for that in software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for instances. A labor cost was included for managing instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor cost far less. In terms of 3-year TCO, the public cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO at $1 million, compared to $1.6 million for x86 systems and $3.9 million for the public cloud.
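As a sanity check, the relative savings implied by those 3-year TCO figures are easy to recompute. The sketch below is illustrative only; the dollar amounts are the 398-workload numbers from the IBM analysis, but the code itself is this blogger's, not IBM's:

```python
# Illustrative only: recompute the savings implied by IBM's reported
# 3-year TCO figures for the 398-workload scenario.

def tco_savings(baseline: float, alternative: float) -> float:
    """Percent saved by choosing `alternative` over `baseline`."""
    return (baseline - alternative) / baseline * 100

# 3-year TCO in millions of dollars (from the IBM analysis)
tco = {"public cloud": 37.0, "x86": 18.3, "cloud on z": 9.4}

for platform, cost in tco.items():
    if platform != "cloud on z":
        saving = tco_savings(cost, tco["cloud on z"])
        print(f"z vs. {platform}: {saving:.0f}% lower 3-year TCO")
```

Run the arithmetic and you get roughly 75% savings versus the public cloud and 49% versus x86, which brackets the 49-75% range IBM reported.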

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices, or run a different mix of high-, mid-, and low-I/O workloads, your results will differ, but the rankings probably won't change much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there, hanging around the bloggers' lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog

Where Best to Run Linux

Mainframe data centers have many platform options for running Linux. The challenge is deciding where: x86 servers and blades, IBM Power Systems, HP Itanium, Oracle/Sun, IBM’s System z or zEnterprise.

Here is what IBM has to say about the various options: System z/zEnterprise; Power/System p/System i; and System x blades and rack mount servers. And now with the zBX there is yet another option: Linux blades on the zBX. But this doesn't answer the real question: where should your organization run Linux?

If you have only one platform the answer is simple. Linux has been widely ported. You can probably run it on whatever you already have.

Most organizations today, especially enterprise data centers, have multiple platforms running Windows, Linux, UNIX, AIX, Solaris, and more. And they run these on different hardware platforms from IBM, HP, Oracle/Sun, Dell, and others. Now the decision of where to run Linux gets complicated. The classic consultant/analyst response: it depends.

Again, IBM’s response is to lead the organization through a Fit for Purpose exercise. Here is how IBM discusses the exercise in regard to cloud computing. BottomlineIT’s sister blog addressed Fit for Purpose here last year.

The Fit for Purpose exercise, however, can be reduced to four basic choices:

  1. Where does the data that your Linux applications will use most reside—in general you will get the best end-to-end performance the closer the data is to the applications. So, if your Linux applications need to use DB2 data residing on the mainframe, you probably want to run Linux on the System z or a zBX blade.
  2. Since cost is always an issue, look at the price/performance numbers—in this case you have to look at all the costs, paying particular attention to cost in terms of performance delivered. Running Linux on a cheap, underpowered x86 box may cost less but not deliver the performance you want.
  3. Available skills—here you need to look at where the Linux and platform skills available to you fall and opt for the platform where you have the most skills availability. Of course, a relatively modest investment in training can pay big dividends in this area.
  4. IT culture—even if the data proximity or price/performance considerations fall one way, many organizations opt for the platform favored by the dominant IT culture to avoid internal resistance.

Further complicating the decision is the lack of good data available on both the cost and the performance of Linux on the zBX or its Linux blades. Plus there are other variables to consider, such as whether you run Linux on an IFL on the z with or without z/VM. Similarly, you can run Linux on an x-platform with or without VMware or some other hypervisor. These choices will impact price, performance, and skills.

Although the answer to the question of where to run Linux may not be as simple as many would like, DancingDinosaur believes as a general principle it is always better to have choices.

Open Source KVM Takes on the Hypervisor Leaders

The hypervisor—software that allocates and manages virtualized system resources—usually is the first thing that comes to mind when virtualization comes up. And when IT considers server virtualization the first option typically is VMware ESX, followed by Microsoft’s Hyper-V.

But that shouldn’t be the whole story. Even in the Windows/Intel world there are other hypervisors, such as Citrix Xen.  And IBM has had hypervisor technology for its mainframes for decades and for its Power systems since the late 1990s. A mainframe (System z) running IBM’s System z hypervisor, z/VM, can handle over 1000 virtual machines while delivering top performance and reliability.

So, it was significant when IBM announced in early May that it and Red Hat, an open source technology leader, are working together to bring products built around the Kernel-based Virtual Machine (KVM) open source hypervisor to the enterprise. Jean Staten Healy, IBM's Director of Worldwide Cross-IBM Linux, told IT industry analysts that the two companies together are committed to driving adoption of the open source virtualization technology through joint development projects and enablement of the KVM ecosystem.

Differentiating this approach from those taken by the current x86 virtualization leaders VMware and Microsoft is open source technology. An open source approach to virtualization, Healy noted, lowers costs, enables greater interoperability, and increases options through multiple sources.

The KVM open source hypervisor allows a business to create multiple virtual versions of Linux and Windows environments on the same server. Larger enterprises can take KVM-based products and combine them with comprehensive management capabilities to create highly scalable and reliable, fully cloud-capable systems that enable the consolidation and sharing of massive numbers of virtualized applications and servers.

Red Hat Enterprise Virtualization, for example, was designed for large scale datacenter virtualization by pairing its centralized virtualization management system and advanced features with the KVM hypervisor. BottomlineIT looked at the Red Hat open source approach a few weeks ago, here.

The open source approach to virtualization also is starting to gain traction. To that end Red Hat, IBM, BMC, HP, Intel, and others joined to form the Open Virtualization Alliance. Its goal is to facilitate the adoption of open virtualization technologies, especially KVM. It intends to do this by promoting examples of customer successes, encouraging interoperability, and accelerating the expansion of the ecosystem of third-party solutions around KVM. A growing and robust ecosystem around KVM is essential if the open source hypervisor is to effectively rival VMware and Microsoft.

Healy introduced what might be considered the Alliance’s first KVM enterprise-scale success story, IBM’s own Research Compute Cloud (RC2), which is the first large-scale cloud deployed within IBM. In addition to being a proving ground for KVM, RC2 also handles actual IBM internal chargeback based on charges-per-VM hour across IBM. That’s real business work.
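Chargeback per VM-hour is simple to model. The sketch below is hypothetical—the rate and usage figures are invented, since IBM has not published RC2's actual chargeback model—but it shows the basic metering arithmetic such a system performs:

```python
# Hypothetical sketch of per-VM-hour chargeback, as the article
# describes for RC2. The $0.12/VM-hour rate is invented; IBM's
# internal rates are not public.

from decimal import Decimal

RATE_PER_VM_HOUR = Decimal("0.12")  # assumed flat rate in dollars

def monthly_charge(vm_hours: int) -> Decimal:
    """Charge one internal account for its VM-hours this month."""
    return (RATE_PER_VM_HOUR * vm_hours).quantize(Decimal("0.01"))

# e.g. one department runs 10 instances around the clock for 30 days
hours = 10 * 24 * 30  # 7200 VM-hours
print(monthly_charge(hours))  # → 864.00
```

Using `Decimal` rather than floats is the standard choice for billing arithmetic, since it avoids the rounding surprises binary floating point introduces with cents.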

RC2 runs over 200 iDataplex nodes, an IBM x86 product, using KVM (90% memory utilization/node). It runs 2000 concurrent instances, is used by thousands of IBM employees worldwide, and provides 100TB of block storage attached to KVM instances via a storage cloud.

KVM was chosen not only to demonstrate the open source hypervisor but because it was particularly well suited to the enterprise challenge. It provides a predictable and familiar environment that requires no additional skills, auditable security compliance, and an open source licensing model that keeps costs down and should prove cost-effective for large-scale cloud use, which won't be long in coming. The RC2 team, it seems, already is preparing live migration plans to support federated clouds. Stay tuned.
