Posts Tagged TCO

Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers when compared to the public cloud, and at a somewhat higher VM count when compared to x86 machines. View the IBM z TCO presentation here.
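
Why would such a crossover exist at all? The economics come down to a high fixed cost plus a low marginal cost per VM on the z, against a roughly linear per-instance cost in the cloud. Here is a minimal sketch of that logic; every figure in it is a made-up placeholder, not a number from the IBM study:

```python
# Toy breakeven model. All figures are illustrative placeholders, not
# numbers from the IBM study; the shape of the curves, not the values,
# is the point: high fixed cost amortizes as VM count grows.
Z_FIXED = 1_500_000    # hypothetical z base hardware/software cost
Z_PER_VM = 1_000       # hypothetical marginal cost per additional z VM
CLOUD_PER_VM = 8_500   # hypothetical 3-year cost per cloud instance

def breakeven_vms(fixed, per_vm, cloud_per_vm):
    """Smallest VM count at which z TCO drops below cloud TCO."""
    vms = 1
    while fixed + vms * per_vm >= vms * cloud_per_vm:
        vms += 1
    return vms

print(breakeven_vms(Z_FIXED, Z_PER_VM, CLOUD_PER_VM))  # -> 201, i.e., ~200 VMs
```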

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse workloads (a range of low, medium, and high I/O). In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.
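
In outline, such a comparison is bookkeeping: sum the same cost categories for each platform and compare the totals. A sketch of that structure (the category figures below are placeholders, not the study's actual inputs):

```python
# Multi-category TCO comparison skeleton. Figures are placeholders,
# not the actual inputs from the IBM analysis.
costs_in_millions = {
    "z Enterprise Cloud System": {"hardware": 5.0, "software": 2.5,
                                  "labor": 1.2, "power_space": 0.7},
    "x86 cloud":                 {"hardware": 3.0, "software": 8.0,
                                  "labor": 5.5, "power_space": 1.8},
}

for platform, buckets in costs_in_millions.items():
    print(f"{platform}: ${sum(buckets.values()):.1f}M total")
```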

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO at $1 million, compared to $1.6 million for x86 systems and $3.9 million for the public cloud.
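
Those dollar figures line up with the quoted 49-75% range; the arithmetic is straightforward:

```python
# Deriving the quoted savings range from the 398-workload TCO figures.
z, x86, cloud = 9.4, 18.3, 37.0   # 3-year TCO, $ millions

print(f"vs. x86:   {(x86 - z) / x86:.0%} lower")      # -> 49% lower
print(f"vs. cloud: {(cloud - z) / cloud:.0%} lower")  # -> 75% lower
```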

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices, or assume a different mix of high-, mid-, and low-I/O workloads, your results will be different, but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there, hanging around the bloggers’ lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog


Lessons from IBM Eagle TCO Analyses

A company running an obsolete z890 mainframe with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 UNIX servers. The production workload consisted of applications, database, testing, development, security, and more. Five years later, the company was running the same workload in the 36-server, multi-core (41x more cores than the z890) distributed environment, except that its 4-year TCO had gone from $4.9 million to $17.9 million, based on an IBM Eagle study. The lesson, the Eagle team notes: cores drive platform costs in distributed systems.
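
Why do cores drive distributed platform costs? Largely because much distributed software is licensed per core, so multiplying cores multiplies the software bill even when the hardware itself is cheap. A toy illustration (the license price is hypothetical):

```python
# Toy per-core licensing illustration; the license price is hypothetical.
# Software licensed per core makes core count, not server count, the
# dominant cost driver.
PER_CORE_LICENSE = 2_500      # hypothetical annual per-core software cost

z_equivalent_cores = 1        # the z890's ~0.88 processors, rounded up
distributed_cores = 41        # "41x more cores than the z890"

print(f"z:           ${z_equivalent_cores * PER_CORE_LICENSE:,}/year")
print(f"distributed: ${distributed_cores * PER_CORE_LICENSE:,}/year")
```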

Then there is the case of a 3500 MIPS mainframe shop that budgeted $10 million for a 1-year migration to a distributed environment. Eighteen months into the project, now six months behind schedule, the company had spent $25 million and managed to offload only 350 MIPS. In addition, it had to increase staff to cover the overrun, implement steps to replace mainframe automation, acquire additional distributed capacity beyond the initial prediction (even though only 10% of total MIPS had been offloaded), and extend the period of running the old and new systems in parallel, at still more cost, due to the schedule overrun. Not surprisingly, the executive sponsor is gone.

After three years of doing such analyses, the IBM Eagle team has concluded that if the goal of a migration to the distributed environment is cost savings, most migrations are a failure. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and reports completing over 300 user studies since then. Often its studies are used to determine the best platform among IBM’s various choices for a given set of workloads, usually as part of a Fit for Purpose engagement. In other cases, the Eagle analysis is aimed at enabling a System z shop to avoid a migration to a distributed platform. The Eagle team, in fact, is platform agnostic until it completes its quantitative analysis, when the resulting numbers generally make the decision clear.

Along the way, the Eagle team has learned a few lessons.  For example:  re-hosting projects tend to be larger than anticipated. The typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; new systems generally are more cost-efficient. For example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR’s MIPS by 5% (the monthly software cost savings paid for the upgrade almost immediately)
  • Schedule workloads to take advantage of sub-capacity software pricing for platforms that offer it, which may produce free workloads
  • Consolidate workloads on Linux, which invariably saves money, especially when consolidating many Linux virtual servers on a mainframe IFL. (A recent LinkedIn debate focused on how many virtual instances can run on an IFL, with some suggesting a max of 20. The official IBM figure: you can consolidate up to 60 distributed cores or more on a single System z core, and a single System z core equals an IFL; see the sizing sketch after this list.)
  • Changing the database can reduce capacity requirements, resulting in lower hardware and software costs
  • Consider the IBM mainframe Solution Edition program, the best mainframe deal going, which enables you to acquire a new mainframe for workloads you’ve never run on a mainframe at a deeply discounted package price including hardware, software, middleware, and three years of maintenance.
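
As a back-of-the-envelope sizing based on that official 60-to-1 consolidation figure (the distributed core count below is a hypothetical example):

```python
import math

# IFL sizing from IBM's up-to-60-distributed-cores-per-IFL figure.
# The distributed core count is a hypothetical example.
CORES_PER_IFL = 60

def ifls_needed(distributed_cores):
    return math.ceil(distributed_cores / CORES_PER_IFL)

print(ifls_needed(240))  # 240 distributed cores -> 4 IFLs
```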

BottomlineIT generally is skeptical of TCO analyses from vendors. To be useful, an analysis needs to include full context, technical details (components, release levels, and prices), and specific quantified benchmark results. In addition, there are soft costs that must be considered. Eagle analyses generally do that.

In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.


Cost per Workload—the New Platform Metric

How best to understand what a computer costs? Total cost of acquisition (TCA) is the price you pay to have it land on the loading dock and get it up and running doing real work. That’s the lowest price, but it is not reflective of what a computer actually costs. Total cost of ownership (TCO) takes the cost of acquisition and adds in the cost of maintenance, support, integration, infrastructure, power/cooling, and more for three to five years. Needless to say, TCO is higher but more realistic.
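
Framed generically (this is the standard framing, not any vendor’s specific model), TCO is TCA plus the recurring costs accumulated over the ownership period:

```python
# Generic TCO framing: acquisition price plus recurring annual costs
# over a 3-5 year ownership period. All figures are placeholders.
def tco(tca, annual_costs, years=3):
    """annual_costs: recurring cost categories, in dollars per year."""
    return tca + years * sum(annual_costs.values())

example = tco(tca=500_000,
              annual_costs={"maintenance": 60_000, "support": 40_000,
                            "power_cooling": 25_000, "labor": 120_000})
print(f"${example:,}")  # -> $1,235,000, well above the $500,000 TCA
```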

BottomlineIT generally shifts the platform cost discussion to total cost of ownership (TCO) or Fit for Purpose, an IBM approach that looks at the task to which the system is being applied: the workload. That puts the cost discussion into the context not just of the hardware/software costs or the cost of all the additional requirements, but of what you need to achieve what you’re trying to do. Nobody buys computers at this level for the fun of it.

John Shedletsky, IBM VP of competitive technology, has been dissecting the cost of IBM platforms—the zEnterprise, Power Systems, and distributed x86 platforms—in terms of the workloads being run. It makes sense; different workloads have different requirements in terms of response time, throughput, availability, security, or any number of other attributes, and will benefit from different machines and configurations.

Most recently, Shedletsky introduced a new workload benchmark for business analytic reports executed in a typical day, called the BI Day Benchmark. Based on Cognos workloads, it looks at the number of queries generated; characterizes them as simple, intermediate, or complex; and scores them in terms of response time, throughput, or an aggregate measure. You can use the resulting data to calculate a cost per workload.
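
IBM has not published the benchmark’s internals, but conceptually the scoring it describes might reduce to something like the following; the query mix, weights, and cost figure here are entirely hypothetical:

```python
# Entirely hypothetical sketch of a BI-Day-style score: classify queries
# by complexity, aggregate a score, and divide platform cost by it.
# IBM has not published the benchmark internals; nothing here is official.
queries = [("simple", 0.4), ("intermediate", 1.7), ("complex", 9.2),
           ("simple", 0.3), ("intermediate", 2.1)]   # (class, response secs)

weights = {"simple": 1, "intermediate": 3, "complex": 10}  # made-up weights

# Aggregate throughput-style score: weighted queries per second.
score = sum(weights[cls] / secs for cls, secs in queries)
platform_cost_per_day = 250.0   # hypothetical daily platform cost
print(f"cost per score unit: ${platform_cost_per_day / score:.2f}")
```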

BottomlineIT, as a matter of policy, steers clear of proprietary benchmarks like BI Day. It is just too difficult to normalize results across all the variables that can be fudged, making repeatable results next to impossible.

A set of cost per workload analyses Shedletsky published back in March here avoids the pitfalls of a proprietary benchmark. In these analyses he pitted a zEnterprise with a zBX against POWER7 and Intel machines, all running multi-core blades. One analysis looked at running 500 heavy workloads. The hardware and software cost for a system consisting of 56 Intel blades (8 cores per blade), 448 cores in total, came to $11.5 million, which worked out to $23k per workload. On the zEnterprise running 192 total cores, the total hardware/software cost was $7.4 million, for a cost per workload of $15k. Click on Shedletsky’s report for all the fine print.
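
The per-workload figures follow directly from the totals:

```python
# Cost per workload = total hardware/software cost / workloads run.
workloads = 500
intel_total = 11_500_000   # 56 Intel blades, 448 cores
z_total = 7_400_000        # zEnterprise, 192 cores

print(f"Intel: ${intel_total / workloads:,.0f}/workload")  # -> $23,000
print(f"z:     ${z_total / workloads:,.0f}/workload")      # -> $14,800, ~$15k
```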

Another interesting workload analysis looked at running 28 front-end applications. Here he pitted 28 competitive app server applications running on 57 SPARC T3-1B blades (936 cores in total) at a hardware/software cost of $11.7 million against WebSphere Application Server running on 28 POWER7 blades plus 2 DataPower blades in the zBX (zEnterprise), 224 cores in total, at a hardware/software cost of $4.9 million. Per workload, the zEnterprise cost 58% less. Again, click on Shedletsky’s report above for the fine print.
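
And the 58% figure falls out of dividing each total across the same 28 workloads:

```python
# Per-workload comparison for the 28 front-end applications.
workloads = 28
sparc_total = 11_700_000   # 57 SPARC T3-1B blades, 936 cores
zbx_total = 4_900_000      # 28 POWER7 + 2 DataPower blades, 224 cores

saving = 1 - (zbx_total / workloads) / (sparc_total / workloads)
print(f"zEnterprise per-workload cost is {saving:.0%} lower")  # -> 58%
```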

Not all of Shedletsky’s analyses come out in favor of IBM’s zEnterprise or even POWER7 systems. Where they do, however, he makes an interesting observation: since his analyses typically include the full cost of ownership, where the z comes out ahead the difference often is not better platform performance but lower labor cost. He notes that consistent, structured zEnterprise management practices combine to lower labor costs.

If fewer people can manage all those blades and cores from a single unified console, the zEnterprise Unified Resource Manager, rather than multiple people learning multiple tools to achieve a comparable level of management, it has to lower the overall cost of operations and the cost per workload. As much as someone may complain that the entry-level zEnterprise, the z114, still starts at $75,000, good administrators cost that much or more.

Shedletsky’s BI Day benchmark may never catch on, but he is correct in that to understand a system’s true cost you have to look at the cost per workload. That is almost sure to lead you to hybrid computing and, particularly, the zEnterprise where you can mix platforms for different workloads running concurrently and manage them all in a structured, consistent way.
