Posts Tagged zBX

Mainframe Workload Economics

IBM never claims that every workload is suitable for the zEnterprise. The company prefers to talk about platform issues in terms of fit-for-purpose or tuned-to-the-task. With the advent of hybrid computing, the low-cost z114, and now the expected low-cost version of the zEC12 later this year, however, you could make a case that any workload that benefits from the reliability, security, and efficiency of the zEnterprise mainframe is fair game.

John Shedletsky, VP, IBM Competitive Project Office, did not try to make that case. To the contrary, earlier this week he presented the business case for five workloads that are economically and technically optimal on the zEnterprise: transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform. None of these should come as a surprise; with the possible exception of analytics and platform consolidation, they represent traditional mainframe workloads. BottomlineIT covered Shedletsky’s mainframe cost/workload analysis last year here.

This comes at a time when IBM has started making a lot of noise about new and different workloads on the zEnterprise. Doug Balog, head of the IBM System z mainframe group, for example, was quoted widely in the press earlier this month talking about bringing mobile computing workloads to the z. Says Balog in Midsize Insider: “I see there’s a trend in the market we haven’t directly connected to z yet, and that’s this mobile-social platform.”

Actually, this isn’t even all that new either. BottomlineIT’s sister blog, DancingDinosaur, was writing about organizations using SOA to connect CICS apps running on the z to users with mobile devices a few years ago here.

What Shedletsky really demonstrated this week was the cost-efficiency of the zEC12. In one example he compared a single workload (application production/dev/test) running on a 16x, 32-way HP Superdome and an 8x, 48-way Superdome against a 41-way zEC12. The zEC12 delivered the best price/performance by far: $111 million (5-yr TCA) for the zEC12 vs. $176 million (5-yr TCA) for the two Superdomes.

In another comparison, three Oracle database workloads (Oracle Enterprise Edition, Oracle RAC, 4 server nodes per cluster) supporting 18K transactions/sec. running on 12 HP DL580 servers (192 cores) priced out at $13.2 million (3-yr TCA). The same workloads on a zEC12 running three Oracle RAC clusters (4 nodes per cluster, each as a Linux on z guest) on 27 IFLs priced out at $5.7 million (3-yr TCA). The zEC12 came in at less than half the cost.

With analytics such a hot topic these days, Shedletsky also presented a comparison of the zEnterprise Analytics System 9700 (zEC12, DB2 v10, z/OS, 1 general processor, 1 zIIP) plus an IDAA against a current Teradata machine. The result: the Teradata came out at roughly $330K per query per hour compared to roughly $10K per query per hour for the zEC12. Workload time for the Teradata was 1,591 seconds, for 9.05 queries per hour, compared to 60.98 seconds and 236 queries per hour on the zEC12. The Teradata total cost was $2.9 million compared to $2.3 million for the zEC12.
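
For what it’s worth, the cost-per-query figures follow directly from the total costs and measured throughput; a quick back-of-the-envelope check in Python, using only the numbers above, reproduces them:

    # Rough check of the analytics comparison above (all figures from Shedletsky's numbers)
    teradata_cost, teradata_qph = 2_900_000, 9.05    # total cost ($), queries per hour
    zec12_cost, zec12_qph       = 2_300_000, 236

    print(teradata_cost / teradata_qph)   # ~320,000 -- roughly the $330K per query-hour reported
    print(zec12_cost / zec12_qph)         # ~9,745  -- roughly the $10K per query-hour reported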

None of these are what you would consider new workloads, and Shedletsky has yet to apply his cost analysis to mobile or social business workloads. However, the results shouldn’t be much different. Mobile applications, particularly mobile banking and other mobile transaction-oriented applications, will play right into the zEC12 strengths, especially when they are accessing CICS on the back end.

While transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform remain the sweet spot for the zEC12, Balog can continue to make his case for mobile and social business on the z. Maybe in the next set of Shedletsky comparative analyses we’ll see some of those workloads come up.

For social business the use cases aren’t quite clear yet. One use case that is emerging, however, is social business big data analytics. You can at least apply the zEC12 to the analytics processing part, and the efficiencies should be similar.


Meet the Newest Mainframe—zEnterprise EC12

Last month IBM launched the zEnterprise EC12 (zEC12). As you would expect from the next release of the top-of-the-line mainframe, the zEC12 delivers faster speed and better price/performance. With a 5.5 GHz core processor, up from 5.2 GHz in its predecessor (the z196), and an increase in the number of cores per chip (from 4 to 6), IBM calculates it delivers 50% more total capacity in the same footprint. The zEC12 won’t come cheap, but on a cost per MIPS basis it’s probably the best value around.

Beyond raw performance, it adds two major new capabilities, IBM zAware and Flash Express, along with a slew of other hardware and software optimizations. Both new features promise to be useful, but neither is a game changer. zAware is an analytics capability embedded in firmware, intended to monitor the entire zEnterprise system and identify problems before they impact operations.

Flash Express consists of a pair of memory cards installed in the zEC12, amounting to a new tier of memory. It is designed to streamline memory paging when transitioning between workloads; it moderates workload spikes and eliminates the need to page to disk, which should boost performance.

This machine is intended, initially, for shops with the most demanding workloads and no margin for error. The zEC12 also continues IBM’s hybrid computing thrust by including the zBX and new capabilities from System Director to be delivered through Unified Resource Manager APIs for better management of virtualized servers running on zBX blades.

This is a stunningly powerful machine, especially coming just 25 months after the z196 introduction. The zEC12 is intended for optimized corporate data serving. Its 101 configurable cores deliver a performance boost for all workloads. The zEC12 also comes with the usual array of assist processors, which are just configurable cores with the assist personality loaded on. Since they are zEC12 cores, they bring a 20% MIPS price/performance boost.

The directly competitive alternatives from the other (non-x86) server vendors are pretty slow by comparison. Oracle offers its top SPARC-based T4 server that features a 3.0 GHz processor. HP’s Integrity Superdome comes with the Itanium processor and tops out at 1.86 GHz. No performance rivals here, at least until each vendor refreshes its line.

For performance, IBM estimates up to a 45% improvement in Java workloads, up to a 27% improvement in CPU-intensive integer and floating point C/C++ applications, up to 30% improvement in throughput for DB2 for z/OS operational analytics, and more than 30% improvement in throughput for SAP workloads. IBM has, in effect, optimized the zEC12 from top to bottom of the stack. DB2 applications are certain to benefit as will WebSphere and SAP.

IBM characterizes zEC12 pricing as follows (a rough worked example of the MIPS math appears after the list):

  • Hardware—20% MIPS price/performance improvement for standard engines and specialty engines; Flash Express runs $125,000 per pair of cards (3.2 TB)
  • Software—updated pricing will provide 2%-7% MLC price/performance improvement for flat-capacity upgrades from the z196, and IFLs will maintain a PVU rating of 120 for software yet deliver 20% more MIPS
  • Maintenance—no less than 2% price/performance improvement for standard MIPS and 20% on IFL MIPS
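
To make the 20% MIPS price/performance figure concrete, here is a rough worked example; the baseline price and capacity below are invented for illustration (reading the improvement as 20% more capacity at the same price point), not IBM list figures:

    # Illustrative only: the effect of a 20% MIPS price/performance improvement
    baseline_price = 1_000_000     # hypothetical price for an engine ($)
    baseline_mips  = 1_000         # hypothetical capacity of that engine

    improved_mips = baseline_mips * 1.20       # 20% more MIPS at the same price
    print(baseline_price / baseline_mips)      # $1,000 per MIPS before
    print(baseline_price / improved_mips)      # ~$833 per MIPS after, about 17% lower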

IBM is signaling price aggressiveness and flexibility to attract new shops to the mainframe and stimulate new workloads. The deeply discounted Solution Edition program will include the new machine. IBM also is offering financing with deferred payments through the end of the year in a coordinated effort to move these machines now.

As impressive as the zEC12 specifications and price/performance are, BottomlineIT is most impressed by the speed at which IBM delivered the machine. It broke with its historic 3-year release cycle to deliver this potent hybrid machine just 25 months after the z196 first introduced hybrid computing.


A Choice of Hybrid Systems

No enterprise data center today runs just one platform. Most have Intel/Windows or some flavor(s) of UNIX/Linux as their main production systems, but they generally run a mix of platforms and operating systems, even adding Apple, VMware, and mainframes to the mix.

Organizations end up with this mix of platforms for perfectly understandable reasons, such as acquisitions or to meet special software requirements, but it results in a certain amount of inefficiency and added cost. For example, you need to hire and retain people with multiple skill sets.

Recognizing that situation—even contributing to it with its array of platforms and operating systems—IBM introduced the concept of hybrid computing in 2010 with the zEnterprise-zBX. Through hybrid computing, an organization could run workloads concurrently on multiple hardware platforms and operating systems while managing them as a single logical system. The benefit: simplified operation and management efficiency.

IBM currently offers two hybrid platforms: the zEnterprise-zBX combination and IBM PureSystems appliances, starting with PureFlex and PureApplication. Both hybrid platforms are tightly integrated, highly optimized systems that accept a variety of blades. Although there is platform overlap, the two hybrid environments do not support exactly the same operating environments.

For example, PureFlex, an IaaS offering, and PureApplication, a PaaS offering, bring IBM System i to the hybrid party along with Power and System x, which the zBX supports too, but they skip the mainframe’s z/OS and z/VM operating environments. You manage the PureSystems hybrid environment with the Flex System Manager (FSM). The zEnterprise-zBX has its own hybrid management tool, the Unified Resource Manager, which looks very similar to FSM.

Despite the similarities, bringing FSM and the Unified Resource Manager together is not going to happen in any foreseeable future. That is the definitive word from Jeff Frey, IBM Fellow and CTO for System z: “Flex Manager and the Unified Resource Manager will not come together,” he told BottomlineIT.

That does not mean the zEnterprise-zBX and PureSystems won’t play nicely together, but they will do so higher up in the IT stack. “We will federate the management at a higher level,” he said. Today, that pretty much means organizations using both platforms, zEnterprise and PureSystems, will have to rely on tools like Tivoli to tie the pieces together and manage them. At the lower levels of the stack, where the hardware lives, each platform will still require its own management tooling.

In effect, Tivoli will provide the federation layer and enable higher level, logical management across both systems. When you need to manage some physical aspect of the underlying hardware you still will need platform-specific tools.

IBM has two potential rivals in the hybrid computing space. Oracle/Sun offers a variety of Sun servers that run either Solaris or Windows/Linux x86 operating systems, but it has offered no evidence of any interest in tightly integrating and optimizing them as IBM has. Similarly, HP could couple HP-UX and Windows/Linux on its Intel x86 and Itanium servers, but again it has given no indication of intending to do so. Instead, both vendors direct hybrid computing discussions to the cloud, where the different systems can play together at an even higher level of abstraction. (IBM also offers a multi-platform cloud environment.)

Meanwhile, IBM is moving forward with the next advances to its hybrid environments. For example, expect some of the improvements incorporated into PureSystems hardware to make their way into the zBX. Similarly, IBM is planning to push zBX scalability beyond the 112 blades the box supports today, as well as adding clustering capabilities. The blade count expansion, combined with the technology enhancements brought over from PureSystems, should, Frey hopes, make clear IBM’s long-term commitment to both of its hybrid computing platforms.

At the same time, IBM is enhancing PureSystems for the purpose of scaling it beyond its current four appliance limit. This will give it something more like the ensemble approach used with the System z. A System z ensemble is a collection of two to eight mainframes where at least one has a zBX attached. The resources of a zEnterprise ensemble are managed and virtualized as a single pool of resources integrating system and workload management across the multi-system, multi-tier, multi-architecture environment.

With two IBM hybrid computing platforms the hybrid approach is here for real at IBM. The challenge becomes choosing the one best for your shop. Or you can seek to satisfy your hybrid computing needs through the cloud, where you will find IBM along with Oracle, HP, and a slew of others.


IBM Hybrid Computing Choices

Hybrid computing is a concept IBM introduced almost two years ago with the zEnterprise. The idea is that the enterprise can run a variety of workloads on different hardware platforms and manage it all efficiently as a single virtualized system from one console running on the mainframe. In the case of the zEnterprise, an enterprise can mix workloads running on z/OS, Linux, AIX, and Windows on System z, System p, and System x hardware.  The payoff comes from increased resiliency and greater management efficiency.  The cost savings in labor alone could pay for the hybrid computing investment.

If one hybrid computing platform wasn’t enough, IBM now offers a choice of IBM hybrid computing options, the zEnterprise-zBX combination and the new PureSystems family.

Earlier this year, IBM introduced the PureSystems family. At this time there are two PureSystems options: PureFlex, an IaaS offering, and PureApplication, a PaaS offering. IBM implies that more PureSystems will be coming (BottomlineIT’s guess: PureAnalytics and PureTransaction). PureSystems brings System i to the hybrid party along with Power and System x but skips z/OS and z/VM. You manage this hybrid environment with the Flex System Manager (FSM), which looks very similar to the zEnterprise’s Unified Resource Manager. BottomlineIT covered the PureSystems introduction here.

The zEnterprise-zBX combination now encompasses z/OS, Linux on z, z/VM, Power blades, AIX, Linux, System x blades, Windows, and specialty blades. You can manage the resulting hybrid platform as one virtualized system through a management console, the Unified Resource Manager. About the only thing missing is IBM’s System i, which is available as part of PureSystems.

So now the challenge becomes choosing between two IBM hybrid computing environments that look very similar but aren’t quite the same, at least not yet. Which do you use?

Obviously, if you need z/OS, you go with the zEnterprise. It provides the optimum platform for enterprise cloud computing with its extreme scalability and leading security and resiliency. It supports tens of thousands of users while new offerings expand the z role in BI and real time analytics, especially if much of the data reside on the z.

If you must include System i, you go with PureFlex. Or, if you find you have a hybrid workload but don’t require the governance and tight integration with the z, you can choose IBM PureSystems and connect it to the zEnterprise via your existing network. Tivoli products can provide the integration of business processes.

If you look at your choice of hybrid computing environments in terms of cost, PureSystems probably will be the less costly option; how much less depends on how it is configured. The entry PureFlex starts at $156k; the standard version, which includes storage and networking, starts at $217k; and the Enterprise version, intended for scalable cloud deployment and including redundancy for resilient operation, starts at $312k. Plus there is the cost of the OS and hypervisor (BTW, open source KVM is free).

The zEnterprise option will cost more but not necessarily all that much more depending on how you configure it, whether you can take advantage of the deeply discounted System z Solution Edition packages, and how well you negotiate. The lowest cost zEnterprise-zBX hybrid environment includes the z114 ($75k base price but expect to pay more once it is configured), about $200k or more for a zBX, depending on the type and number of blades, plus whatever you need for storage.

The payback from hybrid computing comes mainly from the operational efficiency and labor savings it allows. PureSystems especially comes pre-integrated and optimized for the workload and is packed with built-in management expertise and automation that allow fewer, less skilled people to handle the hybrid computing environment.

Right now the wrinkle in the hybrid computing management efficiency story comes from organizations that want both the zEnterprise and PureSystems. This would not be an odd pairing at all, but it will require two different management tools, Flex System Manager for the PureSystems environment and the Unified Resource Manager for the zEnterprise-zBX. At a recent briefing an IBM manager noted that efforts already were underway to bring the two management schemes together although when that actually might happen he couldn’t predict. Let’s hope it will be sooner rather than later.


Cost per Workload—the New Platform Metric

How best to understand what a computer costs? Total cost of acquisition (TCA) is the price you pay to have it land on the loading dock and get it up and running doing real work. That’s the lowest figure, but it is not reflective of what a computer actually costs. Total cost of ownership (TCO) takes the cost of acquisition and adds in the cost of maintenance, support, integration, infrastructure, power/cooling, and more over three to five years. Needless to say, TCO is higher but more realistic.
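
A rough sketch of the difference: TCO layers the recurring costs on top of the acquisition price over the life of the machine. The figures below are invented purely to show the shape of the calculation, not actual platform costs:

    # Illustrative TCA vs. TCO arithmetic (all figures hypothetical)
    tca = 500_000                    # hardware + software, landed and running
    annual_maintenance = 60_000      # maintenance and support contracts
    annual_labor       = 150_000     # administration labor
    annual_facilities  = 30_000      # power, cooling, floor space
    years = 5

    tco = tca + years * (annual_maintenance + annual_labor + annual_facilities)
    print(tco)   # 1,700,000 -- several times the acquisition price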

BottomlineIT generally shifts the platform cost discussion to total cost of ownership (TCO) or Fit for Purpose, an IBM approach that looks at the task to which the system is being applied, the workload. That puts the cost discussion into the context not just of the hardware/software cost or the cost of all the additional requirements but of what you need to achieve what you’re trying to do. Nobody buys computers at this level for the fun of it.

John Shedletsky, IBM VP of competitive technology, has been dissecting the cost of IBM platforms—the zEnterprise, Power Systems, and distributed x86 platforms—in terms of the workloads being run. It makes sense; different workloads have different requirements in terms of response or throughput or availability or security or any number of other attributes, and they will benefit from different machines and configurations.

Most recently, Shedletsky introduced a new workload benchmark for business analytic reports executed in a typical day, called the BI Day Benchmark. Based on Cognos workloads, it looks at the number of queries generated; characterizes them as simple, intermediate, or complex; and scores them in terms of response time, throughput, or an aggregate measure. You can use the resulting data to calculate a cost per workload.
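
IBM has not published the BI Day scoring formula, so the sketch below is purely a hypothetical illustration of how a daily query mix might be rolled up into an aggregate throughput and a cost figure; every count, response time, and cost here is invented:

    # Hypothetical sketch of a BI-Day-style rollup; the real benchmark formula is proprietary.
    queries_per_day  = {"simple": 50_000, "intermediate": 8_000, "complex": 500}
    avg_response_sec = {"simple": 0.5, "intermediate": 5.0, "complex": 60.0}

    total_queries = sum(queries_per_day.values())
    busy_seconds  = sum(queries_per_day[c] * avg_response_sec[c] for c in queries_per_day)
    throughput_qph = total_queries / (busy_seconds / 3600)   # aggregate queries per hour

    system_cost = 5_000_000                                   # hypothetical 3-yr TCA
    print(round(throughput_qph), round(system_cost / throughput_qph))  # qph, cost per query-hour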

BottomlineIT, as a matter of policy, steers clear of proprietary benchmarks like BI Day.  It is just too difficult to normalize the results across all the variables that can be fudged, making it next to impossible to come up with repeatable results.

A set of cost per workload analyses Shedletsky published back in March here avoids the pitfalls of a proprietary benchmark.  In these analyses he pitted a zEnterprise with a zBX against POWER7 and Intel machines all running multi-core blades.  One analysis looked at running 500 heavy workloads. The hardware and software cost for a system consisting of 56 Intel Blades (8 cores per blade) for a total of 448 cores came to $11.5 million, which worked out to $23k per workload. On the zEnterprise running 192 total cores, the total hardware/software cost was $7.4 million for a cost per workload of $15k. Click on Shedletsky’s report for all the fine print.
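
The per-workload numbers are simply total hardware/software cost divided by the number of workloads; checking against the figures above:

    # Cost per workload for the 500-heavy-workload comparison (figures from the text above)
    intel_total = 11_500_000        # 56 Intel blades, 448 cores
    zenterprise_total = 7_400_000   # zEnterprise, 192 cores
    workloads = 500

    print(intel_total / workloads)        # 23,000 -> $23k per workload
    print(zenterprise_total / workloads)  # 14,800 -> roughly the $15k per workload cited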

Another interesting workload analysis looked at running 28 front end applications. Here he compared 28 competitive app server applications on 57 SPARC T3-1B blades (936 cores in total) at a hardware/software cost of $11.7 million against WebSphere App Server running on 28 POWER7 blades plus 2 DataPower blades in the zBX (224 cores in total) at a hardware/software cost of $4.9 million. Per workload, the zEnterprise cost 58% less. Again, click on Shedletsky’s report above for the fine print.

Not all of Shedletsky’s analyses come out in favor of IBM’s zEnterprise or even POWER7 systems. Where they do, however, he makes an interesting observation: since his analyses typically include the full cost of ownership, where the z comes out ahead the difference often is not better platform performance but the cost of labor. He notes that consistent, structured zEnterprise management practices combine to lower labor costs.

If fewer people can manage all those blades and cores from a single unified console, the zEnterprise Unified Resource Manager, rather than requiring multiple people learning multiple tools to achieve a comparable level of management, it has to lower the overall cost of operations and the cost per workload.  As much as someone may complain that the entry level zEnterprise, the z114, still starts at $75,000, good administrators cost that much or more.

Shedletsky’s BI Day benchmark may never catch on, but he is correct in that to understand a system’s true cost you have to look at the cost per workload. That is almost sure to lead you to hybrid computing and, particularly, the zEnterprise where you can mix platforms for different workloads running concurrently and manage them all in a structured, consistent way.


OVA and oVirt Drive KVM Success

In the x86 world VMware is the 900-pound hypervisor gorilla. Even Microsoft’s Hyper-V takes a back seat to the VMware hypervisor. KVM, however, is gaining traction as an open source alternative. As an open source product, it brings the advantages of portability, customizability, and low cost.

In terms of overall platform virtualization, the Linux world may or may not be lagging behind Windows in the rate of server virtualization, depending on which studies you have been reading. Regardless, with IBM and Red Hat getting behind the KVM hypervisor in a big way last year, the pace at which Linux servers are virtualized should pick up.

Responsibility for driving KVM today has been turned over to the Open Virtualization Alliance (OVA), which has made significant gains in attracting participation since its launch last spring. It currently boasts over 240 members, up from a couple of dozen when BottomlineIT looked at it months ago.

The OVA also has been bolstered by an open virtualization development organization, the oVirt Project here. Its founding partners include Canonical, Cisco, IBM, Intel, NetApp, Red Hat, and SUSE. The founders promise to deliver a truly open source, openly governed, and integrated virtualization stack. The oVirt team aims to deliver both a cohesive stack and discretely reusable components for open virtualization management, which should become key building blocks for private and public cloud deployments.

 The oVirt Project bills itself as an open virtualization project providing a feature-rich server virtualization management system with advanced capabilities for hosts and guests, including high availability, live migration, storage management, system scheduler, and more. The oVirt goal is to develop a broad ecosystem of tools to make up a complete integrated platform and to deliver them on a well defined release schedule. These are components designed and tested to work together, and oVirt should become a central venue for user and developer cooperation.

The idea behind OVA and oVirt is that effective enterprise virtualization requires more than just a hypervisor, noted Jean Staten Healy, IBM Director of Worldwide Cross-IBM Linux and Open Virtualization, at a recent briefing. In addition to a feature-rich hypervisor like KVM, Healy cited the need for well-defined APIs at all layers of the stack, readily accessible (reasonably priced) systems and tools, a correspondingly feature-rich, heterogeneous management platform, and a robust ecosystem to extend the open hypervisor and management platform, all of which oVirt is tackling.

Now KVM and the OVA just need success cases to demonstrate the technology. Initially, IBM provided the core case experience, its Research Compute Cloud (RC2). RC2 runs over 200 iDataplex nodes, an IBM x86 product using KVM. It handles 2000 concurrent instances, is used by thousands of IBM employees worldwide, and provides 100TB of block storage attached to KVM instances via a storage cloud. RC2 also handles actual IBM internal chargeback based on charges-per-VM hour across IBM.

Today IBM is using KVM with the System x blades in the zBX. It also supports KVM as a tier 1 virtualization technology with IBM System Director VMControl and Tivoli system management products. On System x, KVM delivered 18% better virtual machine consolidation in a SPECvirt_sc2010 benchmark test.

Recently KVM was adopted by DutchCloud, the leading ISP in the Netherlands. DutchCloud is a cloud-based IaaS provider; companies choose it for QoS, reliability, and low price.

DutchCloud opted for IBM SmartCloud Provisioning as its core delivery platform across multiple server and storage nodes and KVM as the hypervisor for virtual machines. KVM offers both minimal licensing costs and the ability to support mixed (KVM and VMware) deployments. IBM’s System Director VMControl provides heterogeneous virtual machine management. The combination of KVM and SmartCloud Provisioning enabled DutchCloud to provision hundreds of customer virtual machines in a few minutes and ensure isolation through effective multi-tenancy. And since SmartCloud Provisioning can communicate directly with the KVM hypervisor, it avoids the need to license additional management components.
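
The internals of SmartCloud Provisioning aren’t public, but the kind of talk-directly-to-KVM automation it relies on can be sketched with the standard libvirt Python bindings. The guest definition below is a minimal, hypothetical example, not DutchCloud’s configuration or IBM’s product code:

    # Minimal sketch: define and boot a KVM guest directly through libvirt (hypothetical values)
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>customer-vm-001</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/customer-vm-001.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'><source network='default'/></interface>
      </devices>
    </domain>"""

    conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
    dom = conn.defineXML(DOMAIN_XML)        # register the guest definition
    dom.create()                            # start the virtual machine
    conn.close()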

KVM is primarily a distributed x86 Linux platform and cloud play. It may, however, make its way into IBM’s zEnterprise environments through the zBX as the hypervisor for the x86 (IBM eX5) blades residing there.


Where Best to Run Linux

Mainframe data centers have many platform options for running Linux. The challenge is deciding where: x86 servers and blades, IBM Power Systems, HP Itanium, Oracle/Sun, IBM’s System z or zEnterprise.

Here is what IBM has to say about the various options: System z/zEnterprise, Power/System p/System i, and System x blades and rack mount servers. And now with the zBX there is yet another option: Linux blades on the zBX. But this doesn’t answer the real question: where should your organization run Linux?

If you have only one platform the answer is simple. Linux has been widely ported. You can probably run it on whatever you already have.

Most organizations today, especially enterprise data centers, have multiple platforms running Windows, Linux, UNIX, AIX, Solaris, and more. And they run these on different hardware platforms from IBM, HP, Oracle/Sun, Dell, and others. Now the decision of where to run Linux gets complicated. The classic consultant/analyst response: it depends.

Again, IBM’s response is to lead the organization through a Fit for Purpose exercise. Here is how IBM discusses the exercise in regard to cloud computing. BottomlineIT’s sister blog addressed Fit for Purpose here last year.

The Fit for Purpose exercise, however, can be reduced to four basic choices (a rough sketch of how you might weigh them follows the list):

  1. Where does the data that your Linux applications will use most reside—in general you get the best end-to-end performance the closer the data is to the applications. So, if your Linux applications need to use DB2 data residing on the mainframe, you probably want to run Linux on the System z or a zBX blade.
  2. Since cost is always an issue, look at the price/performance numbers—in this case you have to look at all the costs, paying particular attention to cost in terms of performance delivered. Running Linux on a cheap, underpowered x86 box may cost less but not deliver the performance you want.
  3. Available skills—look at where your available Linux and platform skills lie and opt for the platform where you have the most skills on hand. Of course, a relatively modest investment in training can pay big dividends in this area.
  4. IT culture—even if the data proximity or price/performance considerations fall one way, many shops will opt for the platform favored by the dominant IT culture simply to avoid resistance.
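
Purely as an illustration, here is one way to turn the four considerations into a crude comparative score; the weights and scores are placeholders you would set for your own shop, not anything IBM prescribes:

    # Hypothetical weighing of the four Fit-for-Purpose factors; all weights and scores are placeholders.
    def platform_score(data_proximity, price_performance, skills, culture_fit,
                       weights=(0.4, 0.3, 0.2, 0.1)):
        """Each factor is scored 0-10 for a candidate platform; a higher total means a better fit."""
        return sum(w * f for w, f in zip(weights, (data_proximity, price_performance, skills, culture_fit)))

    # Example: Linux next to DB2 data on the z vs. a commodity x86 farm (scores invented)
    print(platform_score(9, 6, 5, 7))   # Linux on System z / zBX blade -> 7.1
    print(platform_score(3, 8, 8, 8))   # Linux on x86 -> 6.0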

Further complicating the decision is the lack of good data available on both the cost and the performance of Linux on the zBX or its Linux blades. Plus there are other variables to consider, such as whether you run Linux on an IFL on the z with or without z/VM. Similarly, you can run Linux on an x-platform with or without VMware or some other hypervisor. These choices will impact price, performance, and skills.

Although the answer to the question of where to run Linux may not be as simple as many would like, DancingDinosaur believes as a general principle it is always better to have choices.
