Posts Tagged zEnterprise

Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers when compared with the public cloud, and at a slightly higher VM count when compared with x86 machines. View the IBM z TCO presentation here.
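
To see roughly how such a crossover arises, consider a minimal break-even sketch: the z carries a high fixed acquisition cost but a low marginal cost per additional Linux virtual server, while cloud instances are priced per VM. All the dollar figures below are invented for illustration only; IBM's actual model weighs more than 30 cost variables.

    # Toy break-even model: fixed-cost platform vs. per-VM pricing.
    # All dollar figures are illustrative assumptions, not IBM's numbers.
    Z_FIXED = 1_500_000      # assumed z base cost (hardware, software, setup)
    Z_PER_VM = 1_000         # assumed marginal cost per added Linux VM on z
    CLOUD_PER_VM = 8_500     # assumed all-in cost per public cloud instance

    def z_cost(vms: int) -> int:
        return Z_FIXED + Z_PER_VM * vms

    def cloud_cost(vms: int) -> int:
        return CLOUD_PER_VM * vms

    # First VM count at which the z becomes the cheaper platform.
    breakeven = next(n for n in range(1, 10_000) if z_cost(n) < cloud_cost(n))
    print(f"z becomes cheaper at {breakeven} VMs")  # ~200 with these inputs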

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse workloads (a range of low, medium, and high I/O). In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in software, labor, and power. Overall, the TCO examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the east US region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for instances. A labor cost was included for managing instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO, $1 million, compared to $1.6 million for x86 systems and $3.9 million for the public cloud.
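
Working backward from those reported totals, the per-platform advantage is simple arithmetic. A quick sketch using the figures above; the small differences from the published 49-75% and 34-73% ranges reflect rounding in the reported totals.

    # Per-workload advantage computed from the reported 3-year TCO totals ($M).
    cases = {
        "398 workloads": {"public cloud": 37.0, "x86": 18.3, "z cloud": 9.4},
        "48 workloads":  {"public cloud": 3.9,  "x86": 1.6,  "z cloud": 1.0},
    }

    for label, totals in cases.items():
        z = totals["z cloud"]
        for platform, total in totals.items():
            if platform != "z cloud":
                print(f"{label}: z vs {platform}: {1 - z / total:.0%} lower TCO")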

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices or a different mix of high-mid-low I/O workloads your results will be different but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog


Lessons from IBM Eagle TCO Analyses

A company running an obsolete z890 mainframe with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 UNIX servers. The production workload consisted of applications, database, testing, development, security, and more. Five years later, the company was running the same workload in the 36-server, multi-core distributed environment (41x more cores than the z890), except that its 4-year TCO had gone from $4.9 million to $17.9 million, based on an IBM Eagle study. The lesson, the Eagle team notes: cores drive platform costs in distributed systems.
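
A toy model makes the lesson concrete, since per-core software licensing and maintenance multiply across a distributed estate. The license rate below is an assumption for illustration, not a figure from the Eagle study.

    # Toy model of the core-count effect: per-core software charges dominate
    # distributed TCO. The annual per-core rate is an assumed figure.
    PER_CORE_LICENSE = 3_000          # assumed annual per-core software cost ($)
    YEARS = 4

    z890_cores = 0.88                 # ~332 MIPS, as cited above
    dist_cores = z890_cores * 41      # the study cited 41x more cores (~36)

    def sw_cost(cores: float) -> float:
        return cores * PER_CORE_LICENSE * YEARS

    print(f"z890 4-yr software:        ${sw_cost(z890_cores):,.0f}")
    print(f"distributed 4-yr software: ${sw_cost(dist_cores):,.0f}")
    print(f"multiplier: {sw_cost(dist_cores) / sw_cost(z890_cores):.0f}x")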

Then there is the case of a 3500 MIPS mainframe shop that budgeted $10 million for a 1-year migration to a distributed environment. Eighteen months into the project, now six months behind schedule, the company had spent $25 million and only managed to offload 350 MIPS. In addition, it had to increase staff to cover the over-run, implement steps to replace mainframe automation, acquire additional distributed capacity beyond the initial prediction (to support only 10% of total MIPS offloaded), and extend the costly period of running the old and new systems in parallel due to the schedule overrun. Not surprisingly, the executive sponsor is gone.

If the goal of a migration to the distributed environment is cost savings, the IBM Eagle team has concluded after 3 years of doing such analyses, most migrations are a failure. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and since then reports completing over 300 user studies. Often its studies are used to determine the best platform among IBM's various choices for a given set of workloads, usually as part of a Fit for Purpose study. In other cases, the Eagle analysis is aimed at enabling a System z shop to avoid a migration to a distributed platform. The Eagle team, in fact, is platform agnostic until it completes its quantitative analysis, at which point the resulting numbers generally make the decision clear.

Along the way, the Eagle team has learned a few lessons.  For example:  re-hosting projects tend to be larger than anticipated. The typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; new systems generally are more cost-efficient. For example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR's MIPS by 5% (the monthly software cost savings paid for the upgrade almost immediately)
  • Schedule workloads to take advantage of sub-capacity software pricing for platforms that offer it, which may produce free workloads
  • Consolidate workloads on Linux, which invariably saves money, especially when consolidating many Linux virtual servers on a mainframe IFL. (A recent debate on LinkedIn focused on how many virtual instances can run on an IFL, with some suggesting a max of 20. The official IBM figure: you can consolidate up to 60 distributed cores or more on a single System z core, and a single System z core = an IFL. A quick sizing sketch follows this list.)
  • Changing the database can reduce capacity requirements, resulting in lower hardware and software costs
  • Consider the IBM mainframe Solution Edition program, the best mainframe deal going; it enables you to acquire a new mainframe for workloads you've never run on a mainframe at a deeply discounted package price that includes hardware, software, middleware, and 3 years of maintenance.
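
As promised above, here is a minimal consolidation-sizing sketch. The 60:1 ratio is IBM's official figure cited in the list; the helper function ifls_needed is hypothetical, and real sizing depends heavily on workload character.

    import math

    def ifls_needed(distributed_cores: int, ratio: int = 60) -> int:
        # ratio: distributed cores absorbed per System z core (an IFL).
        # IBM's figure is up to 60:1; LinkedIn skeptics suggested 20:1.
        return math.ceil(distributed_cores / ratio)

    print(ifls_needed(300))      # 5 IFLs at IBM's 60:1 figure
    print(ifls_needed(300, 20))  # 15 IFLs at the conservative 20:1 figure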

BottomlineIT generally is skeptical of TCO analyses from vendors. To be useful, the analysis needs to include full context, technical details (components, release levels, and prices), and specific quantified benchmark results. In addition, there are soft costs that must be considered. Eagle analyses generally do that.

In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.


Mainframe Workload Economics

IBM never claims that every workload is suitable for the zEnterprise. The company prefers to talk about platform issues in terms of fit-for-purpose or tuned-to-the-task. With the advent of hybrid computing, the low cost z114, and now the expected low cost version of the zEC12 later this year, however, you could make a case that any workload benefiting from the reliability, security, and efficiency of the zEnterprise mainframe is fair game.

John Shedletsky, VP, IBM Competitive Project Office, did not try to make that case. To the contrary, earlier this week he presented the business case for five workloads that are economically and technically optimal on the zEnterprise. They are: transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform. None of these should be a surprise; with the possible exception of analytics and the consolidated platform, they represent traditional mainframe workloads. BottomlineIT covered Shedletsky's mainframe cost/workload analysis last year here.

This comes at a time when IBM has started making a lot of noise about new and different workloads on the zEnterprise. Doug Balog, head of IBM System z mainframe group, for example, was quoted widely in the press earlier this month talking about bringing mobile computing workloads to the z. Says Balog in Midsize Insider: “I see there’s a trend in the market we haven’t directly connected to z yet, and that’s this mobile-social platform.”

Actually, this isn’t even all that new either. BottomlineIT’s sister blog, DancingDinosaur, was writing about organizations using SOA to connect CICS apps running on the z to users with mobile devices a few years ago here.

What Shedletsky really demonstrated this week was the cost-efficiency of the zEC12. In one example he compared a single workload, application production/dev/test, running on a 16x 32-way HP Superdome and an 8x 48-way Superdome against a 41-way zEC12. The zEC12 delivered the best price/performance by far: $111 million (5-year TCA) for the zEC12 vs. $176 million (5-year TCA) for the two Superdomes.

In another comparison, 3 Oracle database workloads (Oracle Enterprise Edition, Oracle RAC, 4 server nodes per cluster) supporting 18K transactions/sec ran on 12 HP DL580 servers (192 cores), which priced out at $13.2 million (3-year TCA). The same workloads on a zEC12 running 3 Oracle RAC clusters (4 nodes per cluster, each node a Linux guest) with 27 IFLs priced out at $5.7 million (3-year TCA). The zEC12 came in at less than half the cost.

With analytics such a hot topic these days, Shedletsky also presented a comparison of the zEnterprise Analytics System 9700 (zEC12, DB2 v10, z/OS, 1 general processor, 1 zIIP) plus an IDAA against a current Teradata machine. The result: the Teradata cost $330K per query per hour of throughput compared to $10K per query per hour on the z. Workload time for the Teradata was 1,591 seconds, or 9.05 queries per hour, compared to 60.98 seconds and 236 queries per hour on the zEC12. The Teradata total cost was $2.9 million compared to $2.3 million for the zEC12.
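
Those cost-per-throughput figures follow directly from the reported totals and query rates; a quick arithmetic check (the small gap from the quoted $330K and $10K reflects rounding):

    # Reconstructing the $ per query-per-hour figures from the reported data.
    systems = {
        "Teradata":          (2_900_000, 9.05),   # (total cost $, queries/hour)
        "zEC12 9700 + IDAA": (2_300_000, 236.0),
    }

    for name, (cost, qph) in systems.items():
        print(f"{name}: ${cost / qph:,.0f} per query/hour of throughput")
    # Teradata: ~$320,442; zEC12: ~$9,746 -- in line with $330K vs. $10K above.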

None of these are what you would consider new workloads, and Shedletsky has yet to apply his cost analysis to mobile or social business workloads. However, the results shouldn’t be much different. Mobile applications, particularly mobile banking and other mobile transaction-oriented applications, will play right into the zEC12 strengths, especially when they are accessing CICS on the back end.

While transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform remain the sweet spot for the zEC12, Balog can continue to make his case for mobile and social business on the z. Maybe in the next set of Shedletsky comparative analyses we’ll see some of those workloads come up.

For social business the use cases aren’t quite clear yet. One use case that is emerging, however, is social business big data analytics. Now you can apply the zEC12 to the analytics processing part at least and the efficiencies should be similar.


Winning the Coming Talent War Mainframe Style

The next frontier in the ongoing talent war, according to McKinsey, will be deep analytics, a critical weapon required to probe big data in the competition underpinning new waves of productivity, growth, and innovation. Are you ready to compete and win in this technical talent war?

Similarly, Information Week contends that data expertise is called for to take advantage of data mining, text mining, forecasting, and machine learning techniques. As it turns out, the mainframe is ideally positioned to win if you can attract the right talent.

Finding, hiring, and keeping good talent within the technology realm is the number one concern cited by 41% of senior executives, hiring managers, and team leaders responding to the latest Harris Allied Tech Hiring and Retention Survey. Retention of existing talent was the next biggest concern, cited by 19.1%.

This past fall, CA published the results of its latest mainframe survey that came to similar conclusions. It found three major trends on the current and future role of the mainframe:

  1. The mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise
  2. The mainframe is an enabler of innovation as big data and cloud computing transform the face of enterprise IT
  3. Demand is growing for tech talent with cross-disciplinary skills to fill critical mainframe workforce needs in this new view of enterprise IT

Among the respondents to the CA survey, 76% of global respondents believe their organizations will face a shortage of mainframe skills in the future, yet almost all respondents, 98%, felt their organizations were moderately or highly prepared to ensure the continuity of their mainframe workforce. In contrast, only 8% indicated having great difficulty finding qualified mainframe talent while 61% reported having some difficulty in doing so.

The Harris survey was conducted in September and October 2012. Its message is clear: don't be fooled by the national unemployment figures, currently hovering above 8%. "In the technology space in particular, concerns over the ability to attract game-changing talent have become institutional and are keeping all levels of management awake at night," notes Harris Allied Managing Director Kathy Harris.

The reason, as suggested in recent IBM studies, is that success with critical new technologies around big data, analytics, cloud computing, social business, virtualization, and mobile increasingly gives top performing organizations their competitive advantage. The lingering recession, however, has taken its toll; unless your data center has been charged to proactively keep up, it probably is saddled with 5-year-old skills at best, and more likely 10-year-old skills.

The Harris study picked up on this. When asking respondents the primary reason they thought people left their organization, 20% said people left for more exciting job opportunities or the chance to get their hands on some hot new technology.

Some companies recognize the problem and belatedly are trying to get back into the tech talent race. When Harris asked what companies were doing to attract this kind of top talent, 38% said they now were offering great opportunities for career growth. Others, 28%, were offering opportunities for professional development to recruit top tech pros. Fewer, 24.5%, were offering competitive compensation packages, while fewer still, 9%, were offering competitive benefits packages.

To retain the top tech talent they already had, 33.6% were offering opportunities for professional development, the single most important strategy they leveraged to retain employees. Others, 24.5%, offered opportunities for career advancement while 23.6% offered competitive salaries. Still a few hoped a telecommuting option or competitive bonuses would do the trick.

Clearly mainframe shops, like IT in general, are facing a transition as Linux, Java, SOA, cloud computing, analytics, big data, mobile, and social play increasing roles in the organization and the mainframe gains the capabilities to play in all these arenas. Advanced mainframe skills like CICS are great, but they're just a start. You also need REST, Hadoop, and a slew of mobile, cloud, and data management skill sets. At the same time, hybrid systems and expert integrated systems like IBM PureSystems and zEnterprise/zBX give shops the ability to tap a broader array of tech talent while baking in much of the expertise required.


Achieving the Private Cloud Business Payoff Fast

Nationwide Insurance eliminated both capital and operational expenditures through a private cloud and expects to save about $15 million over three years. In addition, it expects the more compact and efficient private cloud landscape to mean lower costs in the future.

The City of Honolulu turned to a private cloud and reduced application deployment time from one week to only hours. It also reduced the licensing cost of one database by 68%. Better still, a new property tax appraisal system generated $1.4 million of increased tax revenue in just three months.

The private cloud market, especially among larger enterprises, is strong and is expected to show a CAGR of 21.5% through 2015, according to research distributed by ReportLinker.com. Another report, from Renub Research, quotes analysts saying security is a big concern for enterprises that may be considering the public cloud. For such organizations, the private cloud represents an alternative with a tighter security model, one that enables their IT managers to control the building, deployment, and management of those privately owned, internal clouds.

Nationwide and Honolulu each built their private clouds on the IBM mainframe. From its introduction last August, IBM has aimed the zEC12 at cloud use cases, especially private clouds. The zEC12’s massive virtualization capabilities make it possible to handle private cloud environments consisting of thousands of distributed systems running Linux on zEC12.

One zEC12, notes IBM, can encompass the capacity of an entire multi-platform data center in a single system. The newest z also enables organizations to run conventional IT workloads and private cloud applications on one system.  Furthermore, if you are looking at a zEC12 coupled with the zBX (extension cabinet) you can have a multi-platform private cloud running Linux, Windows, and AIX workloads.  On a somewhat smaller scale, you can build a multi-platform private cloud using the IBM PureSystems machines.

Organizations everywhere are adopting private clouds.  The Open Data Center Alliance reports faster private cloud adoption than originally predicted. Over half its survey respondents will be running more than 40% of their IT operations in private clouds by 2015.

Mainframes make a particularly good choice for private clouds. Nationwide, the insurance company, initially planned to consolidate 3000 distributed servers to Linux virtual servers running on several z mainframes, creating a multi-platform private mainframe cloud optimized for its different workloads. The goal was to improve efficiency.

The key benefit: higher utilization and better economies of scale, effectively making the mainframes into a unified private cloud—a single set of resources, managed with the same tools but optimized for a variety of workloads. This eliminated both capital and operational expenditures and is expected to save about $15 million over three years. The more compact and efficient zEnterprise landscape also means lower costs in the future. Specifically, Nationwide is realizing an 80% reduction in power, cooling, and floor space despite an application workload that is growing 30% annually, practically all of it handled through the provisioning of new virtual servers on the existing mainframe footprint.

The City and County of Honolulu needed to increase government transparency by providing useful, timely data to its citizens. The goal was to boost citizen involvement, improve delivery of services, and increase the efficiency of city operations.

Honolulu built its cloud using an IFL engine running Linux on the city’s z10 EC machine. Between Linux and IBM z/VM the city created a customized cloud environment. This provided a scalable self-service platform on which city employees could develop open source applications, and it empowered the general public to create and deploy citizen-centric applications.

The results: reduction in application deployment time from one week to only hours and 68% lower licensing costs for one database. The resulting new property tax appraisal system increased tax revenue by $1.4 million in just three months.

You can do a similar multi-platform private cloud with IBM PureSystems. In either case the machines arrive ready for private cloud computing. Or else you can piece together x86 servers and components and do it yourself, which entails a lot more work, time, and risk.


BMC Mainframe Survey Bolsters z-Hybrid Computing

For the seventh year, BMC conducted a survey of mainframe shops worldwide. Clearly the mainframe not only isn’t dead but is growing in the shops where it is deployed.  Find a copy of the study here and a video explaining it here.

Distributed systems shops may be surprised by the results but not those familiar with the mainframe. Key results:

  • 90% of respondents consider the mainframe to be a long-term solution, and 50% expect it will attract new workloads.
  • Keeping IT costs down remains the top priority—not exactly shocking—as 69% report cost as a major focus, up from 60% in 2011.
  • 59% expect MIPS capacity to grow as they modernize and add applications to address expanding business needs.
  • More than 55% reported a need to integrate the mainframe into enterprise IT systems comprised of multiple mainframe and distributed platforms.

The last point suggests IBM is on the right track with hybrid computing. Hybrid computing is IBM’s term for extremely tightly integrated multi-platform computing managed from a single console (on the mainframe) as a single virtualized system. It promises significant operational efficiency over deploying and managing multiple platforms separately.

IBM also is on the right track in terms of keeping costs down. One mainframe trick is to lower costs by maximizing the use of mainframe specialty engines, reducing consumption of costly general processor (GP) MIPS. Specialty engines are processors optimized for specific workloads, such as Java, Linux, or databases. The specialty engine advantage continues with the newest zEC12, which incorporates the same 20% price/performance boost, essentially more MIPS bang for the buck.
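
A rough sketch of the specialty engine arithmetic: monthly license charges (MLC) track GP MIPS, so every eligible MIPS moved to a zIIP, zAAP, or IFL drops out of the software bill. The rate and eligible fraction below are assumptions for illustration only.

    # Toy model: offloading eligible MIPS from GPs to specialty engines
    # cuts MLC software charges, which are calculated against GP MIPS.
    GP_MIPS_RATE = 100      # assumed monthly software cost per GP MIPS ($)
    total_mips = 3_000
    eligible = 0.40         # assumed share of work eligible for offload

    before = total_mips * GP_MIPS_RATE
    after = total_mips * (1 - eligible) * GP_MIPS_RATE  # offloaded MIPS drop out

    print(f"monthly MLC before offload: ${before:,}")
    print(f"monthly MLC after offload:  ${after:,.0f}")
    print(f"software savings: {eligible:.0%}")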

Two-thirds of the respondents were using at least one specialty engine. Of all respondents, 16% were using five or more engines, a few using dozens.  Not only do specialty engines deliver cheaper MIPS but they often are not considered in calculating software licensing charges, which lowers the cost even more.

About the only change noticeable in responses year-to-year is the jump in the respondent ranking of IT priorities. This year Business/IT alignment jumped from 7th to 4th. Priorities 1, 2, and 3 (Cost Reduction, Disaster Recovery, and Application Modernization respectively) remained the same.  Priorities 5 and 6 (Efficient Use of MIPS and Reduced Impact of Outages respectively) fell from a tie for 4th last year.

The greater emphasis on Business/IT alignment isn’t exactly new. Industry gurus have been harping on it for years.  Greater alignment between business and IT also suggests a strong need for hybrid computing, where varied business workloads can be mixed yet still be treated as a single system from the standpoint of efficiency management and operations. It also suggests IT needs to pay attention to business services management.

Actually, there was another surprise. Despite the mainframe’s reputation for rock solid availability and reliability, the survey noted that 39% of respondents reported unplanned outages. The primary causes for the outages were hardware failure (cited by 31% of respondents), system software failure (30%), in-house app failure (28%), and failed change process (22%). Of the respondents reporting outages, only 10% noted that the outage had significant impact. This was a new survey question this year so there is no comparison to previous years.

Respondents (59%) expect MIPS usage to continue to grow. Of that growth, 31% attributed it to increases in both legacy and new apps, while 9% attributed it to new apps alone and 19% cited legacy apps alone.

In terms of modernizing apps, 46% of respondents planned to extend legacy code through SOA and web services while 43% wanted to increase the flexibility and agility of core apps.  Thirty-four percent of respondents hoped to reduce legacy app support costs through modernization.

Maybe the most interesting data point came where 60% of the respondents agreed that the mainframe needed to be a good IT citizen supporting varied workloads across the enterprise. That’s really what zEnterprise hybrid computing is about.


Meet the Newest Mainframe—zEnterprise EC12

Last month IBM launched the zEnterprise EC12 (zEC12). As you would expect from the next release of the top-of-the-line mainframe, the zEC12 delivers faster speed and better price/performance. With a 5.5 GHz core processor, up from 5.2 GHz in its predecessor (the z196), and an increase in the number of cores per chip (from 4 to 6), IBM calculates it delivers 50% more total capacity in the same footprint. The zEC12 won't come cheap, but on a cost per MIPS basis it's probably the best value around.

More than just performance, it adds two major new capabilities, IBM zAware and Flash Express, and a slew of other hardware and software optimizations. The two new features, IBM zAware and Flash Express, both promise to be useful, but neither is a game changer. zAware is an analytics capability embedded in firmware. It is intended to monitor the entire zEnterprise system for the purpose of identifying problems before they impact operations.

Flash Express consists of a pair of memory cards installed in the zEC12 that amount to a new tier of memory. Flash Express is designed to streamline memory paging when transitioning between workloads. It will moderate workload spikes and eliminate the need to page to disk, which should boost performance.

This machine is intended, initially, for shops with the most demanding workloads and no margin for error. The zEC12 also continues IBM’s hybrid computing thrust by including the zBX and new capabilities from System Director to be delivered through Unified Resource Manager APIs for better management of virtualized servers running on zBX blades.

This is a stunningly powerful machine, especially coming just 25 months after the z196 introduction. The zEC12 is intended for optimized corporate data serving. Its 101 configurable cores deliver a performance boost for all workloads. The zEC12 also comes with the usual array of assist processors, which are just configurable cores with the assist personality loaded on. Since they are zEC12 cores, they bring a 20% MIPS price/performance boost.

The directly competitive alternatives from the other (non-x86) server vendors are pretty slow by comparison. Oracle offers its top SPARC-based T4 server that features a 3.0 GHz processor. HP’s Integrity Superdome comes with the Itanium processor and tops out at 1.86 GHz. No performance rivals here, at least until each vendor refreshes its line.

For performance, IBM estimates up to a 45% improvement in Java workloads, up to a 27% improvement in CPU-intensive integer and floating point C/C++ applications, up to 30% improvement in throughput for DB2 for z/OS operational analytics, and more than 30% improvement in throughput for SAP workloads. IBM has, in effect, optimized the zEC12 from top to bottom of the stack. DB2 applications are certain to benefit as will WebSphere and SAP.

IBM characterizes zEC12 pricing as follows:

  • Hardware—20% MIPS price/performance improvement for standard engines and specialty engines; Flash Express runs $125,000 per pair of cards (3.2 TB)
  • Software—updated pricing provides a 2%-7% MLC price/performance improvement for flat-capacity upgrades from the z196, and IFLs will maintain a PVU rating of 120 for software yet deliver 20% more MIPS
  • Maintenance—no less than a 2% price/performance improvement for standard MIPS and 20% on IFL MIPS

IBM is signaling price aggressiveness and flexibility to attract new shops to the mainframe and stimulate new workloads. The deeply discounted Solution Edition program will include the new machine. IBM also is offering financing with deferred payments through the end of the year in a coordinated effort to move these machines now.

As impressive as the zEC12 specifications and price/performance are, BottomlineIT is most impressed by the speed at which IBM delivered the machine. It broke with its historic 3-year release cycle to deliver this potent hybrid machine just 25 months after the z196 first introduced hybrid computing.
