Posts Tagged mainframe

Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers when compared with the public cloud, and at a somewhat higher VM count when compared with x86 machines. View the IBM z TCO presentation here.
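
To see how such a crossover point can arise, here is a minimal back-of-the-envelope sketch in Python. The fixed and per-VM cost figures in it are made-up placeholders, not IBM's numbers; the only idea carried over from the analysis is that the z front-loads cost into the box while the cloud charges per instance.

    # Hypothetical break-even sketch: at what VM count does a platform with a
    # high fixed cost but low per-VM cost undercut a pay-per-VM alternative?
    # All dollar figures below are illustrative placeholders, not IBM's data.

    def three_year_tco(fixed_cost, cost_per_vm_per_year, vms, years=3):
        """Total cost of ownership: one-time fixed cost plus recurring per-VM cost."""
        return fixed_cost + cost_per_vm_per_year * vms * years

    # Placeholder assumptions (hardware, software, and labor folded into the rates):
    Z_FIXED, Z_PER_VM = 1_500_000, 1_200      # big up-front box, cheap incremental VM
    CLOUD_FIXED, CLOUD_PER_VM = 0, 3_800      # no up-front cost, pricier per instance

    for vms in range(50, 501, 50):
        z = three_year_tco(Z_FIXED, Z_PER_VM, vms)
        cloud = three_year_tco(CLOUD_FIXED, CLOUD_PER_VM, vms)
        marker = "<-- z becomes cheaper" if z < cloud else ""
        print(f"{vms:4d} VMs: z=${z:,.0f}  cloud=${cloud:,.0f} {marker}")

With these placeholder rates the crossover lands just under 200 VMs, which happens to match the figure IBM cites; change the rates and the crossover moves accordingly.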

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but more than made up for it in software, labor, and power. Overall, the TCO examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37M, x86 came in at $18.3M, and the Cloud on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO at $1M compared to $1.6M for x86 systems and $3.9M for the public cloud.
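
As a quick sanity check, the savings percentages can be approximated from the dollar figures quoted above with a few lines of Python. The dollar amounts below are the rounded ones from this paragraph, so the computed percentages land close to, though not exactly on, the ranges IBM cites.

    # Reproduce rough savings percentages from the 3-year TCO figures quoted above.
    # The dollar amounts are rounded, so results only approximate IBM's stated ranges.
    scenarios = {
        "398 workloads": {"public cloud": 37_000_000, "x86": 18_300_000, "Cloud on z": 9_400_000},
        "48 workloads":  {"public cloud": 3_900_000,  "x86": 1_600_000,  "Cloud on z": 1_000_000},
    }

    for name, costs in scenarios.items():
        z = costs["Cloud on z"]
        for platform in ("public cloud", "x86"):
            saving = 1 - z / costs[platform]
            print(f"{name}: z is about {saving:.0%} cheaper than {platform}")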

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices, or run a different mix of high-, mid-, and low-I/O workloads, your results will be different, but the rankings probably won't change all that much.

Also, there is still time to register for IBM Edge2014 in Las Vegas. This blogger will be there, hanging around the bloggers' lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog


Big Data and Analytics as Game Changing Technology

If you ever doubted that big data was going to become important, there should be no doubt anymore. The recent headlines about the government capturing and analyzing massive amounts of daily phone call data should convince you.

That this report was shortly followed by more reports of the government tapping the big online data websites like Google, Yahoo, and such for even more data should alert you to three things:

1—There is a massive amount of data out there that can be collected and analyzed.

2—Companies are amassing incredible volumes of data in the normal course of serving people who readily and knowingly give their data to these organizations. (This blogger is one of those tens of millions.)

3—The tools and capabilities are mature enough for someone to sort through that data and connect the dots to deliver meaningful insights.

Particularly with regard to the last point, this blogger thought the industry was still five years away from generating meaningful results from that amount of data coming in at that velocity. Sure, marketers have been sorting and correlating large amounts of data for years, but it was mostly structured data and nowhere near this much of it. BTW, your blogger has been writing about big data for some time.

If the news reports weren’t enough, it became clear at IBM Edge 2013, wrapping up in Las Vegas this week, that big data analytics is happening now and that familiar companies are succeeding at it. It also is clear that there is sufficient commercial off-the-shelf computing power from companies like IBM and others, and analytics tools from a growing number of vendors, to sort through massive amounts of data and make sense of it fast.

An interesting point came up in one of the many discussions at Edge 2013 touching on big data. Every person’s data footprint is as unique as a fingerprint or other biometrics. We all visit different websites, interact with social media, and use our credit and debit cards in highly individual ways. Again, marketers have sensed this at some level for years, but they haven’t yet really honed it down to the actual individual on a mass scale, although there is no technical reason one couldn’t. You now can, in effect, market to a demographic of one.

A related conference is coming up Oct. 21-25 in Orlando, FL, called Enterprise Systems 2013. It will combine the System z and Power Systems Technical Universities along with a new executive-focused Enterprise Systems event. It will include new announcements, peeks into trends and directions, over 500 expert technical sessions across 10 tracks, and a comprehensive solution center. This blogger has already put it on his calendar.

There was much more interesting information at Edge 2013, such as using data analytics and cognitive computing to protect IT systems.  Perimeter defense, anti-virus, and ID management are no longer sufficient. Stay tuned.


Lessons from IBM Eagle TCO Analyses

A company running an obsolete z890 mainframe with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 UNIX servers. The production workload consisted of applications, database, testing, development, security, and more. Five years later, the company was running the same workload in the 36-server, multi-core (41x more cores than the z890) distributed environment, except that its 4-year TCO went from $4.9 million to $17.9 million, based on an IBM Eagle study. The lesson, the Eagle team notes: cores drive platform costs in distributed systems.
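
The swing in that case is easy to quantify. A minimal sketch using the study's rounded figures (nothing here is assumed beyond the numbers quoted above):

    # Quick arithmetic on the Eagle case above (figures from the study, rounded).
    tco_mainframe_4yr = 4.9e6      # 4-year TCO on the old z890
    tco_distributed_4yr = 17.9e6   # 4-year TCO on the 36-server UNIX farm

    growth = tco_distributed_4yr / tco_mainframe_4yr
    extra_per_year = (tco_distributed_4yr - tco_mainframe_4yr) / 4

    print(f"4-year TCO grew {growth:.1f}x after the migration")
    print(f"That is roughly ${extra_per_year:,.0f} of additional cost per year")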

Then there is the case of a 3500 MIPS mainframe shop that budgeted $10 million for a 1-year migration to a distributed environment. Eighteen months into the project, now six months behind schedule, the company had spent $25 million and only managed to offload 350 MIPS. In addition, it had to increase staff to cover the overrun, implement steps to replace mainframe automation, acquire additional distributed capacity beyond the initial prediction (to support only 10% of total MIPS offloaded), and extend the period of running the old and new systems in parallel at even more cost due to the schedule overrun. Not surprisingly, the executive sponsor is gone.

If the goal of a migration to a distributed environment is cost savings, most such migrations are a failure, the IBM Eagle team has concluded after three years of doing such analyses. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and since then reports completing over 300 user studies. Often its studies are used to determine the best platform among IBM’s various choices for a given set of workloads, usually as part of a Fit for Purpose analysis. In other cases, the Eagle analysis is aimed at enabling a System z shop to avoid a migration to a distributed platform. The Eagle team, in fact, is platform agnostic until it completes its quantitative analysis, when the resulting numbers generally make the decision clear.

Along the way, the Eagle team has learned a few lessons.  For example:  re-hosting projects tend to be larger than anticipated. The typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; new systems generally are more cost-efficient. For example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR’s MIPS by 5% (monthly software cost savings paid for the upgrade almost immediately)
  • Schedule workloads to take advantage of sub-capacity software pricing for platforms that offer it, which may produce free workloads
  • Consolidate workloads on Linux, which invariably saves money, especially when consolidating many Linux virtual servers on a mainframe IFL. (A recent debate on LinkedIn focused on how many virtual instances can run on an IFL, with some suggesting a maximum of 20. The official IBM figure: you can consolidate up to 60 distributed cores or more on a single System z core; a single System z core = an IFL. A rough sizing sketch follows this list.)
  • Changing the database can reduce capacity requirements, and with them hardware and software costs
  • Consider the IBM mainframe Solution Edition program, the best mainframe deal going; it enables you to acquire a new mainframe for workloads you have never run on a mainframe before at a deeply discounted package price that includes hardware, software, middleware, and 3 years of maintenance.
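
To make the consolidation tip above concrete, here is a minimal sizing sketch. The 60:1 core ratio is the IBM figure quoted in the list; every dollar figure and the 240-core starting point are hypothetical placeholders, and actual savings depend entirely on the software stack and the pricing you negotiate.

    import math

    # Rough IFL consolidation sizing. The 60:1 core ratio is the IBM figure cited
    # above; the core count and cost numbers are hypothetical placeholders.
    DISTRIBUTED_CORES = 240                 # cores to be consolidated (example)
    CORES_PER_IFL = 60                      # IBM's "up to 60 distributed cores per IFL"
    SW_COST_PER_DISTRIBUTED_CORE = 8_000    # hypothetical annual license + support
    SW_COST_PER_IFL = 60_000                # hypothetical annual cost per IFL engine

    ifls_needed = math.ceil(DISTRIBUTED_CORES / CORES_PER_IFL)
    before = DISTRIBUTED_CORES * SW_COST_PER_DISTRIBUTED_CORE
    after = ifls_needed * SW_COST_PER_IFL

    print(f"{DISTRIBUTED_CORES} distributed cores -> {ifls_needed} IFLs at a 60:1 ratio")
    print(f"Hypothetical annual software: ${before:,.0f} before vs ${after:,.0f} after "
          f"({1 - after / before:.0%} lower at these placeholder rates)")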

BottomlineIT generally is skeptical of TCO analyses from vendors. To be useful, the analysis needs to include full context, technical details (components, release levels, and prices), and specific quantified benchmark results. In addition, there are soft costs that must be considered. Eagle analyses generally do that.

In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.


Achieving the Private Cloud Business Payoff Fast

Nationwide Insurance avoided significant capital and operational expenditures by building a private cloud and expects to save about $15 million over three years. In addition, it expects the more compact and efficient private cloud landscape to mean lower costs in the future.

The City of Honolulu turned to a private cloud and reduced application deployment time from one week to only hours. It also reduced the licensing cost of one database by 68%. Better still, a new property tax appraisal system resulted in $1.4 million of increased tax revenue in just three months.

The private cloud market, especially among larger enterprises, is strong and is expected to show a CAGR of 21.5% through 2015, according to research distributed by ReportLinker.com. Another report, from Renub Research, quotes analysts saying security is a big concern for enterprises that may be considering the use of the public cloud. For such organizations, the private cloud represents an alternative with a tighter security model that would enable their IT managers to control the building, deployment, and management of those privately owned, internal clouds.

Nationwide and Honolulu each built their private clouds on the IBM mainframe. Since its introduction last August, IBM has aimed the zEC12 at cloud use cases, especially private clouds. The zEC12’s massive virtualization capabilities make it possible to handle private cloud environments consisting of thousands of distributed systems running Linux on the zEC12.

One zEC12, notes IBM, can encompass the capacity of an entire multi-platform data center in a single system. The newest z also enables organizations to run conventional IT workloads and private cloud applications on one system.  Furthermore, if you are looking at a zEC12 coupled with the zBX (extension cabinet) you can have a multi-platform private cloud running Linux, Windows, and AIX workloads.  On a somewhat smaller scale, you can build a multi-platform private cloud using the IBM PureSystems machines.

Organizations everywhere are adopting private clouds. The Open Data Center Alliance reports faster private cloud adoption than originally predicted. Over half of its survey respondents will be running more than 40% of their IT operations in private clouds by 2015.

Mainframes make a particularly good private cloud choice. Nationwide, the insurance company, initially planned to consolidate 3000 distributed servers to Linux virtual servers running on several z mainframes, creating a multi-platform private mainframe cloud optimized for its different workloads. The goal was to improve efficiency.

The key benefit: higher utilization and better economies of scale, effectively making the mainframes into a unified private cloud—a single set of resources, managed with the same tools but optimized for a variety of workloads. This avoided both capital and operational expenditures and is expected to save about $15 million over three years. The more compact and efficient zEnterprise landscape also means lower costs in the future. Specifically, Nationwide is realizing an 80% reduction in power, cooling, and floor space despite an application workload that is growing 30% annually, with practically all of it handled through the provisioning of new virtual servers on the existing mainframe footprint.

The City and County of Honolulu needed to increase government transparency by providing useful, timely data to its citizens. The goal was to boost citizen involvement, improve delivery of services, and increase the efficiency of city operations.

Honolulu built its cloud using an IFL engine running Linux on the city’s z10 EC machine. With Linux and IBM z/VM, the city created a customized cloud environment. This provided a scalable self-service platform on which city employees could develop open source applications, and it empowered the general public to create and deploy citizen-centric applications.

The results: reduction in application deployment time from one week to only hours and 68% lower licensing costs for one database. The resulting new property tax appraisal system increased tax revenue by $1.4 million in just three months.

You can do a similar multi-platform private cloud with IBM PureSystems. In either case the machines arrive ready for private cloud computing. Or else you can piece together x86 servers and components and do it yourself, which entails a lot more work, time, and risk.


BMC Mainframe Survey Bolsters z Hybrid Computing

For the seventh year, BMC conducted a survey of mainframe shops worldwide. Clearly the mainframe not only isn’t dead but is growing in the shops where it is deployed.  Find a copy of the study here and a video explaining it here.

Distributed systems shops may be surprised by the results but not those familiar with the mainframe. Key results:

  • 90% of respondents consider the mainframe to be a long-term solution, and 50% expect it will attract new workloads.
  • Keeping IT costs down remains the top priority—not exactly shocking—as 69% report cost as a major focus, up from 60% in 2011.
  • 59% expect MIPS capacity to grow as they modernize and add applications to address expanding business needs.
  • More than 55% reported a need to integrate the mainframe into enterprise IT systems composed of multiple mainframe and distributed platforms.

The last point suggests IBM is on the right track with hybrid computing. Hybrid computing is IBM’s term for extremely tightly integrated multi-platform computing managed from a single console (on the mainframe) as a single virtualized system. It promises significant operational efficiency over deploying and managing multiple platforms separately.

IBM also is on the right track in terms of keeping costs down. One mainframe trick is to maximize the use of specialty engines to reduce consumption of costly general purpose (GP) MIPS. Specialty engines are processors optimized for specific workloads, such as Java, Linux, or databases. The specialty engine advantage continues with the newest zEC12, which incorporates the same 20% price/performance boost, essentially more MIPS bang for the buck.

Two-thirds of the respondents were using at least one specialty engine. Of all respondents, 16% were using five or more engines, with a few using dozens. Not only do specialty engines deliver cheaper MIPS, but they often are not considered in calculating software licensing charges, which lowers the cost even more.
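
The cost mechanics are simple enough to sketch. In the toy calculation below the MIPS figure, the eligible fraction, and the per-MIPS software rate are all hypothetical placeholders; the only point carried over from the discussion above is that work shifted to a specialty engine stops consuming the GP MIPS that drive software charges.

    # Toy illustration of the specialty-engine effect on monthly software charges.
    # All numbers are hypothetical placeholders; real MLC pricing is far more involved.
    TOTAL_MIPS = 3_000                 # hypothetical GP capacity in use
    ELIGIBLE_FRACTION = 0.35           # share of work (e.g., Java) that could move to specialty engines
    SW_COST_PER_GP_MIPS_MONTH = 100    # hypothetical blended software charge per GP MIPS

    gp_mips_after = TOTAL_MIPS * (1 - ELIGIBLE_FRACTION)
    before = TOTAL_MIPS * SW_COST_PER_GP_MIPS_MONTH
    after = gp_mips_after * SW_COST_PER_GP_MIPS_MONTH

    print(f"GP MIPS driving software charges: {TOTAL_MIPS} -> {gp_mips_after:.0f}")
    print(f"Hypothetical monthly software bill: ${before:,.0f} -> ${after:,.0f} "
          f"({(before - after) / before:.0%} lower)")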

About the only noticeable year-to-year change is in how respondents ranked IT priorities. This year Business/IT alignment jumped from 7th to 4th. Priorities 1, 2, and 3 (Cost Reduction, Disaster Recovery, and Application Modernization, respectively) remained the same. Priorities 5 and 6 (Efficient Use of MIPS and Reduced Impact of Outages, respectively) fell from a tie for 4th last year.

The greater emphasis on Business/IT alignment isn’t exactly new. Industry gurus have been harping on it for years. Greater alignment between business and IT also suggests a strong need for hybrid computing, where varied business workloads can be mixed yet still be treated as a single system from the standpoint of efficiency, management, and operations. It also suggests IT needs to pay attention to business services management.

Actually, there was another surprise. Despite the mainframe’s reputation for rock solid availability and reliability, the survey noted that 39% of respondents reported unplanned outages. The primary causes for the outages were hardware failure (cited by 31% of respondents), system software failure (30%), in-house app failure (28%), and failed change process (22%). Of the respondents reporting outages, only 10% noted that the outage had significant impact. This was a new survey question this year so there is no comparison to previous years.

Respondents (59%) expect MIPS usage to continue to grow. Of that growth, 31% attributed it to increases in both legacy and new apps, while 9% attributed it to new apps and 19% to legacy apps.

In terms of modernizing apps, 46% of respondents planned to extend legacy code through SOA and web services while 43% wanted to increase the flexibility and agility of core apps.  Thirty-four percent of respondents hoped to reduce legacy app support costs through modernization.

Maybe the most interesting data point was that 60% of the respondents agreed the mainframe needs to be a good IT citizen supporting varied workloads across the enterprise. That’s really what zEnterprise hybrid computing is about.


Mainframe Declared Dead Again

Since the early 1990s IT pundits have been declaring the mainframe dead. With almost every IT advance (x86 virtualization, multi-core x86 processors, cloud computing) some IT analyst announces the end of the mainframe. Those declarations have slowed since IBM introduced the hybrid zEnterprise 196 in July 2010 and the mainframe experienced a series of impressive quarterly sales gains.

That’s what makes this latest mainframe obituary so surprising. A white paper from Micro Focus reports that “CIOs are increasingly questioning whether the mainframe will continue to be a strategic platform in the future.” Written by Standish Group and based on its CIO survey, the paper found that 70% of respondents said the mainframe provides a central, strategic role currently. However, none of the CIOs consider the mainframe a strategic platform in 5-10 years’ time.

None? Zero? Nada? That’s pretty astonishing. So, what are the CIOs’ complaints? The study isn’t exactly specific, but it seems to have to do with the cloud.

As Standish puts it: looming large on the CIO agenda is cloud. Cloud creates both challenge and opportunity for CIOs today. The opportunity lies in driving towards more flexible, cost-effective service provision for the business, enabling in-house IT resources to focus on much more strategic initiatives. At the same time, CIOs are managing a host of current technologies and applications: some applications duplicating others, some legacy applications for which there appears to be no easy modernization solution, and entrenched solutions and applications that provide no clear journey to cloud-based services.

Whoa, let’s parse that sentence. As Standish sees it, CIOs will look to the cloud for flexibility and cost-effective service provisioning that frees IT to focus strategically. Based on that, you could just as easily build the case for the zEnterprise, starting with the entry z114 and the Unified Resource Manager.

But that’s not really the issue; IT modernization is.  The researchers note that the need to address legacy mainframe applications effectively is a critical success factor. Furthermore Standish observes: these applications are still used in the organization today [which] emphasizes their business importance, and often there is a high level of intellectual capital embedded within these systems.

OK, so the real complaint is around leveraging legacy applications as valuable software assets. Fortunately CIOs can do this without undertaking a rip-and-replace of the mainframe. SOA is one place to start. SOA provides a way to extract business logic from mainframe apps and expose it as services. The mainframe does SOA very well. Independent Assessment, the publisher of BottomlineIT, has written a number of case studies on mainframe SOA. Check it out here and here.

Standish digs up a few other complaints about the mainframe, such as the shortage of mainframe skills and the high cost of mainframe computing. These are old complaints, and much is being done to address them. With the z114 and the System z Solution Edition Programs, IBM is even putting a dent in the cost-of-acquisition issue.

Then the paper offers this intriguing complaint: Being forced into a decision to move from unsupported mainframe environments to continue operations and meet new performance levels. Huh? If you are seeking to meet a variety of new operational and performance levels while efficiently managing and supporting it all, the hybrid zEnterprise seems made to order with z/OS, z/VM, Linux on z, specialty engines, AIX on Power blades, and soon x86 on a blade. Standish seems oblivious to all the changes the mainframe has undergone since the introduction of the zEnterprise over a year ago.

This, however, is a Micro Focus paper, so Standish isn’t interested in looking at how mainframe shops can leverage what IBM has been building into the mainframe and zEnterprise over the last few years. Yet, to position themselves for cloud computing and private clouds, and to meet the CIOs’ three reported top objectives for 2014—1) increasing enterprise growth, 2) improving operations, and 3) attracting and retaining new customers—the zEnterprise is exactly what they should be looking at. Instead, Standish recommends rehosting and migrating applications, and how best to do that is with, of course, Micro Focus.

Not every organization or workload should have a mainframe. Many don’t. Similarly, there are situations that can be best dealt with by migrating mainframe applications to a different platform, but cloud computing probably is not one of those because the mainframe can play very well in the cloud. It would have been nice if Standish had focused on those workloads and situations that make sense to rehost and migrate while at least acknowledging the new hybrid mainframe world.

Please note: DancingDinosaur will be unavailable next week and not able to moderate comments until 9/17.


Time to Rethink Disaster Recovery

Disaster recovery (DR) has been challenging from the start, and it certainly isn’t getting any easier. Backup to disk has simplified some aspects of DR while virtualization helps in some ways and complicates it in others.

Large systems running mission critical workloads present a particularly difficult and costly DR challenge. Companies needing to meet very short recovery point and recovery time objectives (RPO and RTO, measured in seconds) typically have had to invest in pairs of systems set up as synchronized mirrors with synchronous replication. It works, but it is costly, and synchronous replication presents distance constraints.

For mainframes, the Geographically Dispersed Parallel Sysplex (GDPS) has been IBM’s primary DR vehicle. A recent IBM announcement expanded on the GDPS options primarily by adding remote asynchronous replication to greatly extend the distance between the paired systems.

DR at this level revolves around system clustering technology. You set up two systems, one as a mirror of the other, and update the data synchronously or asynchronously. When the primary system fails, you bring up the other and resume working as before. How you define your RPO and RTO determines how quickly you can resume operations following a failure and with how much data lag or loss.
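
One simple way to reason about those two numbers: RPO is bounded by how far the replica lags the primary at the moment of failure, and RTO by how long it takes to detect the failure and bring the secondary into service. The sketch below just adds those pieces up; the timing values are placeholders for illustration, not GDPS figures.

    # Back-of-the-envelope RPO/RTO model for a mirrored pair. All timings are
    # hypothetical placeholders; real values depend on the replication and
    # failover technology in use.

    def estimate_rpo_seconds(replication_lag_s):
        """Worst-case data loss: whatever had not yet reached the replica."""
        return replication_lag_s

    def estimate_rto_seconds(detect_s, failover_or_restart_s, app_recovery_s):
        """Time until service resumes after the primary fails."""
        return detect_s + failover_or_restart_s + app_recovery_s

    # Synchronous mirror with automated switchover (placeholder timings)
    print("sync, active/active :", estimate_rpo_seconds(0), "s RPO,",
          estimate_rto_seconds(10, 20, 30), "s RTO")

    # Asynchronous mirror that must restart the workload at the recovery site
    print("async, restart model:", estimate_rpo_seconds(5), "s RPO,",
          estimate_rto_seconds(60, 40 * 60, 10 * 60), "s RTO")

With these placeholder timings the restart path lands in the 30-60 minute range discussed below, while the active/active path stays in the tens of seconds.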

Until now synchronous replication let you hit your tightest RPO and RTO. Synchronous replication, however, entails distance constraints that make it inappropriate for many organizations. It’s also quite expensive.

Asynchronous replication, however, is not bound by synchronous distance constraints. IBM offers GDPS/XRC and GDPS/GM, based upon asynchronous disk replication with unlimited distance. The current GDPS async replication products, however, require the failed site’s workload to be restarted at the recovery site, which typically will take 30-60 min. This will not satisfy organizations that require an RTO of seconds.

In its latest announcement IBM presents GDPS active/active continuous availability as the next generation of GDPS. This represents a shift from the failover model, where systems go down and can be brought online at the failover site in a few hours, to a near-continuous availability model, where the system can be brought back online in an hour or less. IBM describes the latest enhancements as combining the best attributes of the existing suite of GDPS services and expanding them to allow unlimited distances between your data center sites with the RTO measured in minutes. With its new GDPS offerings, IBM promises near-continuous availability, meaning it can meet an RTO of tens of seconds.

Non-mainframe shops generally follow similar DR strategies using mirrored pairs of servers, monitoring and sensing software to detect a system failure, and switchover software. To hit the tightest RTO, you will set up your cluster as an active/active pair.

Of course, not every organization needs fast RTO. In that case, it can dispense with mirror systems altogether and rely on traditional tape backup and recovery to a standby site.

The concern with RTO usually focuses on the organization’s primary transaction production systems. But with the cloud, organizations might begin to rethink what they deem mission-critical and how it should be backed up. Maybe they don’t have to think about mirrored system clusters at all. Maybe the mission-critical systems to be protected aren’t even production transaction systems.
