Posts Tagged System z

Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z, as an enterprise Linux server, could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO. The analysis was reasonably evenhanded, although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers compared to the public cloud, and at slightly more VMs compared to x86 machines. View the IBM z TCO presentation here.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse workloads (a range of low, medium, and high I/O). In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware, but it more than made up for it in software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z configuration included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both configurations included labor to manage the hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance, and the pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE Linux, EBS volumes, data in/out, enterprise support, and free- and reserved-tier discounts applied. Software costs included WebSphere Application Server ND (middleware) for the instances, and a labor cost was included for managing them.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor costs were far less. In terms of 3-year TCO, the public cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud System on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO at $1 million, compared to $1.6 million for the x86 systems and $3.9 million for the public cloud.
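
For readers who want to see where those percentages come from, here is a minimal sketch (in Python) of the arithmetic using only the rounded 3-year totals quoted above. IBM's per-category breakdowns (hardware, software, labor, power/space) are not reproduced here, and small differences from IBM's published 34-73% and 49-75% ranges are just rounding.

```python
# Back-of-the-envelope check of the 3-year TCO gaps quoted above.
# Only the rounded totals from the post are used; IBM's per-category
# breakdowns (hardware, software, labor, power/space) are not shown.

tco_398_workloads = {                      # 398 I/O-diverse workloads
    "public cloud": 37_000_000,
    "x86 cloud": 18_300_000,
    "Enterprise Cloud System on z": 9_400_000,
}

tco_48_workloads = {                       # 48 workloads
    "public cloud": 3_900_000,
    "x86 cloud": 1_600_000,
    "Enterprise Cloud System on z": 1_000_000,
}

def z_advantage(tco: dict) -> None:
    """Print the z configuration's cost advantage versus each alternative."""
    z_cost = tco["Enterprise Cloud System on z"]
    for platform, cost in tco.items():
        if platform == "Enterprise Cloud System on z":
            continue
        print(f"vs. {platform}: {1 - z_cost / cost:.0%} lower 3-year TCO")

z_advantage(tco_398_workloads)   # ~75% vs. public cloud, ~49% vs. x86
z_advantage(tco_48_workloads)    # ~74% vs. public cloud, ~38% vs. x86 (rounding)
```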

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices, or run a different mix of high-, mid-, and low-I/O workloads, your results will be different, but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there, hanging around the bloggers’ lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog

Big Data and Analytics as Game-Changing Technology

If you ever doubted that big data was going to become important, there should be no doubt anymore. Recent headlines over the past couple of weeks about the government capturing and analyzing massive amounts of daily phone call data should convince you.

That this report was shortly followed by more reports of the government tapping big online data companies like Google and Yahoo for even more data should alert you to three things:

1—There is a massive amount of data out there that can be collected and analyzed.

2—Companies are amassing incredible volumes of data in the normal course of serving people who readily and knowingly give their data to these organizations. (This blogger is one of those tens of millions.)

3—The tools and capabilities are mature enough for someone to sort through that data and connect the dots to deliver meaningful insights.

Particularly with regard to the last point, this blogger thought the industry was still five years away from generating meaningful results from that amount of data coming in at that velocity. Sure, marketers have been sorting and correlating large amounts of data for years, but it was mostly structured data and nowhere near this much of it. BTW, your blogger has been writing about big data for some time.

If the news reports weren’t enough, it became clear at IBM Edge 2013, wrapping up in Las Vegas this week, that big data analytics is happening and that familiar companies are succeeding at it now. It also is clear that there is sufficient commercial off-the-shelf computing power from companies like IBM and others, and analytics tools from a growing number of vendors, to sort through massive amounts of data and make sense of it fast.

An interesting point came up in one of the many discussions at Edge 2013 touching on big data: every person’s data footprint is as unique as a fingerprint or other biometrics. We all visit different websites, interact with social media, and use our credit and debit cards in highly individual ways. Again, marketers have sensed this at some level for years, but they haven’t yet really honed it down to the actual individual on a mass scale, although there is no technical reason one couldn’t. You now can, in effect, market to a demographic of one.

A related conference, Enterprise Systems 2013, is coming up Oct. 21-25 in Orlando, FL. It will combine the System z and Power Systems Technical Universities along with a new executive-focused Enterprise Systems event. It will include new announcements, peeks into trends and directions, over 500 expert technical sessions across 10 tracks, and a comprehensive solution center. This blogger has already put it on his calendar.

There was much more interesting information at Edge 2013, such as using data analytics and cognitive computing to protect IT systems.  Perimeter defense, anti-virus, and ID management are no longer sufficient. Stay tuned.

Lessons from IBM Eagle TCO Analyses

A company running an obsolete z890 mainframe with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 distributed UNIX servers. The production workload consisted of applications, database, testing, development, security, and more. Five years later, the company was running the same workload in the 36-server, multi-core (41x more cores than the z890) distributed environment, except that its 4-year TCO had gone from $4.9 million to $17.9 million, based on an IBM Eagle study. The lesson, the Eagle team notes: cores drive platform costs in distributed systems.

Then there is the case of a 3500 MIPS mainframe shop that budgeted $10 million for a 1-year migration to a distributed environment. Eighteen months into the project, now six months behind schedule, the company had spent $25 million and had only managed to offload 350 MIPS. In addition, it had to increase staff to cover the overrun, implement steps to replace mainframe automation, acquire more distributed capacity than initially predicted (to support only 10% of the total MIPS to be offloaded), and extend the period of running the old and new systems in parallel, at even more cost, due to the schedule overrun. Not surprisingly, the executive sponsor is gone.
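
To make the scale of that overrun concrete, here is a rough, illustrative calculation (mine, not the Eagle team’s) of what those figures imply per MIPS offloaded.

```python
# Rough arithmetic implied by the anecdote above (illustrative only,
# not an Eagle calculation): planned vs. actual cost per MIPS offloaded.

total_mips = 3500            # mainframe capacity targeted for offload
planned_budget = 10_000_000  # budget for the planned 1-year migration
spent_at_18_months = 25_000_000
mips_offloaded = 350         # 10% of the total, 18 months in

planned_cost_per_mips = planned_budget / total_mips          # ~$2,857
actual_cost_per_mips = spent_at_18_months / mips_offloaded   # ~$71,429

print(f"planned: ${planned_cost_per_mips:,.0f} per MIPS offloaded")
print(f"actual:  ${actual_cost_per_mips:,.0f} per MIPS offloaded")
print(f"run-rate overrun so far: {actual_cost_per_mips / planned_cost_per_mips:.0f}x")
```

Extrapolating that run rate to the remaining MIPS would be unfair, since the early work is often the hardest, but the roughly 25x gap between plan and actual run rate illustrates the Eagle team’s point that these projects tend to be much larger than anticipated.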

If the goal of a migration to a distributed environment is cost savings, most such migrations are a failure, the IBM Eagle team has concluded after three years of doing these analyses. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and since then reports completing over 300 user studies. Often its studies are used to determine the best platform among IBM’s various choices for a given set of workloads, usually as part of a Fit for Purpose analysis. In other cases, the Eagle analysis is aimed at enabling a System z shop to avoid a migration to a distributed platform. The Eagle team, in fact, is platform agnostic until it completes its quantitative analysis, when the resulting numbers generally make the decision clear.

Along the way, the Eagle team has learned a few lessons.  For example:  re-hosting projects tend to be larger than anticipated. The typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; new systems generally are more cost-efficient. For example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR’s MIPS by 5% (the monthly software cost savings paid for the upgrade almost immediately)
  • Schedule workloads to take advantage of sub-capacity software pricing for platforms that offer it, which may produce free workloads
  • Consolidate workloads on Linux, which invariably saves money, especially when consolidating many Linux virtual servers on a mainframe IFL. (A recent LinkedIn debate focused on how many virtual instances can run on an IFL, with some suggesting a maximum of 20. The official IBM figure: you can consolidate up to 60 distributed cores or more on a single System z core, and a single System z core is an IFL. See the sketch after this list.)
  • Changing the database can reduce capacity requirements, which in turn lowers both hardware and software costs
  • Consider the IBM mainframe Solution Edition program, the best mainframe deal going; it enables you to acquire a new mainframe, for workloads you’ve never run on a mainframe before, at a deeply discounted package price that includes hardware, software, middleware, and 3 years of maintenance.
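
As promised in the consolidation bullet above, here is a simple what-if sketch of the IFL consolidation arithmetic. Only the 60:1 and 20:1 core ratios come from the discussion above; the distributed core count and the per-core annual software cost are hypothetical placeholders chosen purely for illustration.

```python
# What-if consolidation arithmetic for the Linux-on-IFL bullet above.
# The 60:1 (IBM) and 20:1 (LinkedIn skeptics) ratios come from the post;
# the core count and per-core software cost below are hypothetical.

import math

distributed_cores = 240                # hypothetical x86 estate to consolidate
ibm_ratio = 60                         # distributed cores per IFL (IBM figure)
skeptic_ratio = 20                     # the more conservative LinkedIn figure
sw_cost_per_core = 3_000               # hypothetical annual per-core licensing

ifls_at_ibm_ratio = math.ceil(distributed_cores / ibm_ratio)          # 4 IFLs
ifls_at_skeptic_ratio = math.ceil(distributed_cores / skeptic_ratio)  # 12 IFLs

print(f"IFLs needed at 60:1: {ifls_at_ibm_ratio}")
print(f"IFLs needed at 20:1: {ifls_at_skeptic_ratio}")

# Much distributed software is licensed per core, so shrinking the core
# count is where the savings tends to show up, whichever ratio you believe.
print(f"distributed per-core software bill: "
      f"${distributed_cores * sw_cost_per_core:,} per year")
```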

BottomlineIT generally is skeptical of TCO analyses from vendors. To be useful, the analysis needs to include full context, technical details (components, release levels, and prices), and specific quantified benchmark results. In addition, there are soft costs that must be considered. Eagle analyses generally do all of that.

In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.

Achieving the Private Cloud Business Payoff Fast

Nationwide Insurance eliminated both capital and operational expenditures through a private cloud and expects to save about $15 million over three years. In addition, it expects the more compact and efficient private cloud landscape to mean lower costs in the future.

The City of Honolulu turned to a private cloud and reduced application deployment time from one week to only hours. It also reduced the licensing cost of one database by 68%. Better still, a new property tax appraisal system resulted in $1.4 million of increased tax revenue in just three months.

The private cloud market, especially among larger enterprises, is strong and is expected to show a CAGR of 21.5% through 2015, according to research distributed by ReportLinker.com. Another report, from Renub Research, quotes analysts saying security is a big concern for enterprises that may be considering the use of a public cloud. For such organizations, the private cloud represents an alternative with a tighter security model that would enable their IT managers to control the building, deployment, and management of those privately owned, internal clouds.

Nationwide and Honolulu each built their private clouds on the IBM mainframe. Since its introduction last August, IBM has aimed the zEC12 at cloud use cases, especially private clouds. The zEC12’s massive virtualization capabilities make it possible to handle private cloud environments consisting of thousands of distributed systems running Linux on the zEC12.

One zEC12, notes IBM, can encompass the capacity of an entire multi-platform data center in a single system. The newest z also enables organizations to run conventional IT workloads and private cloud applications on one system.  Furthermore, if you are looking at a zEC12 coupled with the zBX (extension cabinet) you can have a multi-platform private cloud running Linux, Windows, and AIX workloads.  On a somewhat smaller scale, you can build a multi-platform private cloud using the IBM PureSystems machines.

Organizations everywhere are adopting private clouds.  The Open Data Center Alliance reports faster private cloud adoption than originally predicted. Over half its survey respondents will be running more than 40% of their IT operations in private clouds by 2015.

Mainframes make a particularly good private cloud choice. Nationwide, the insurance company, initially planned to consolidate 3000 distributed servers to Linux virtual servers running on several z mainframes, creating a multi-platform private mainframe cloud optimized for its different workloads. The goal was to improve efficiency.

The key benefit: higher utilization and better economies of scale, effectively making the mainframes into a unified private cloud—a single set of resources, managed with the same tools but optimized for a variety of workloads. This eliminated both capital and operational expenditures and is expected to save about $15 million over three years. The more compact and efficient zEnterprise landscape also means lower costs in the future. Specifically, Nationwide is realizing an 80% reduction in power, cooling, and floor space despite an application workload that is growing 30% annually, and practically all of it is handled through the provisioning of new virtual servers on the existing mainframe footprint.
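
The growth figure is worth pausing on. A minimal compounding sketch (nothing here beyond the 30% annual growth rate quoted above) shows why absorbing that growth without adding physical footprint is notable.

```python
# Simple compounding sketch of the growth claim above: a workload growing
# 30% per year more than doubles in three years, yet Nationwide reports
# absorbing it on the existing mainframe footprint by provisioning new
# virtual servers rather than new physical boxes.

annual_growth = 0.30
for year in range(1, 4):
    factor = (1 + annual_growth) ** year
    print(f"year {year}: workload at {factor:.2f}x the starting level")
# year 3 comes out at roughly 2.2x
```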

The City and County of Honolulu needed to increase government transparency by providing useful, timely data to its citizens. The goal was to boost citizen involvement, improve delivery of services, and increase the efficiency of city operations.

Honolulu built its cloud using an IFL engine running Linux on the city’s z10 EC machine. With Linux and IBM z/VM, the city created a customized cloud environment. This provided a scalable self-service platform on which city employees could develop open source applications, and it empowered the general public to create and deploy citizen-centric applications.

The results: reduction in application deployment time from one week to only hours and 68% lower licensing costs for one database. The resulting new property tax appraisal system increased tax revenue by $1.4 million in just three months.

You can do a similar multi-platform private cloud with IBM PureSystems. In either case the machines arrive ready for private cloud computing. Or else you can piece together x86 servers and components and do it yourself, which entails a lot more work, time, and risk.

Mainframe Declared Dead Again

Since the early 1990s IT pundits have been declaring the mainframe dead. With almost every IT advance (x86 virtualization, multi-core x86 processors, cloud computing) some IT analyst announces the end of the mainframe. Those declarations have slowed since IBM introduced the hybrid zEnterprise 196 in July 2010 and the mainframe experienced a series of impressive quarterly sales gains.

That’s what makes this latest mainframe obituary so surprising. A white paper from Micro Focus reports that “CIOs are increasingly questioning whether the mainframe will continue to be a strategic platform in the future.” Written by Standish Group and based on its CIO survey, the paper found that 70% of respondents said the mainframe plays a central, strategic role currently. However, none of the CIOs consider the mainframe a strategic platform in 5-10 years’ time.

None? Zero? Nada? That’s pretty astonishing. So, what are the CIOs’ complaints? The study isn’t exactly specific, but it seems to have to do with the cloud.

As Standish puts it: looming large on the CIO agenda is cloud. Cloud creates both challenge and opportunity for CIOs today. The opportunity lies in driving towards more flexible, cost-effective service provision for the business, enabling in-house IT resources to focus on much more strategic initiatives. At the same time, CIOs are managing a host of current technologies and applications: some applications duplicating others, some legacy applications for which there appears to be no easy modernization solution, and entrenched solutions and applications that provide no clear journey to cloud-based services.

Whoa, let’s parse that. As Standish sees it, CIOs will look to the cloud for flexibility and cost-effective service provisioning that frees IT to focus strategically. Based on that, you could just as easily build the case for the zEnterprise, starting with the entry-level z114 and the Unified Resource Manager.

But that’s not really the issue; IT modernization is.  The researchers note that the need to address legacy mainframe applications effectively is a critical success factor. Furthermore Standish observes: these applications are still used in the organization today [which] emphasizes their business importance, and often there is a high level of intellectual capital embedded within these systems.

OK, so the real complaint is around leveraging legacy applications as valuable software assets. Fortunately, CIOs can do this without undertaking a rip-and-replace of the mainframe. SOA is one place to start; it provides a way to extract business logic from mainframe apps and expose it as services. The mainframe does SOA very well. Independent Assessment, the publisher of BottomlineIT, has written a number of case studies on mainframe SOA. Check them out here and here.

Standish digs up a few other complaints about the mainframe, such as the shortage of mainframe skills and the high cost of mainframe computing. These are old complaints and much is being done to address them. With the z114 and the System z Solution Edition Programs IBM even is putting a dent in the cost-of-acquisition issue.

Then the paper offers this intriguing complaint: being forced into a decision to move from unsupported mainframe environments to continue operations and meet new performance levels. Huh? If you are seeking to meet a variety of new operational and performance levels while efficiently managing and supporting it all, the hybrid zEnterprise seems made to order with z/OS, z/VM, Linux on z, specialty engines, AIX on Power blades, and soon x86 on a blade. Standish seems oblivious to all the changes the mainframe has undergone since the introduction of the zEnterprise over a year ago.

This, however, is a Micro Focus paper, so Standish isn’t interested in looking at how mainframe shops can leverage what IBM has been building into the mainframe and zEnterprise over the last few years. Yet the zEnterprise is exactly what CIOs should be looking at to position themselves for cloud computing and private clouds, and to meet their three reported top objectives for 2014: 1) increasing enterprise growth, 2) improving operations, and 3) attracting and retaining new customers. Instead, Standish recommends rehosting and migrating applications, and how best to do that is, of course, with Micro Focus.

Not every organization or workload should have a mainframe. Many don’t. Similarly, there are situations that can be best dealt with by migrating mainframe applications to a different platform, but cloud computing probably is not one of those because the mainframe can play very well in the cloud. It would have been nice if Standish had focused on those workloads and situations that make sense to rehost and migrate while at least acknowledging the new hybrid mainframe world.

Please note: DancingDinosaur will be unavailable next week and not able to moderate comments until 9/17.

New IBM z114—a Midrange Mainframe

IBM introduced its newest mainframe in the zEnterprise family, the z114, a business class rather than enterprise class machine. With the z114, IBM can now deliver a more compelling total cost of acquisition (TCA) case, giving midrange enterprises another option as they consolidate, virtualize, and migrate their sprawling server farms. This will be particularly interesting to shops running HP Itanium or Oracle/Sun servers.

The z114 comes with a $75,000 entry price. At that price it can begin to compete with commodity high-end servers on a TCA basis, especially if it is bundled with discount programs like IBM’s System z Solution Editions and unpublicized offers from IBM Global Finance (IGF). There should be no doubt: IBM is willing to deal to win midrange workloads from other platforms.

First, the specs, speeds, and feeds: the z114 is available in two models, a single-drawer model, the M05, and a two-drawer model, the M10, which offers additional capacity for I/O and coupling expansion and/or more specialty engines. It comes with up to 10 configurable cores, which can be designated as general purpose processors or specialty engines (zIIP, zAAP, IFL, ICF), or used as spares. The M10 also allows two dedicated spares, a first for a midrange mainframe.

The z114 uses a superscalar design running at 3.8 GHz, an improved cache structure, a new out-of-order execution sequence, and over 100 new hardware instructions that deliver better per-thread performance, especially for database, WebSphere, and Linux workloads. The base z114 starts at 26 MIPS but can scale to over 3100 MIPS across five central processors, plus the additional horsepower provided by its specialty engines.

The z114 mainly will be a consolidation play. IBM calculates that workloads from as many as 300 competitive servers can be consolidated onto a single z114. IBM figures the machine can handle workloads from 40 Oracle server cores using just three processors running Linux, and compared to the Oracle servers IBM estimates the new z114 will cost 80% less. Similarly, IBM figures that a fully configured z114 running Linux on z can create and maintain a Linux virtual server for approximately $500 per year.
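
A quick sketch of what those consolidation claims look like as annual arithmetic. Only the 300-server consolidation figure and the roughly $500 per year per Linux virtual server come from IBM’s claims above; the all-in annual cost assumed for a distributed server is a hypothetical placeholder, so treat the output as illustrative rather than a benchmark.

```python
# Illustrative arithmetic around IBM's z114 consolidation claims.
# Only the 300-server figure and the ~$500/year-per-virtual-server figure
# come from the claims above; the distributed per-server cost is a
# hypothetical placeholder, not an IBM or HP number.

servers_consolidated = 300              # IBM's claimed consolidation target
z_cost_per_vm_per_year = 500            # IBM's figure, fully configured z114

x86_cost_per_server_per_year = 2_500    # hypothetical all-in annual cost

z_annual = servers_consolidated * z_cost_per_vm_per_year          # $150,000
x86_annual = servers_consolidated * x86_cost_per_server_per_year  # $750,000

print(f"300 Linux virtual servers on z114: ${z_annual:,} per year")
print(f"300 distributed servers:           ${x86_annual:,} per year")
print(f"saving with these assumptions:     {1 - z_annual / x86_annual:.0%}")
```

With that particular placeholder the gap happens to land at 80%, in the same neighborhood as IBM’s Oracle-server comparison; pick your own per-server number and the percentage moves accordingly.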

As a consolidation play, the zEnterprise System will get even more interesting later this year when x86 blades supporting Windows become available. Depending on the pricing, the z114 could become a Windows consolidation play too.

Today even midrange enterprises are multi-platform shops. For this, the z114 connects to the zBX, a blade expansion cabinet, where it can integrate and manage workloads running on POWER7-based blades as well as the IBM Smart Analytics Optimizer and WebSphere DataPower blades for integrating web-based workloads. In addition, IBM promises support for Microsoft Windows on select System x server blades soon.

To achieve a low TCA, IBM clearly is ready to make deals. For example, IBM has lowered software costs to deliver the same capacity for 5-18% less through a revised Advanced Workload License Charges (AWLC) pricing schedule. A new processor value unit (PVU) rating on IFLs can lower Linux costs by as much as 48%.

The best deal, however, usually comes through the System z Solution Edition Program, which BottomlineIT’s sister blog, DancingDinosaur, has covered here and here. It bundles System z hardware, software, middleware, and three years of maintenance into a deeply discounted package price. Initial Solution Editions for the z114 will be WebSphere, Linux, and probably SAP.

IGF also can lower costs, starting with a six-month payment deferral: you can acquire a z114 now but not begin paying for it until next year. The group also is offering all IBM middleware products, mainly WebSphere Application Server and Tivoli, interest free (0%) for twelve months. Finally, IGF can lower TCA through leasing, which could further reduce the cost of the z114 by up to 3.5% over three years.

By the time you’ve configured the z114 the way you want it and netted out the various discounts, even with a Solution Edition package, it will probably cost more than $75,000. On a one-for-one basis even the most expensive HP Itanium server beats that price. The z114 payback comes as soon as there are multiple servers in a consolidation play.

Next Up: Dynamic Data Warehousing

Enterprise data warehousing (EDW) has been around for well over a decade. IBM has long been promoting it across all its platforms. So have Oracle and HP and many others.

The traditional EDW, however, has been sidelined even at a time when data is exploding at a tremendous rate and new data types, from sensor data to smartphone and social media data to video, are becoming common. IBM recently projected a 44-fold increase in data and content, reaching 35 zettabytes by 2020. In short, the world of data has changed dramatically since organizations began building conventional data warehouses. Now the EDW should accommodate these new types of data and be flexible enough to handle rapidly changing forms of data.

Data warehousing as it is mainly practiced today is too complex, too difficult to deploy, requires too much tuning, and is too inefficient when it comes to bringing in analytics, which delays delivering the answers business managers need from the EDW, observed Phil Francisco, VP at Netezza, an IBM acquisition that makes data warehouse appliances. And without fast analytics to deliver business insights, well, what’s the point?

In addition, the typical EDW requires too many people to maintain and administer, which makes it too costly, Francisco continued. Restructuring the conventional EDW to accommodate new data types and new data formats—in short, a new enterprise data model—is a mammoth undertaking that companies wisely shy away from. But IBM is moving beyond basic EDW to something Francisco describes as an enterprise data hub, which entails an enterprise data store surrounded by myriad special purpose data marts and special purpose processors for various analytics and such.

IBM’s recommendation: evolve the traditional enterprise data warehouse into what it calls the enterprise data hub, a more flexible systems architecture. This will entail consolidating the infrastructure and reducing the data mart sprawl. It also will simplify analytics, mainly by deploying analytic appliances like IBM’s Netezza. Finally, organizations will need data governance and lifecycle management, probably through automated policy-based controls. The result should be better information faster and delivered in a more flexible and cost-effective way.

Ultimately, IBM wants to see organizations build out this enterprise data hub with a variety of BI and analytic engines connected to it for analyzing streamed data and vast amounts of unstructured data of the type Hadoop has shown itself particularly good at handling. BottomlineIT wrote about Hadoop in the enterprise back in February here.

The payback from all of this, according to IBM, will be increased enterprise agility and faster deployment of analytics, which should result in increased business performance. The consolidated enterprise data warehouse also should lower the TCO for the EDW and speed time to business value. All desirable things, no doubt, but for many organizations this will require a gradual process and a significant investment in new tools and technologies, from specialized appliances to analytics.

A case in point is Florida Hospital, Orlando, which deployed a z10 mainframe with DB2 10, which provides enhanced temporal data capabilities, with the primary goal of converting its 15 years of clinical patient data into an analytical data warehouse for use in leading-edge medical and genetics research. The hospital’s plan calls for getting the data up and running on DB2 10 this year and attaching the Smart Analytics Optimizer as an appliance in Q1 2012. Then it can begin cranking up the research analytics. Top management has bought into the plan for now, but a lot can change in the next year, the earliest the first fruits of the hospital’s analytical medical data exploration are likely to appear.

Oracle has its own EDW success stories here. Hotwire, a leading discount travel site, for example, works with major travel providers to help them fill seats, hotel rooms, and rental cars that would otherwise go unsold. It deployed Oracle’s Exadata Database Machine to improve data warehouse performance and to scale for growing business needs.

IBM does not envision the enterprise data hub as a platform-specific effort. Although EDW runs on IBM’s mainframe, much of the activity is steered to the company’s midsize UNIX/Linux Power Systems server platform. Oracle and HP offer x86-based EDW platforms, and HP is actively partnering with Microsoft on its EDW offering.

In an IBM study, 50% of business managers complained that they don’t have the information they need to do their jobs, and 60% of CEOs admitted they need to do a better job of capturing and understanding information rapidly in order to make swift business decisions. That should be a signal to revamp your EDW now.
