
Change-proof Your Organization

Many organizations are being whiplashed by IT infrastructure change: costly, disruptive, never-ending changes that hinder both IT and the business.  You know the drivers: demand for cloud computing, mobile, social, big data, real-time analytics, and collaboration. Add soaring transaction volumes, escalating amounts of data, 24x7x365 processing, new types of data, proliferating forms of storage, and incessant compliance mandates, all of which keep driving change. And there is no letup in sight.

IBM started to articulate this in a blog post, Infrastructure Matters. IBM was focusing on cloud and data, but the issues go even further. It is really about change-proofing, not just IT but the business itself.

All of these trends put great pressure on the organization, which forces IT to repeatedly tweak the infrastructure or otherwise revamp systems. This is costly and disruptive not just to IT but to the organization.

In short, you need to change-proof your IT infrastructure and your organization.  And you have to do it economically and in a way you can efficiently sustain over time. The trick is to leverage some of the very same  technology trends creating change to design an IT infrastructure that can smoothly accommodate changes both known and unknown. Many of these we have discussed in BottomlineIT previously:

  • Cloud computing
  • Virtualization
  • Software defined everything
  • Open standards
  • Open APIs
  • Hybrid computing
  • Embedded intelligence

These technologies will allow you to change your infrastructure at will, changing your systems in any variety of ways, often with just a few clicks or tweaks to code.  In the process, you can eliminate vendor lock-in and obsolete, rigid hardware and software that has distorted your IT budget, constrained your options, and increased your risks.

Let’s start by looking at just the first three listed above. As noted above, all of these have been discussed in BottomlineIT and you can be sure they will come up again.

You probably are using aspects of cloud computing to one extent or another. There are numerous benefits to cloud computing but for the purposes of infrastructure change-proofing only three matter:  1) the ability to access IT resources on demand, 2) the ability to change and remove those resources as needed, and 3) flexible pricing models that eliminate the upfront capital investment in favor of paying for resources as you use them.
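The pay-as-you-go point is easy to illustrate with a back-of-the-envelope comparison. The dollar figures below are invented for illustration, not vendor pricing:

```python
# Hypothetical break-even sketch: upfront capital purchase vs. pay-as-you-go
# cloud capacity. All prices are illustrative assumptions, not vendor figures.

def total_cost_owned(capex, monthly_opex, months):
    """Cost of buying hardware upfront and operating it."""
    return capex + monthly_opex * months

def total_cost_cloud(hourly_rate, hours_per_month, months):
    """Cost of renting equivalent capacity on demand."""
    return hourly_rate * hours_per_month * months

# Assumed figures: a $120,000 server vs. a $2.50/hour cloud instance over 3 years.
owned = total_cost_owned(capex=120_000, monthly_opex=1_000, months=36)
cloud_full = total_cost_cloud(2.50, 720, 36)     # running 24x7
cloud_partial = total_cost_cloud(2.50, 200, 36)  # running ~28% of the time

print(owned, cloud_full, cloud_partial)  # 156000 64800.0 18000.0
```

The third benefit in the list, flexible pricing, shows up in the gap between the last two numbers: with on-demand resources you stop paying when demand drops, which no upfront capital purchase can match.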

Yes, there are drawbacks to cloud computing. Security remains a concern although increasingly it is becoming just another manageable risk. Service delivery reliability remains a concern although this too is a manageable risk as organizations learn to work with multiple service providers and arrange for multiple links and access points to those providers.

Virtualization remains the foundational technology behind the cloud. Virtualization makes it possible to deploy multiple images of systems and applications quickly and easily as needed, often in response to widely varying levels of service demand.

Software defined everything also makes extensive use of virtualization. It inserts a virtualization layer between the applications and the underlying infrastructure hardware.  Through this layer the organization gains programmatic control of the software defined components. Most frequently we hear about software defined networks that you can control, manage, and reconfigure through software running on a console regardless of which networking equipment is in place.  Software defined storage gives you similar control over storage, again generally independent of the underlying storage array or device.
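What programmatic control looks like in practice is simply configuration as data pushed through an API. The sketch below builds a flow rule for a software-defined network controller; the field names and endpoint are hypothetical stand-ins, since real SDN controllers each define their own schema:

```python
import json

# Hypothetical sketch: building a flow-rule payload for an SDN controller.
# The JSON schema and controller URL are assumptions for illustration only;
# real controllers (e.g. OpenFlow-based ones) define their own formats.

def make_flow_rule(switch_id, src_vlan, dst_port, priority=100):
    """Return a payload redirecting a VLAN's traffic to a given port."""
    return {
        "switch": switch_id,
        "match": {"vlan": src_vlan},
        "action": {"output_port": dst_port},
        "priority": priority,
    }

rule = make_flow_rule("sw-edge-01", src_vlan=42, dst_port=7)
payload = json.dumps(rule)
# The payload would then be POSTed to the controller's REST API, e.g.:
#   requests.post("https://sdn-controller/flows", data=payload)  # hypothetical
print(payload)
```

The point is that reconfiguring the network becomes an edit to data like this rather than a visit to each box, regardless of which vendor's equipment sits underneath.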

All these technologies exist today at different stages of maturity. Start planning how to use them to take control of IT infrastructure change. The world keeps changing and the IT infrastructures of many enterprises are groaning under the pressure. Change-proofing your IT infrastructure is your best chance of keeping up.



BMC Mainframe Survey Bolsters z-Hybrid Computing

For the seventh year, BMC conducted a survey of mainframe shops worldwide. Clearly the mainframe not only isn’t dead but is growing in the shops where it is deployed.  Find a copy of the study here and a video explaining it here.

Distributed systems shops may be surprised by the results but not those familiar with the mainframe. Key results:

  • 90% of respondents consider the mainframe to be a long-term solution, and 50% expect it will attract new workloads.
  • Keeping IT costs down remains the top priority—not exactly shocking—as 69% report cost as a major focus, up from 60% in 2011.
  • 59% expect MIPS capacity to grow as they modernize and add applications to address expanding business needs.
  • More than 55% reported a need to integrate the mainframe into enterprise IT systems comprising multiple mainframe and distributed platforms.

The last point suggests IBM is on the right track with hybrid computing. Hybrid computing is IBM’s term for extremely tightly integrated multi-platform computing managed from a single console (on the mainframe) as a single virtualized system. It promises significant operational efficiency over deploying and managing multiple platforms separately.

IBM also is on the right track in terms of keeping costs down.  One mainframe cost-cutting trick is to maximize the use of specialty engines, reducing consumption of costly general-purpose (GP) MIPS.  Specialty engines are processors optimized for specific workloads, such as Java or Linux or databases. The specialty engine advantage continues with the newest zEC12, which incorporates the same 20% price/performance boost, essentially more MIPS bang for the buck.

Two-thirds of the respondents were using at least one specialty engine. Of all respondents, 16% were using five or more engines, a few using dozens.  Not only do specialty engines deliver cheaper MIPS but they often are not considered in calculating software licensing charges, which lowers the cost even more.
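The economics behind that are straightforward to sketch. The per-MIPS dollar figures below are invented purely to show the shape of the calculation, not actual IBM pricing:

```python
# Back-of-the-envelope sketch of specialty-engine economics. The per-MIPS
# dollar figures are illustrative assumptions, not actual IBM pricing.

def workload_cost(gp_mips, specialty_mips, gp_cost, specialty_cost):
    """Hardware cost of a workload split across GP and specialty engines."""
    return gp_mips * gp_cost + specialty_mips * specialty_cost

# Assume a 1,000-MIPS Java workload, with specialty MIPS priced at a
# fraction of GP MIPS (and often excluded from software license charges).
all_gp = workload_cost(1_000, 0, gp_cost=2_000, specialty_cost=500)
offloaded = workload_cost(300, 700, gp_cost=2_000, specialty_cost=500)

print(all_gp, offloaded)  # 2000000 950000
```

Even before the software-licensing exclusion is counted, shifting eligible work off GP engines cuts the hardware bill by more than half under these assumed prices.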

About the only change noticeable in responses year-to-year is the jump in the respondent ranking of IT priorities. This year Business/IT alignment jumped from 7th to 4th. Priorities 1, 2, and 3 (Cost Reduction, Disaster Recovery, and Application Modernization respectively) remained the same.  Priorities 5 and 6 (Efficient Use of MIPS and Reduced Impact of Outages respectively) fell from a tie for 4th last year.

The greater emphasis on Business/IT alignment isn’t exactly new. Industry gurus have been harping on it for years.  Greater alignment between business and IT also suggests a strong need for hybrid computing, where varied business workloads can be mixed yet still be treated as a single system from the standpoint of efficiency management and operations. It also suggests IT needs to pay attention to business services management.

Actually, there was another surprise. Despite the mainframe’s reputation for rock solid availability and reliability, the survey noted that 39% of respondents reported unplanned outages. The primary causes for the outages were hardware failure (cited by 31% of respondents), system software failure (30%), in-house app failure (28%), and failed change process (22%). Of the respondents reporting outages, only 10% noted that the outage had significant impact. This was a new survey question this year so there is no comparison to previous years.

Respondents (59%) expect MIPS usage to continue to grow. Of that growth, 31% attributed it to increases in both legacy and new apps, 9% to new apps alone, and 19% to legacy apps alone.

In terms of modernizing apps, 46% of respondents planned to extend legacy code through SOA and web services while 43% wanted to increase the flexibility and agility of core apps.  Thirty-four percent of respondents hoped to reduce legacy app support costs through modernization.

Maybe the most interesting data point came where 60% of the respondents agreed that the mainframe needed to be a good IT citizen supporting varied workloads across the enterprise. That’s really what zEnterprise hybrid computing is about.



Meet the Newest Mainframe—zEnterprise EC12

Last month IBM launched the zEnterprise EC12 (zEC12). As you would expect from the next release of the top-of-the-line mainframe, the zEC12 delivers faster speed and better price/performance. With a 5.5 GHz core processor, up from 5.2 GHz in its predecessor (z196), and an increase in the number of cores per chip (from 4 to 6), IBM calculates it delivers 50% more total capacity in the same footprint. The zEC12 won't come cheap, but on a cost per MIPS basis it's probably the best value around.
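A quick sanity check of those numbers: clock speed and core count alone account for the gain, before any microarchitectural improvements are credited.

```python
# Rough scaling check of the quoted zEC12 figures: frequency and core-count
# ratios only, ignoring cache and microarchitecture improvements.

clock_ratio = 5.5 / 5.2   # zEC12 vs. z196 core frequency
core_ratio = 6 / 4        # cores per chip, zEC12 vs. z196

per_chip_scaling = clock_ratio * core_ratio
print(round(clock_ratio, 3), round(per_chip_scaling, 2))  # 1.058 1.59
```

Per chip, the naive scaling comes to roughly 1.59x; IBM's more conservative 50% figure is for total capacity across the whole footprint, which depends on configuration, so the quoted claim is plausible on arithmetic alone.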

More than just performance, it adds two major new capabilities, IBM zAware and Flash Express, and a slew of other hardware and software optimizations. The two new features, IBM zAware and Flash Express, both promise to be useful, but neither is a game changer. zAware is an analytics capability embedded in firmware. It is intended to monitor the entire zEnterprise system for the purpose of identifying problems before they impact operations.

Flash Express consists of a pair of memory cards installed in the zEC12, in effect a new tier of memory. Flash Express is designed to streamline memory paging when transitioning between workloads. It will moderate workload spikes and eliminate the need to page to disk, which should boost performance.

This machine is intended, initially, for shops with the most demanding workloads and no margin for error. The zEC12 also continues IBM’s hybrid computing thrust by including the zBX and new capabilities from System Director to be delivered through Unified Resource Manager APIs for better management of virtualized servers running on zBX blades.

This is a stunningly powerful machine, especially coming just 25 months after the z196 introduction. The zEC12 is intended for optimized corporate data serving. Its 101 configurable cores deliver a performance boost for all workloads. The zEC12 also comes with the usual array of assist processors, which are just configurable cores with the assist personality loaded on. Since they are zEC12 cores, they bring a 20% MIPS price/performance boost.

The directly competitive alternatives from the other (non-x86) server vendors are pretty slow by comparison. Oracle offers its top SPARC-based T4 server that features a 3.0 GHz processor. HP’s Integrity Superdome comes with the Itanium processor and tops out at 1.86 GHz. No performance rivals here, at least until each vendor refreshes its line.

For performance, IBM estimates up to a 45% improvement in Java workloads, up to a 27% improvement in CPU-intensive integer and floating point C/C++ applications, up to 30% improvement in throughput for DB2 for z/OS operational analytics, and more than 30% improvement in throughput for SAP workloads. IBM has, in effect, optimized the zEC12 from top to bottom of the stack. DB2 applications are certain to benefit as will WebSphere and SAP.

IBM characterizes zEC12 pricing as follows:

  • Hardware—20% MIPS price/performance improvement for standard engines and specialty engines; Flash Express runs $125,000 per pair of cards (3.2 TB)
  • Software—update pricing will provide 2%-7% MLC price/performance improvement for flat-capacity upgrades from z196, and IFLs will maintain their PVU rating of 120 for software yet deliver 20% more MIPS
  • Maintenance—no less than 2% price performance improvement for standard MIPS and 20% on IFL MIPS
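The Flash Express figure in the hardware bullet reduces to a simple cost per terabyte:

```python
# Unit-cost arithmetic on the quoted Flash Express pricing.
pair_price = 125_000   # dollars per pair of cards (IBM's quoted figure)
capacity_tb = 3.2      # terabytes per pair

cost_per_tb = pair_price / capacity_tb
print(round(cost_per_tb, 2))  # 39062.5 dollars per TB
```

At roughly $39,000 per TB this is premium-priced memory, not commodity flash storage, which fits its positioning as a paging tier rather than general-purpose capacity.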

IBM is signaling price aggressiveness and flexibility to attract new shops to the mainframe and stimulate new workloads. The deeply discounted Solution Edition program will include the new machine. IBM also is offering financing with deferred payments through the end of the year in a coordinated effort to move these machines now.

As impressive as the zEC12's specifications and price/performance are, BottomlineIT is most impressed by the speed at which IBM delivered the machine. It broke with its historic 3-year release cycle to deliver this potent hybrid machine just 25 months after the z196 first introduced hybrid computing.



A Choice of Hybrid Systems

No enterprise data center today runs just one platform. They have Intel/Windows or some flavor(s) of UNIX/Linux as their main production systems, but they generally run a mix of platforms and operating systems, often including Apple, VMware, and mainframes as well.

Organizations end up with this mix of platforms for perfectly understandable reasons, such as acquisitions or to meet special software requirements, but it results in a certain amount of inefficiency and added cost. For example, you need to hire and retain people with multiple skill sets.

Recognizing that situation—even contributing to it with its array of platforms and operating systems—IBM introduced the concept of hybrid computing in 2010 with the zEnterprise-zBX. Through hybrid computing, an organization could run workloads concurrently on multiple hardware platforms and operating systems while managing them as a single logical system. The benefit: simplified operation and management efficiency.

IBM currently offers two hybrid platforms: the zEnterprise-zBX combination and IBM PureSystems appliances starting with PureFlex and PureApplication. Both hybrid platforms are tightly integrated, highly optimized systems that accept a variety of blades. Although there is platform overlap the two hybrid environments do not support exactly the same operating environments.

For example, PureFlex, an IaaS offering, and PureApplication, a PaaS offering, bring IBM System i to the hybrid party along with Power and System x, which the zBX supports too, but skip the mainframe's z/OS and z/VM operating environments. You manage the PureSystems hybrid environment with the Flex System Manager (FSM). The zEnterprise-zBX has its own hybrid management tool, the Unified Resource Manager, which looks very similar to FSM.

Despite the similarities, bringing FSM and the Unified Resource Manager together is not going to happen in any foreseeable future. That is the definitive word from Jeff Frey, IBM Fellow and CTO for System z: “Flex Manager and the Unified Resource Manager will not come together,” he told BottomlineIT.

That does not mean the zEnterprise-zBX and PureSystems won’t play nicely together, but they will do so higher up in the IT stack. “We will federate the management at a higher level,” he said. Today, that pretty much means organizations using both platforms, zEnterprise and PureSystems, will have to rely on tools like Tivoli to tie the pieces together and manage them.  At the lower levels in the stack where the hardware lives each platform will still require its own management tooling.

In effect, Tivoli will provide the federation layer and enable higher level, logical management across both systems. When you need to manage some physical aspect of the underlying hardware you still will need platform-specific tools.

IBM has two potential rivals in the hybrid computing space. Oracle/Sun offers a variety of Sun servers that run either Solaris or Windows/Linux x86 operating systems, but it has offered no evidence of any interest in tightly integrating and optimizing them as IBM has. Similarly, HP could couple HP-UX and Windows/Linux on both its Intel x86 and Itanium servers, but again it has given no evidence of intending to do so.  Instead, both vendors direct hybrid computing discussions to the cloud, where the different systems can play together at an even higher level of abstraction. (IBM also offers a multi-platform cloud environment.)

Meanwhile, IBM is moving forward with the next advances to its hybrid environments. For example, expect some IBM improvements incorporated into PureSystems hardware to make it into the zBX. Similarly, IBM is planning to push zBX scalability beyond the 112 blades the box supports today as well as adding clustering capabilities. The blade count expansion combined with the technology enhancements brought over from PureSystems, Frey hopes, should make clear IBM’s long term commitment to both its hybrid computing platforms.

At the same time, IBM is enhancing PureSystems for the purpose of scaling it beyond its current four appliance limit. This will give it something more like the ensemble approach used with the System z. A System z ensemble is a collection of two to eight mainframes where at least one has a zBX attached. The resources of a zEnterprise ensemble are managed and virtualized as a single pool of resources integrating system and workload management across the multi-system, multi-tier, multi-architecture environment.

With two IBM hybrid computing platforms the hybrid approach is here for real at IBM. The challenge becomes choosing the one best for your shop. Or you can seek to satisfy your hybrid computing needs through the cloud, where you will find IBM along with Oracle, HP, and a slew of others.

