Archive for September, 2012

BMC Mainframe Survey Bolsters z-Hybrid Computing

For the seventh year, BMC conducted a survey of mainframe shops worldwide. Clearly the mainframe not only isn’t dead but is growing in the shops where it is deployed.  Find a copy of the study here and a video explaining it here.

Distributed systems shops may be surprised by the results but not those familiar with the mainframe. Key results:

  • 90% of respondents consider the mainframe to be a long-term solution, and 50% expect it will attract new workloads.
  • Keeping IT costs down remains the top priority—not exactly shocking—as 69% report cost as a major focus, up from 60% in 2011.
  • 59% expect MIPS capacity to grow as they modernize and add applications to address expanding business needs.
  • More than 55% reported a need to integrate the mainframe into enterprise IT systems composed of multiple mainframe and distributed platforms.

The last point suggests IBM is on the right track with hybrid computing. Hybrid computing is IBM’s term for extremely tightly integrated multi-platform computing managed from a single console (on the mainframe) as a single virtualized system. It promises significant operational efficiency over deploying and managing multiple platforms separately.

IBM also is on the right track in terms of keeping costs down. One mainframe trick is to maximize the use of specialty engines in an effort to reduce consumption of costly general-purpose (GP) MIPS. Specialty engines are processors optimized for specific workloads, such as Java, Linux, or databases. The specialty engine advantage continues with the newest zEC12, which incorporates the same 20% price/performance boost, essentially more MIPS bang for the buck.

Two-thirds of the respondents were using at least one specialty engine, and 16% of all respondents were using five or more, with a few using dozens. Not only do specialty engines deliver cheaper MIPS, but they often are not counted when calculating software licensing charges, which lowers costs even more.

About the only noticeable year-to-year change in the responses is a jump in the ranking of IT priorities: this year Business/IT Alignment rose from 7th to 4th. Priorities 1, 2, and 3 (Cost Reduction, Disaster Recovery, and Application Modernization, respectively) remained the same. Priorities 5 and 6 (Efficient Use of MIPS and Reduced Impact of Outages, respectively) fell from a tie for 4th last year.

The greater emphasis on Business/IT alignment isn’t exactly new. Industry gurus have been harping on it for years.  Greater alignment between business and IT also suggests a strong need for hybrid computing, where varied business workloads can be mixed yet still be treated as a single system from the standpoint of efficiency management and operations. It also suggests IT needs to pay attention to business services management.

Actually, there was another surprise. Despite the mainframe’s reputation for rock-solid availability and reliability, the survey noted that 39% of respondents reported unplanned outages. The primary causes were hardware failure (cited by 31% of respondents), system software failure (30%), in-house app failure (28%), and failed change process (22%). Of the respondents reporting outages, only 10% noted that the outage had significant impact. This was a new survey question this year, so there is no comparison to previous years.

Respondents (59%) expect MIPS usage to continue to grow. Of those, 31% attribute the growth to both legacy and new apps, 9% to new apps alone, and 19% to legacy apps alone; together the three groups account for the full 59%.

In terms of modernizing apps, 46% of respondents planned to extend legacy code through SOA and web services while 43% wanted to increase the flexibility and agility of core apps.  Thirty-four percent of respondents hoped to reduce legacy app support costs through modernization.

Maybe the most interesting data point came where 60% of the respondents agreed that the mainframe needed to be a good IT citizen supporting varied workloads across the enterprise. That’s really what zEnterprise hybrid computing is about.


Coping with Increased Data Center Complexity

Last week Symantec, a leading data center software tools provider, released its annual state of the data center survey results. You can view the full report here. The overriding issue, it turns out, is the increasing complexity of the data center. As CIO you’re probably aware of this, but there seems to be little you can do except request more budget and more resources. Or is there?

Although the study cites a number of factors driving data center complexity, survey respondents appear to focus on one primary response: an increased need for governance. This is not something a CIO would typically initiate. Also suggested is taking steps to intelligently manage organizational resources in an effort to rein in operational costs and control information growth.

More specifically, Symantec suggests that organizations implement controls such as standardization or establish an information governance strategy to keep information from becoming a liability. Nobody doubts that the seemingly unrestrained proliferation of data, and of the systems that generate and use it, is driving data center complexity.  But don’t blame IT alone; it is the business that is demanding everything from mobility to analytics.

The leading complexity driver, cited by 65% of the respondents, turns out to be the increasing number of business-critical applications. Other key drivers of complexity include growth in the volume of data, mobile computing, server virtualization, and cloud computing.

Organizations may benefit from mobile computing and the efficiency and agility that result from virtualization and cloud computing, but these capabilities don’t come without a cost. In fact, the most commonly mentioned effect of complexity was higher costs, cited by nearly half of the organizations surveyed. Without budgets increasing commensurately, organizations gain valuable capabilities in one area only by constraining activity in other areas.

Other impacts cited by respondents include: reduced agility (cited by 39% of respondents); longer lead times for storage migration (39%) and provisioning storage (38%); longer time to find information (37%); security breaches (35%); lost or misplaced data (35%); increased downtime (35%); and compliance incidents (34%).

Increased downtime should raise a few eyebrows. In a modern enterprise, when systems go down, work and productivity essentially grind to a halt. Some workers can improvise for a while, but they can only go so far. The survey found the typical organization experienced an average of 16 data center outages in the past 12 months, at a total cost of $5.1 million. The most common cause was system failure, followed by human error and natural disasters.
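Averaging those survey figures gives a sense of the per-incident stakes (a back-of-envelope sketch that assumes cost is spread evenly across outages, which real incidents certainly are not):

```python
# Back-of-envelope from the Symantec survey figures quoted above.
outages_per_year = 16       # average outages in the past 12 months
total_cost = 5_100_000      # reported total annual cost, in dollars

# Assuming the cost is spread evenly across outages (a simplification;
# real outages vary widely in severity):
avg_cost_per_outage = total_cost / outages_per_year
print(f"Average cost per outage: ${avg_cost_per_outage:,.0f}")
# -> Average cost per outage: $318,750
```

Even as a crude average, roughly $319,000 per incident makes the case for investing in outage prevention.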

According to the survey, organizations are implementing several measures to reduce complexity, including training, standardization, centralization, virtualization, and increased budgets. The days of doing more with less should be over for now as far as the data center is concerned: 63% consider increasing their budget to be somewhat or extremely important in dealing with data center complexity.

But the biggest initiative organizations are undertaking is to implement a comprehensive information governance strategy, defined as a formal program that allows organizations to proactively classify, retain, and discover information in order to reduce information risk, reduce the cost of managing information, establish retention policies, and streamline the eDiscovery process. Fully 90% of organizations are either discussing information governance or have implemented trials or actual programs.

While there are technology tools to assist with data center governance, this is not an issue that responds to an IT solution. This kind of governance mostly requires meetings among the business and IT to hash out the ownership and responsibility for various data, establish policies and procedures, and then lay out monitoring and enforcement. None of this is rocket science, but it does take time and resources.

Symantec goes on to make the following recommendations:

  • Establish C-level ownership of information governance.
  • Get visibility beyond IT platforms down to the actual business services.
  • Understand what IT assets you have, how they are being consumed, and by whom.
  • Reduce the number of backup applications to meet recovery SLAs.
  • Deploy deduplication everywhere to help constrain the information explosion.
  • Use appliances to simplify server and storage operations across physical and virtual machines.

You can also rationalize systems by eliminating redundant or unused applications, consolidating the systems and the vendors who provide them to a small handful, and standardizing on a few platforms and operating systems. By that measure, strategies like BYOD become a prescription for complexity.

The world in general is becoming more complex, and this is especially apparent in the data center due to increasing demands by the business for various IT services and the need to manage ever-growing amounts of information. Unless you take steps to rein it in, it will only get worse.


Meet the Newest Mainframe—zEnterprise EC12

Last month IBM launched the zEnterprise EC12 (zEC12). As you would expect from the next release of the top-of-the-line mainframe, the zEC12 delivers faster speed and better price/performance. With a 5.5 GHz core processor, up from 5.2 GHz in its predecessor (z196), and an increase in the number of cores per chip (from 4 to 6), IBM calculates it delivers 50% more total capacity in the same footprint. The zEC12 won’t come cheap, but on a cost-per-MIPS basis it’s probably the best value around.
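As a rough sanity check on that capacity claim, naive scaling from the clock speeds and core counts quoted above lands in the same ballpark (this back-of-envelope ignores microarchitecture, cache, and I/O improvements, so it is a sketch, not IBM’s methodology):

```python
# Crude per-chip throughput scaling from the figures in the text.
z196_clock, zec12_clock = 5.2, 5.5    # GHz
z196_cores, zec12_cores = 4, 6        # cores per chip

# Naive scaling: throughput ~ cores x clock (ignores IPC gains, cache, etc.)
scaling = (zec12_cores / z196_cores) * (zec12_clock / z196_clock)
print(f"Naive per-chip capacity ratio: {scaling:.2f}x")
# -> about 1.59x, in the same ballpark as IBM's 50%-more-capacity claim
```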

More than just performance, it adds two major new capabilities, IBM zAware and Flash Express, and a slew of other hardware and software optimizations. The two new features, IBM zAware and Flash Express, both promise to be useful, but neither is a game changer. zAware is an analytics capability embedded in firmware. It is intended to monitor the entire zEnterprise system for the purpose of identifying problems before they impact operations.

Flash Express consists of a pair of memory cards installed in the zEC12, amounting to a new tier of memory. It is designed to streamline memory paging when transitioning between workloads, moderating workload spikes and eliminating the need to page to disk, which should boost performance.

This machine is intended, initially, for shops with the most demanding workloads and no margin for error. The zEC12 also continues IBM’s hybrid computing thrust by including the zBX and new capabilities from System Director to be delivered through Unified Resource Manager APIs for better management of virtualized servers running on zBX blades.

This is a stunningly powerful machine, especially coming just 25 months after the z196 introduction. The zEC12 is intended for optimized corporate data serving. Its 101 configurable cores deliver a performance boost for all workloads. The zEC12 also comes with the usual array of assist processors, which are just configurable cores with the assist personality loaded on. Since they are zEC12 cores, they bring a 20% MIPS price/performance boost.

The directly competitive alternatives from the other (non-x86) server vendors are pretty slow by comparison. Oracle offers its top SPARC-based T4 server that features a 3.0 GHz processor. HP’s Integrity Superdome comes with the Itanium processor and tops out at 1.86 GHz. No performance rivals here, at least until each vendor refreshes its line.

For performance, IBM estimates up to a 45% improvement in Java workloads, up to a 27% improvement in CPU-intensive integer and floating point C/C++ applications, up to 30% improvement in throughput for DB2 for z/OS operational analytics, and more than 30% improvement in throughput for SAP workloads. IBM has, in effect, optimized the zEC12 from top to bottom of the stack. DB2 applications are certain to benefit as will WebSphere and SAP.

IBM characterizes zEC12 pricing as follows:

  • Hardware—20% MIPS price/performance improvement for standard engines and specialty engines; Flash Express runs $125,000 per pair of cards (3.2 TB)
  • Software—updated pricing will provide a 2%-7% MLC price/performance improvement for flat-capacity upgrades from the z196, and IFLs will maintain their PVU rating of 120 for software yet deliver 20% more MIPS
  • Maintenance—no less than a 2% price/performance improvement for standard MIPS and 20% on IFL MIPS
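From the list price and capacity IBM quotes for Flash Express, the cost per gigabyte works out to roughly $38-$39, depending on how you count a terabyte (a back-of-envelope sketch based solely on the figures above, not IBM pricing guidance):

```python
# Flash Express cost per GB from the figures above.
pair_price = 125_000      # dollars per pair of cards
pair_capacity_tb = 3.2    # TB per pair

cost_per_gb = pair_price / (pair_capacity_tb * 1024)  # binary TB (1024 GB)
print(f"Cost per GB: ${cost_per_gb:.2f}")
# roughly $38/GB with binary TB, about $39/GB with decimal TB (1000 GB)
```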

IBM is signaling price aggressiveness and flexibility to attract new shops to the mainframe and stimulate new workloads. The deeply discounted Solution Edition program will include the new machine. IBM also is offering financing with deferred payments through the end of the year in a coordinated effort to move these machines now.

As impressive as the zEC12 specifications and price/performance are, BottomlineIT is most impressed by the speed at which IBM delivered the machine. It broke with its historic 3-year release cycle to deliver this potent hybrid machine just 25 months after the z196 first introduced hybrid computing.
