Posts Tagged Power Systems

Big Data and Analytics as Game Changing Technology

If you ever doubted that big data was going to become important, there should be no doubt anymore. Recent headlines about the government capturing and analyzing massive amounts of daily phone call data should convince you.

That this report was quickly followed by others describing the government tapping big online data companies such as Google and Yahoo for even more data should alert you to three things:

1—There is a massive amount of data out there that can be collected and analyzed.

2—Companies are amassing incredible volumes of data in the normal course of serving people who readily and knowingly give their data to these organizations. (This blogger is one of those tens of millions.)

3—The tools and capabilities are mature enough for someone to sort through that data and connect the dots to deliver meaningful insights.

Particularly with regard to the last point, this blogger thought the industry was still five years away from generating meaningful results from that amount of data arriving at that velocity. Sure, marketers have been sorting and correlating large amounts of data for years, but it was mostly structured data and nowhere near this volume. BTW, your blogger has been writing about big data for some time.

If the news reports weren’t enough, it became clear at IBM Edge 2013, wrapping up in Las Vegas this week, that big data analytics is happening now and familiar companies are succeeding at it. It also is clear that there is sufficient commercial off-the-shelf computing power from companies like IBM and analytics tools from a growing number of vendors to sort through massive amounts of data and make sense of it fast.

An interesting point came up in one of the many discussions at Edge 2013 touching on big data. Every person’s data footprint is as unique as a fingerprint or other biometrics. We all visit different websites and interact with social media and use our credit and debit cards in highly individual ways. Again, marketers have sensed this at some level for years, but they haven’t yet really honed it down to the actual individual on a mass scale, although there is no technical reason one couldn’t. You now can, in effect, market to a demographic of one.
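To make the “demographic of one” idea concrete, here is a deliberately simple, hypothetical sketch of how a marketer might reduce one person’s behavioral data to a per-individual profile and signature. It is not any vendor’s actual product; the site names and spend categories are invented for illustration.

```python
from collections import Counter
import hashlib

def data_footprint(visited_domains, card_transactions):
    """Summarize one person's online and payment behavior (hypothetical data).

    visited_domains   -- list of domains the person visited
    card_transactions -- list of (merchant_category, amount) tuples
    """
    profile = {
        "site_mix": Counter(visited_domains),   # which sites, how often
        "spend_by_category": Counter(),         # where the money goes
    }
    for category, amount in card_transactions:
        profile["spend_by_category"][category] += amount

    # A stable digest of the profile: two people rarely share one, which is
    # the sense in which a data footprint resembles a fingerprint.
    digest = hashlib.sha256(
        repr(sorted(profile["site_mix"].items())
             + sorted(profile["spend_by_category"].items())).encode()
    ).hexdigest()
    return profile, digest

# Two individuals with overlapping but not identical behavior
_, alice = data_footprint(["news.example", "shoes.example", "news.example"],
                          [("groceries", 82.50), ("travel", 410.00)])
_, bob   = data_footprint(["news.example", "games.example"],
                          [("groceries", 95.00), ("electronics", 129.99)])
print(alice != bob)   # True: distinct footprints, hence a "demographic of one"
```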

A related conference is coming up Oct. 21-25 in Orlando, FL, called Enterprise Systems 2013. It will combine the System z and Power Systems Technical Universities along with a new executive-focused Enterprise Systems event. It will include new announcements, peeks into trends and directions, over 500 expert technical sessions across 10 tracks, and a comprehensive solution center. This blogger has already put it on his calendar.

There was much more interesting information at Edge 2013, such as using data analytics and cognitive computing to protect IT systems.  Perimeter defense, anti-virus, and ID management are no longer sufficient. Stay tuned.


IBM Bolsters Its Midrange Power Servers

IBM’s Systems and Technology Group (STG) introduced a slew of new products and enhancements this week, both hardware and software, for its midrange server lineup, the Power Systems products. The Power announcements covered new capabilities as well as new machines. And all the announcements in one way or another addressed IBM’s current big themes: Cloud, Analytics, and Security. The net-net of all these announcements: more bang for the buck in terms of performance, flexibility, and efficiency.

Of the new Power announcements, the newest processor, Power7+, certainly was the star. Other capabilities, such as elastic capacity on demand and dynamic Power system pools, may prove more important in the long run. Another new announcement, the EXP30 Ultra SSD I/O Drawer, may turn out quite useful as organizations appreciate the possibilities of SSD and ramp up usage.

Power7+, with 2 billion transistors, promises to deliver 40% more performance, especially for Java workloads, compared to Power7. Combined with other enhancements Power announced, it looks particularly good for data and even real-time analytics workloads. The new processor boasts 4.4 GHz speeds, a 10MB L3 cache per core (8 cores = 80 MB), and a random number generator along with enhanced single precision floating point performance and an enhanced GX system bus. IBM invested the additional transistors primarily in the cache. All of this will aid performance and efficiency.

The enhanced chip also brings an active memory expansion accelerator and an on-chip encryption accelerator for AIX. Previously this was handled in software; now it is done in hardware for better performance and efficiency. Power7+ also can handle 20 VMs per core, double the number for Power7. This allows system administrators to make VM partitions, especially development partitions, quite small (just 5% of a core). With energy enhancements, it also delivers 5x more performance per watt. New power gating also allows the chip to be configured in a variety of ways. The upshot: more flexibility.
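As a back-of-the-envelope check on what that VM density means in practice, here is a minimal sketch using only the figures cited above (20 VMs per Power7+ core, partitions as small as 5% of a core, 8 cores per chip). It is illustrative arithmetic, not an IBM sizing tool.

```python
CORES_PER_CHIP = 8      # Power7+ core count cited above
VMS_PER_CORE = 20       # double the Power7 limit, per the announcement
MIN_PARTITION = 0.05    # smallest partition: 5% of a core

max_vms_per_chip = CORES_PER_CHIP * VMS_PER_CORE
print(f"Max VMs per 8-core Power7+ chip: {max_vms_per_chip}")   # 160

# At the minimum partition size, 20 partitions consume exactly one core.
print(f"Cores consumed by 20 minimum-size partitions: {20 * MIN_PARTITION:.1f}")   # 1.0
```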

Other new capabilities include elastic Capacity on Demand (CoD) and Power System Pools, which work hand in hand. Depending on the server model, you can create what amounts to a huge pool of shared system resources, either permanently or temporarily. IBM has offered versions of capacity on demand for years, but they typically entailed elaborate setup and cumbersome activation routines to make the capacity available. Again, depending on the model, IBM is promising more flexible CoD and easier activation, referring to it as instant elasticity. If it works as described, you should be able to turn multiple Power servers into a massive shared resource. Combine these capabilities to create a private cloud based on these new servers and you could end up with a rapidly expandable private cloud. Usually, it would take a hybrid cloud for that kind of expansion, and even that is not necessarily simple to set up.
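To illustrate the idea, and only the idea, the sketch below treats a pool of Power servers as permanently activated capacity plus a reserve that can be switched on temporarily when demand spikes and switched off when it subsides. This is a hypothetical toy model, not IBM’s CoD implementation, pricing, or activation mechanism.

```python
class CapacityPool:
    """Toy model of a shared server pool with elastic capacity on demand."""

    def __init__(self, permanent_cores, reserve_cores):
        self.permanent = permanent_cores   # always-on, already-activated cores
        self.reserve = reserve_cores       # installed but inactive cores
        self.temporarily_active = 0        # reserve cores switched on right now

    def demand(self, cores_needed):
        """Activate just enough reserve capacity to cover a spike."""
        shortfall = max(0, cores_needed - self.permanent)
        self.temporarily_active = min(shortfall, self.reserve)
        return self.permanent + self.temporarily_active

    def release(self):
        """Spike over: return the pool to its permanent footprint."""
        self.temporarily_active = 0


pool = CapacityPool(permanent_cores=256, reserve_cores=128)
print(pool.demand(300))   # 300: 44 reserve cores activated to cover the spike
pool.release()
print(pool.demand(200))   # 256: permanent capacity alone covers this demand
```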

There are, however, limitations to elastic CoD and Power System Pools. An initial quantity of CoD credits is offered only with new Power 795 and Power 780 (a Power7+ machine) servers. There also is a limit of 10 Power 795 and/or 780 servers in a pool.

Enterprises are just starting to familiarize themselves with SSD, what it can do for them, and how best to deploy it. The EXP30 Ultra SSD I/O Drawer, scheduled for general release in November, should make it easier to include SSD in an enterprise infrastructure strategy using the GX++ bus. The 1U drawer can hold up to 30 SSD drives (387 GB each) in that small footprint. That’s a lot of resource in a tight space: 11.6 TB of capacity, 480,000 read IOPS, and 4.5 GB/s of aggregate bandwidth. IBM reports that it can cut batch window processing by up to 50% and reduce the number of HDDs by up to 10x. Plus, you can still attach up to 48 HDDs downstream for another 43 TB.
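The headline numbers are easy to sanity-check with the figures quoted above (30 drives at 387 GB, plus up to 48 downstream HDDs totaling roughly 43 TB); a quick sketch:

```python
ssd_drives = 30
ssd_capacity_gb = 387

ssd_total_tb = ssd_drives * ssd_capacity_gb / 1000
print(f"SSD capacity in the 1U drawer: {ssd_total_tb:.1f} TB")   # ~11.6 TB, as quoted

hdd_drives = 48
hdd_total_tb = 43   # as quoted above
print(f"Implied size per downstream HDD: {hdd_total_tb / hdd_drives * 1000:.0f} GB")   # ~900 GB
```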

And these just touch on some of what IBM packed into the Oct. 3 announcement. BottomlineIT will look at other pieces of the Power announcement, from enhancements to PowerVM to PowerSC for security and compliance, as well as the enhancements made to zEC12 software. Stay tuned.


Amid Poor Earnings HP Launches Gen8 Servers

The bad decisions HP has made, especially the decisions to kill WebOS and its tablet devices and to get out of the PC business, have finally hit home. The company’s 1Q2012 financials were dismal. Revenue was down 7% while earnings per share dropped 32%.

Still, don’t write off HP so quickly. Just last month HP announced a new line of servers, the HP ProLiant Generation 8 (Gen8). These servers represent an effort to redefine data center economics by automating every aspect of the server life cycle, and they introduce a new systems architecture, the HP ProActive Insight architecture, which will span the entire HP Converged Infrastructure. HP clearly continues to play the game.

In fact, HP is adding features into the Gen8 servers, such as integrated lifecycle automation that it estimates can save 30 days of admin time each year per admin; dynamic workload acceleration, which can boost performance 7x; and automated energy optimization, which HP promises will nearly double compute-per-watt capacity, thereby saving an estimated $7 million in energy costs in a typical data center over three years.

Compared to the HP results, IBM had a good quarter, announcing fourth-quarter 2011 diluted earnings of $4.62 per share, compared with diluted earnings of $4.18 per share in the fourth quarter of 2010, an increase of 11%. Fourth-quarter net income was $5.5 billion compared with $5.3 billion in the fourth quarter of 2010, an increase of 4%. Operating (non-GAAP) net income was $5.6 billion compared with $5.4 billion in the fourth quarter of 2010, an increase of 5%. All this despite a weak quarter for its hardware group, which reported revenues of $5.8 billion for the quarter, down 8% from Q4 2010. The group’s pre-tax income was $790 million, a decrease of 33% due mainly to unexpectedly weak mainframe sales following a streak of record setting mainframe quarterly gains.

Still, Gartner found IBM tops among all server vendors in Q4 2011 and #1 in the market for UNIX servers with 52.8% market share in that same quarter. IBM increased quarterly revenues by 17% year over year with IBM Power Systems and improved its share in comparison to Q4 2010 by 10.9%. For the full year of 2011, IBM led the UNIX server market with 45.9% market share, a gain of 6.9 points over 2010. IBM grew UNIX revenues by 23% over 2010, according to Gartner.

IBM also led the market for servers costing more than $250,000, attaining 69.4% factory revenue share in the fourth quarter with IBM System z mainframes and Power Systems. IBM also led this market for the full year of 2011 with 8% revenue growth over 2010, capturing 63.7% market share.

Meanwhile, IBM announced 570 competitive displacements in 4Q 2011 alone and nearly 2,400 competitive displacements in 2011 for its servers and storage systems. For Power, it had more than 350 competitive displacements in 4Q alone, which resulted in over $350 million of business. Roughly 60% of the displacements by Power came from HP. Overall, almost 40% of the 2,400 displacements came from HP and more than 25% came from Oracle/Sun, another company that has struggled to get its product strategy on track. IBM reports the competitive displacements in 2011 generated over $1 billion of business.

IBM spent much of 2010 optimizing its Power Systems lineup, the latest models optimized for data-intensive workloads, and buyers responded. The POWER7 processor offers 4, 6, or 8 cores per socket and up to four threads per core. With a 4.25 GHz top processor speed and an integrated eDRAM L3 cache, these systems can fly. In fact, IBM reports Power grew 6%, the fifteenth consecutive quarter of share gains.

In other achievements from IBM’s Systems Group, its x86 machines, the System x, scored a benchmark success with a world-record 4-processor result for Linux on the two-tier SAP Sales and Distribution (SD) standard application benchmark. This was achieved with an IBM System x3850 X5, running IBM DB2 9.7, Red Hat Enterprise Linux 6.2, and SAP enhancement package 4 for the SAP ERP application Release 6.0. Specifically, the x3850 X5 achieved 12,560 SAP SD benchmark users with 0.99 seconds average dialog response, 68,580 SAPS, measured throughput of 4,115,000 dialog steps per hour (or 1,371,670 fully processed line items per hour), and an average CPU utilization of 98% for the central server.
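Those throughput figures hang together arithmetically if you assume the standard SAP SD convention that 100 SAPS corresponds to 2,000 fully processed order line items (6,000 dialog steps) per hour; a quick check:

```python
saps = 68_580

dialog_steps_per_hour = saps * 60   # 6,000 dialog steps per hour per 100 SAPS
line_items_per_hour = saps * 20     # 2,000 line items per hour per 100 SAPS

print(dialog_steps_per_hour)   # 4,114,800 -- in line with the 4,115,000 reported
print(line_items_per_hour)     # 1,371,600 -- in line with the 1,371,670 reported
```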

Ironically, the previous best four-processor result, 12,204 SAP SD benchmark users on Linux, was achieved by the HP ProLiant DL580 G7. The new benchmark-winning IBM x3850 X5 was configured with four Intel Xeon E7-8870 processors at 2.40GHz with 30MB shared L3 cache per processor (4 processors/40 cores/80 threads), 512GB of memory, 64-bit DB2 9.7, Red Hat Enterprise Linux 6.2, and SAP enhancement package 4 for SAP ERP 6.0.

IBM is not hesitating to press its advantage.  Besides its Migration Factory to facilitate migration to IBM platforms, it announced services designed to help companies upgrade IT infrastructures in the face of technology challenges like exponentially larger data volumes, server sprawl, increasingly complex infrastructures, and flat budgets. These include new financing options for those wanting to migrate from HP or Oracle/Sun technologies, including 0% financing on two key IBM systems families: IBM Power Systems and IBM System Storage. Specifically, through March 2012, organizations in the US or Canada can finance (12-month full pay-out lease) between $5,000 and $1 million in Power Systems and/or System Storage technologies at 0%.

As BottomlineIT sees it, this kind of competition is only good for companies that depend on IT.


Next Up: Dynamic Data Warehousing

Enterprise data warehousing (EDW) has been around for well over a decade. IBM has long been promoting it across all its platforms. So have Oracle and HP and many others.

The traditional EDW, however, has been sidelined even at a time when data is exploding at a tremendous rate and new data types, from sensor data to smartphone and social media data to video, are becoming common. IBM recently projected a 44-fold increase in data and content, reaching 35 zettabytes by 2020. In short, the world of data has changed dramatically since organizations began building conventional data warehouses. Now the EDW should accommodate these new types of data and be flexible enough to handle rapidly changing forms of data.
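Taken at face value, a 44-fold increase reaching 35 zettabytes by 2020 implies a starting point of under one zettabyte and annual growth of roughly 40%, as the quick calculation below shows. The roughly 11-year horizon (a projection base year around 2009) is an assumption; IBM’s study does not appear in the post with an explicit start date.

```python
target_zb = 35       # projected total by 2020
growth_factor = 44   # 44-fold increase
years = 11           # assumption: projection base year around 2009

base_zb = target_zb / growth_factor
annual_growth = growth_factor ** (1 / years) - 1

print(f"Implied starting point: {base_zb:.2f} ZB")   # ~0.80 ZB
print(f"Implied annual growth:  {annual_growth:.0%}")   # ~41% per year
```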

Data warehousing as it is mainly practiced today is too complex, difficult to deploy, requires too much tuning, and is too inefficient when it comes to bringing in analytics, which delays delivering the answers from the EDW that business managers need, observed Phil Francisco,  VP at Netezza, an IBM acquisition that makes data warehouse appliances. And without fast analytics to deliver business insights, well, what’s the point?

In addition, the typical EDW requires too many people to maintain and administer, which makes it too costly, Francisco continued. Restructuring the conventional EDW to accommodate new data types and new data formats—in short, a new enterprise data model—is a mammoth undertaking that companies wisely shy away from. But IBM is moving beyond basic EDW to something Francisco describes as an enterprise data hub, which entails an enterprise data store surrounded by myriad special purpose data marts and special purpose processors for various analytics and such.

IBM’s recommendation: evolve the traditional enterprise data warehouse into what it calls the enterprise data hub, a more flexible systems architecture. This will entail consolidating the infrastructure and reducing the data mart sprawl. It also will simplify analytics, mainly by deploying analytic appliances like IBM’s Netezza. Finally, organizations will need data governance and lifecycle management, probably through automated policy-based controls. The result should be better information faster and delivered in a more flexible and cost-effective way.

Ultimately, IBM wants to see organizations build out this enterprise data hub with a variety of BI and analytic engines connected to it for analyzing streamed data and vast amounts of unstructured data of the type Hadoop has shown itself particularly good at handling. BottomlineIT wrote about Hadoop in the enterprise back in February here.

The payback from all of this, according to IBM, will be increased enterprise agility and faster deployment of analytics, which should result in increased business performance. The consolidated enterprise data warehouse also should lower the TCO for the EDW and speed time to business value. All desirable things, no doubt, but for many organizations this will require a gradual process and a significant investment in new tools and technologies, from specialized appliances to analytics.

Case in point is Florida Hospital, Orlando, which deployed a z10 mainframe with DB2 10, which provides enhanced temporal data capabilities, with the primary goal of converting its 15 years of clinical patient data into an analytical data warehouse for use in leading-edge medical and genetics research. The hospital’s plan calls for getting the data up and running on DB2 10 this year and attaching the Smart Analytics Optimizer as an appliance in Q1 2012. Then it can begin cranking up the research analytics. Top management has bought into this plan for now, but a lot can change in the next year, the earliest the first fruits of the hospital’s analytical medical data exploration are likely to hit.

Oracle has its own EDW success stories here. Hotwire, a leading discount travel site, for example, works with major travel providers to help them fill seats, hotel rooms, and rental cars that would otherwise go unsold. It deployed Oracle’s Exadata Database Machine to improve data warehouse performance and to scale for growing business needs.

IBM does not envision the enterprise data hub as a platform-specific effort. Although the EDW runs on IBM’s mainframe, much of the activity is steered to the company’s midsize UNIX/Linux Power Systems server platform. Oracle and HP offer x86-based EDW platforms, and HP is actively partnering with Microsoft on its EDW offering.

In an IBM study, 50% of business managers complained they don’t have the information they need to do their jobs, and 60% of CEOs admitted they need to do a better job of capturing and understanding information rapidly in order to make swift business decisions. That should be a signal to revamp your EDW now.


Open Source KVM Takes on the Hypervisor Leaders

The hypervisor—software that allocates and manages virtualized system resources—usually is the first thing that comes to mind when virtualization comes up. And when IT considers server virtualization the first option typically is VMware ESX, followed by Microsoft’s Hyper-V.

But that shouldn’t be the whole story. Even in the Windows/Intel world there are other hypervisors, such as Citrix Xen.  And IBM has had hypervisor technology for its mainframes for decades and for its Power systems since the late 1990s. A mainframe (System z) running IBM’s System z hypervisor, z/VM, can handle over 1000 virtual machines while delivering top performance and reliability.

So, it was significant when IBM announced in early May that it and Red Hat, an open source technology leader, are working together to make products built around the Kernel-based Virtual Machine (KVM) open source hypervisor for the enterprise. Jean Staten Healy, IBM’s Director of Worldwide Cross-IBM Linux, told IT industry analysts that the two companies together are committed to driving adoption of the open source virtualization technology through joint development projects and enablement of the KVM ecosystem.

Differentiating this approach from those taken by the current x86 virtualization leaders VMware and Microsoft is open source technology. An open source approach to virtualization, Healy noted, lowers costs, enables greater interoperability, and increases options through multiple sources.

The KVM open source hypervisor allows a business to create multiple virtual versions of Linux and Windows environments on the same server. Larger enterprises can take KVM-based products and combine them with comprehensive management capabilities to create highly scalable and reliable, fully cloud-capable systems that enable the consolidation and sharing of massive numbers of virtualized applications and servers.
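As a flavor of what creating multiple virtual machines on a KVM host looks like in practice, here is a minimal sketch using the libvirt Python bindings against a local KVM hypervisor. The guest name, memory size, vCPU count, and disk path are placeholder values; a production setup would add networking, storage pools, and proper error handling, and the disk image would have to exist before the guest could boot.

```python
import libvirt   # libvirt-python bindings; requires libvirtd running on a KVM host

# Minimal domain definition: name, memory, vCPUs, and one disk (placeholder path).
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-linux-guest</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-linux-guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
domain = conn.defineXML(DOMAIN_XML)     # register the guest with libvirt
domain.create()                         # boot it

print([d.name() for d in conn.listAllDomains()])   # all guests on this host
conn.close()
```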

Red Hat Enterprise Virtualization, for example, was designed for large scale datacenter virtualization by pairing its centralized virtualization management system and advanced features with the KVM hypervisor. BottomlineIT looked at the Red Hat open source approach a few weeks ago, here.

The open source approach to virtualization also is starting to gain traction. To that end Red Hat, IBM, BMC, HP, Intel, and others joined to form the Open Virtualization Alliance. Its goal is to facilitate the adoption of open virtualization technologies, especially KVM. It intends to do this by promoting examples of customer successes, encouraging interoperability, and accelerating the expansion of the ecosystem of third party solutions around KVM. A growing and robust ecosystem around KVM is essential if the open source hypervisor is to effectively rival VMware and Microsoft.

Healy introduced what might be considered the Alliance’s first KVM enterprise-scale success story, IBM’s own Research Compute Cloud (RC2), which is the first large-scale cloud deployed within IBM. In addition to being a proving ground for KVM, RC2 also handles actual IBM internal chargeback based on charges-per-VM hour across IBM. That’s real business work.

RC2 runs over 200 iDataplex nodes, an IBM x86 product, using KVM (90% memory utilization/node). It runs 2000 concurrent instances, is used by thousands of IBM employees worldwide, and provides 100TB of block storage attached to KVM instances via a storage cloud.

KVM was chosen not only to demonstrate the open source hypervisor but because it was particularly well suited to the enterprise challenge. It provides a predictable and familiar environment that requires no additional skills, auditable security compliance, and an open source licensing model that keeps costs down and should prove cost-effective for large-scale cloud use, which won’t be long in coming. The RC2 team, it seems, already is preparing live migration plans for support of federated clouds. Stay tuned.


Software Problem Solving for Private Clouds

First fault software problem solving (FFSPS) is an old mainframe approach that calls for solving problems as soon as they occur. It’s an approach that has gone out of favor except in classic mainframe data centers, but it may be worth reviving as the IT industry moves toward cloud computing and especially private clouds, for which the zEnterprise (z196 and zBX) is particularly well suited.

The point of Dan Skwire’s book First Fault Software Problem Solving: Guide for Engineers, Managers, and Users, is that FFSPS is an effective approach even today. Troubleshooting after a problem has occurred is time-consuming, more costly, inefficient, and often unsuccessful. Complicating troubleshooting typically is lack of information. As Skwire  notes: if you have to start troubleshooting after the problem occurs, the odds indicate you will not solve the problem, and along the way, you consume valuable time, extra hardware and software, and other measurable resources.

The FFSPS trick is to capture problem-solving data from the start. This is what mainframe data centers did routinely. Specifically, they used trace tables and included recovery routines. This continues to be the case with z/OS today. Full disclosure: I’m a fan of mainframe computers and Power Systems and follow both regularly in my independent mainframe blog, DancingDinosaur.
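The mainframe-style trace table is easy to approximate in application code. Below is a minimal, hypothetical sketch (not taken from Skwire’s book) of a fixed-size in-memory trace that is written continuously during normal operation and dumped the moment the first fault occurs, so the diagnostic data is already there when the problem happens.

```python
import time
from collections import deque

class TraceTable:
    """Fixed-size ring buffer of recent events, dumped on first fault (FFSPS style)."""

    def __init__(self, size=256):
        self.entries = deque(maxlen=size)   # old entries age out automatically

    def trace(self, event, **details):
        self.entries.append((time.time(), event, details))

    def dump(self, reason):
        print(f"--- first-fault dump: {reason} ---")
        for ts, event, details in self.entries:
            print(f"{ts:.3f}  {event}  {details}")


trace = TraceTable()

def process_order(order_id, qty):
    trace.trace("process_order.start", order_id=order_id, qty=qty)
    try:
        unit_price = 100 / qty   # fails the first time qty == 0
        trace.trace("process_order.priced", unit_price=unit_price)
    except Exception as exc:
        trace.dump(reason=repr(exc))   # capture recent history at the first fault
        raise

process_order("A-17", qty=4)
process_order("A-18", qty=0)   # triggers the dump with the full recent history
```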

So why should IT managers today care about mainframe disciplines like FFSPS? Skwire’s answer: there surely will be greater customer satisfaction if you solve and repair the customer‘s problem, or if he is empowered to solve and repair his own problem rapidly. Another reason is risk minimization.

Skwire also likes to talk about System YUK. You probably have a few System YUKs in your shop. What’s System YUK? As Skwire explains, System YUK is very complex. It makes many decisions and analyzes much data. However, the only means it has of conveying an error is a single message to the operator console: SYSTEM HAS DETECTED AN ERROR, which is not particularly helpful. System YUK has no trace table or FFSPS tools. To diagnose problems in YUK you must re-create the environment in your YUK test-bed and add instrumentation (write statements, traces, etc.) and various tools to get a decent explanation of problems with YUK, or set up some second-fault tool to capture more and better data on the production System YUK, which is high risk.

Toward the end of the book Skwire gets into what you can do about System YUK. It amounts to a call for defensive programming. He then introduces a variety of tools to troubleshoot and fix software problems. These include ServiceLink by Axeda, AlarmPoint Systems, LogLogic, IBM Tivoli Performance Analyzer, and CA Technologies’ Wily Introscope.

With the industry gravitating toward private clouds as a way to efficiently deliver IT as a flexible service, the disciplined methodologies that continue to keep the mainframe a still critical platform in large enterprises will be worth adopting.  FFSPS should be one in particular to keep in mind.


Commercializing IBM’s Watson Technology

IBM’s Watson-Jeopardy challenge proved to be entertaining theater, but it left hanging the question of how a business could capitalize on Watson technology. IBM almost immediately began suggesting workloads that would apply in healthcare, finance, and elsewhere. BottomlineIT discussed some of these back in early March.

The scale of the hardware Watson required for Jeopardy, however, went beyond what most businesses could or would acquire. Not many businesses are likely to configure 90 tightly integrated IBM Power 750 servers containing 2880 POWER7 processor cores and 15TB of onboard memory for a single workload, as was the configuration for Watson when it won Jeopardy.
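For perspective, the per-server arithmetic behind that configuration works out as follows, using straight division of the figures cited above:

```python
servers = 90
total_cores = 2880
total_memory_tb = 15

print(total_cores // servers)                    # 32 POWER7 cores per Power 750
print(round(total_memory_tb * 1024 / servers))   # ~171 GB of memory per server
```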

This week we got the answer. IBM introduced new optimized POWER7 products, both upgraded servers and new blades, capable of running Watson-like workloads. These products, according to IBM, actually provide a performance kick beyond the Power 750 used with Watson.

To do what Watson did—process complex natural language queries and come up with the right answer extremely fast—is not a job for wimpy servers. It still requires high end servers, not commodity x86 Wintel machines.

IBM leads the high end UNIX server market, growing revenue in that segment by 12%, according to IDC. The researcher’s overall discussion of the Q4 2010 worldwide server market is a little more nuanced vis-à-vis HP than IBM presents, but IDC still declares IBM the leader in the worldwide server systems market with 37.4% market share in factory revenue for 4Q10.

But the real questions revolve around when and how commercial businesses are going to deploy Watson technology. Along with announcing the new POWER7 servers, IBM introduced an early adopter tapping the new POWER7 technology, if not exactly the Watson capabilities.

That business, RPM Technologies, provides wealth management software to some of the largest banks and financial services companies in Canada. In terms of the new POWER7 technology, “POWER7 chips along with AIX 6.1 provided a big boost to the batch and threading speed of our products,” said RPM’s chief architect. With POWER7 chips, batch job runtimes improved by upwards of 35% while using fewer resources, he reported. As part of the upgrade, RPM also moved to a fully virtualized environment across two POWER7 16-core P750 machines, which reduced the time and effort to manage the boxes.

Another early adopter, the University of Massachusetts-Dartmouth, may be more on track to tap Watson-like capabilities. The school’s researchers are using two IBM POWER7 blades to study the effect of cosmic disturbances, called gravitational waves, on black holes in space.

“We are running billions of intense calculations on the POWER7 blades… able to get results as much as eight times faster than running the same calculations on an Intel Xeon processor. Calculations that used to take a month to run are now finished in less than a week”, reported Gaurav Khanna, professor of physics at UMass-Dartmouth. Not fast enough to win Jeopardy but impressive nonetheless.

The new POWER7 products include the following enhancements:

  • Enhanced IBM Power 750 Express—the same system that powers Watson—further optimized with a faster POWER7 processor delivering more than three times the performance of comparable 32-core servers.
  • 16-core, single-wide IBM BladeCenter PS703 and 32-core, double-wide IBM BladeCenter PS704 blade servers, which provide an alternative to sprawling racks.
  • Enhanced IBM Power 755, a high-performance computing cluster node with 32 POWER7 cores and a faster processor.

Along with the servers, IBM announced new switches closely integrated with its Power servers to support workloads such as cloud computing, financial services, Web 2.0, streaming video, medical and scientific research, and business analytics. According to a recent report by The Tolly Group, the new IBM switches demonstrated an average of 55% better price/performance over comparable switches.

So, everyone is still waiting for a business user that actually is tapping Watson-like capabilities to address a business problem. It will happen. As you know, it takes time to get systems implemented, tested, and put into production.  Stay tuned.
