Archive for January, 2012

Cost per Workload—the New Platform Metric

How best to understand what a computer costs? Total cost of acquisition (TCA) is the price you pay to have it land on the loading dock and get it up and running doing real work. That's the lowest figure, but it does not reflect what a computer actually costs. Total cost of ownership (TCO) takes the cost of acquisition and adds the cost of maintenance, support, integration, infrastructure, power and cooling, and more over three to five years. Needless to say, TCO is higher but more realistic.

BottomlineIT generally shifts the platform cost discussion to total cost of ownership (TCO) or Fit for Purpose, an IBM approach that looks at the task to which the machine is being applied: the workload. That puts the cost discussion into the context not just of the hardware and software, or of all the additional requirements, but of what you need to achieve what you're trying to do. Nobody buys computers at this level for the fun of it.

John Shedletsky, IBM VP of competitive technology, has been dissecting the cost of IBM platforms—the zEnterprise, Power Systems, and distributed x86 platforms—in terms of the workloads being run. It makes sense; different workloads have different requirements for response time, throughput, availability, security, and any number of other attributes, and will benefit from different machines and configurations.

Most recently, Shedletsky introduced a new workload benchmark for business analytic reports executed in a typical day, called the BI Day Benchmark. Based on Cognos workloads, it looks at the number of queries generated; characterizes them as simple, intermediate, or complex; and scores them in terms of response time, throughput, or an aggregate measure. You can use the resulting data to calculate a cost per workload.
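
The mechanics of such a benchmark are simple to sketch. The complexity classes, weights, and aggregate formula below are invented for illustration only; IBM has not published the actual BI Day scoring details:

```python
# Toy sketch of a BI Day-style scoring model. Buckets, weights, and the
# aggregate formula are hypothetical, not IBM's actual methodology.

# Each query run records its complexity class and response time (seconds).
runs = [
    ("simple", 0.4), ("simple", 0.6), ("intermediate", 2.1),
    ("intermediate", 1.8), ("complex", 9.5),
]

# Hypothetical weights reflecting how much work each class represents.
WEIGHTS = {"simple": 1, "intermediate": 3, "complex": 10}

def aggregate_score(runs):
    """Weighted throughput: total weighted query credit per second of run time."""
    total_weight = sum(WEIGHTS[c] for c, _ in runs)
    total_time = sum(t for _, t in runs)
    return total_weight / total_time

score = aggregate_score(runs)
print(f"aggregate score: {score:.2f} weighted queries/sec")
```

Once a score and a full system cost are in hand, cost per workload falls out by division.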

BottomlineIT, as a matter of policy, steers clear of proprietary benchmarks like BI Day.  It is just too difficult to normalize the results across all the variables that can be fudged, making it next to impossible to come up with repeatable results.

A set of cost per workload analyses Shedletsky published back in March here avoids the pitfalls of a proprietary benchmark.  In these analyses he pitted a zEnterprise with a zBX against POWER7 and Intel machines all running multi-core blades.  One analysis looked at running 500 heavy workloads. The hardware and software cost for a system consisting of 56 Intel Blades (8 cores per blade) for a total of 448 cores came to $11.5 million, which worked out to $23k per workload. On the zEnterprise running 192 total cores, the total hardware/software cost was $7.4 million for a cost per workload of $15k. Click on Shedletsky’s report for all the fine print.

Another interesting workload analysis looked at running 28 front-end applications. Here he pitted 28 competitive app server applications running on 57 SPARC T3-1B blades (936 cores in total) at a hardware/software cost of $11.7 million against WebSphere App Server running on 28 POWER7 blades plus two DataPower blades in the zBX (224 cores in total) at a hardware/software cost of $4.9 million. Per workload, the zEnterprise cost 58% less. Again, click on Shedletsky's report above for the fine print.
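
The arithmetic behind both comparisons is easy to reproduce from the published figures:

```python
# Reproduce the cost-per-workload arithmetic from Shedletsky's analyses.

def cost_per_workload(total_cost, workloads):
    return total_cost / workloads

# Analysis 1: 500 heavy workloads.
intel = cost_per_workload(11_500_000, 500)   # 56 Intel blades, 448 cores
z     = cost_per_workload(7_400_000, 500)    # zEnterprise, 192 cores
print(f"Intel: ${intel:,.0f}/workload, zEnterprise: ${z:,.0f}/workload")
# → Intel: $23,000/workload, zEnterprise: $14,800/workload

# Analysis 2: 28 front-end applications; compare per-workload costs directly.
sparc = cost_per_workload(11_700_000, 28)    # 57 SPARC T3-1B blades
zbx   = cost_per_workload(4_900_000, 28)     # POWER7 + DataPower blades in zBX
savings = 1 - zbx / sparc
print(f"zEnterprise costs {savings:.0%} less per workload")
# → zEnterprise costs 58% less per workload
```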

Not all of Shedletsky's analyses come out in favor of IBM's zEnterprise or even POWER7 systems. Where they do, however, he makes an interesting observation: since his analyses typically include the full cost of ownership, where z comes out ahead the difference often is not better platform performance but the cost of labor. He notes that the zEnterprise's consistent, structured management practices combine to lower labor costs.

If fewer people can manage all those blades and cores from a single unified console, the zEnterprise Unified Resource Manager, rather than multiple people learning multiple tools to achieve a comparable level of management, that has to lower the overall cost of operations and the cost per workload. As much as some may complain that the entry-level zEnterprise, the z114, still starts at $75,000, good administrators cost that much or more.

Shedletsky’s BI Day benchmark may never catch on, but he is correct in that to understand a system’s true cost you have to look at the cost per workload. That is almost sure to lead you to hybrid computing and, particularly, the zEnterprise where you can mix platforms for different workloads running concurrently and manage them all in a structured, consistent way.


OVA and oVirt Drive KVM Success

In the x86 world VMware is the 900-pound hypervisor gorilla. Even Microsoft's Hyper-V takes a back seat to the VMware hypervisor. KVM, however, is gaining traction as an open source alternative. As an open source product, it brings the advantages of portability, customizability, and low cost.

In terms of overall platform virtualization, the Linux world may or may not be lagging behind Windows in the rate of server virtualization, depending on which studies you have been reading. Regardless, with IBM and Red Hat getting behind the KVM hypervisor in a big way last year, the pace of Linux server virtualization should pick up.

Stewardship of KVM today is being turned over to the Open Virtualization Alliance (OVA), which has made significant gains in attracting participation since its launch last spring. It currently boasts over 240 members, up from the couple of dozen it had when BottomlineIT last looked at it months ago.

The OVA also has been bolstered by an open virtualization development organization, the oVirt Project here. Its founding partners include Canonical, Cisco, IBM, Intel, NetApp, Red Hat, and SUSE. The founders promise to deliver a truly open source, openly governed, integrated virtualization stack. The oVirt team aims to deliver both a cohesive stack and discretely reusable components for open virtualization management, which should become key building blocks for private and public cloud deployments.

The oVirt Project bills itself as an open virtualization project providing a feature-rich server virtualization management system with advanced capabilities for hosts and guests, including high availability, live migration, storage management, system scheduling, and more. The oVirt goal is to develop a broad ecosystem of tools that make up a complete integrated platform and to deliver them on a well-defined release schedule. These are components designed and tested to work together, and oVirt should become a central venue for user and developer cooperation.

The idea around OVA and oVirt is that effective enterprise virtualization requires more than just a hypervisor, noted Jean Staten Healy, IBM Director, Worldwide Cross-IBM Linux and Open Virtualization, at a recent briefing. In addition to a feature-rich hypervisor like KVM, Healy cited the need for well-defined APIs at all layers of the stack, readily accessible (reasonably priced) systems and tools, a corresponding feature-rich, heterogeneous management platform, and a robust ecosystem to extend the open hypervisor and management platform, all of which oVirt is tackling.

Now KVM and the OVA just need success cases to demonstrate the technology. Initially, IBM provided the core case experience with its Research Compute Cloud (RC2). RC2 runs over 200 iDataPlex nodes, an IBM x86 product, using KVM. It handles 2,000 concurrent instances, is used by thousands of IBM employees worldwide, and provides 100TB of block storage attached to KVM instances via a storage cloud. RC2 also handles actual IBM internal chargeback, billed per VM-hour across IBM.
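
Per-VM-hour chargeback of the RC2 sort is easy to sketch in outline. The instance sizes and rates below are invented for illustration; IBM has not published its internal pricing:

```python
# Toy per-VM-hour chargeback calculator in the style of RC2's internal
# billing. Instance sizes and dollar rates are hypothetical.

RATES = {"small": 0.05, "medium": 0.10, "large": 0.25}  # $ per VM-hour

def monthly_charge(usage):
    """usage: list of (instance_size, hours_run) tuples for one account."""
    return sum(RATES[size] * hours for size, hours in usage)

# Example: one department's month of usage.
dept_usage = [("small", 720), ("medium", 720), ("large", 100)]
print(f"monthly chargeback: ${monthly_charge(dept_usage):.2f}")
```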

Today IBM is using KVM with the System x blades in the zBX. It also supports KVM as a tier 1 virtualization technology with IBM Systems Director VMControl and Tivoli systems management products. On System x, KVM delivered 18% better virtual machine consolidation in a SPECvirt_sc2010 benchmark test.

Recently KVM was adopted by DutchCloud, a leading ISP in the Netherlands. DutchCloud is a cloud-based IaaS provider; companies choose it for quality of service, reliability, and low price.

DutchCloud opted for IBM SmartCloud Provisioning as its core delivery platform across multiple server and storage nodes, with KVM as the hypervisor for virtual machines. KVM offers both minimal licensing costs and the ability to support mixed (KVM and VMware) deployments. IBM's Systems Director VMControl provides heterogeneous virtual machine management. The combination of KVM and SmartCloud Provisioning enables DutchCloud to provision hundreds of customer virtual machines in a few minutes and to ensure isolation through effective multi-tenancy. And because SmartCloud Provisioning can communicate directly with the KVM hypervisor, DutchCloud avoids having to license additional management components.
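
Communicating directly with a KVM hypervisor typically means going through libvirt, which accepts guest definitions as XML. The fragment below is a minimal, hypothetical domain definition of the kind a provisioning layer might generate for one tenant VM; all names, paths, and sizes are illustrative:

```xml
<!-- Minimal libvirt domain definition for one KVM guest (illustrative). -->
<domain type='kvm'>
  <name>tenant-042-web01</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- Each tenant gets its own disk image, one basis for isolation. -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/tenant-042-web01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- Attach the guest to a tenant-specific virtual network. -->
    <interface type='network'>
      <source network='tenant-042-net'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```

A management stack registers such a definition with `virsh define` and boots it with `virsh start`, or does the same through the libvirt API, with no per-VM management licensing in between.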

KVM is primarily a distributed x86 Linux platform and cloud play. It may, however, make its way into IBM’s zEnterprise environments through the zBX as the hypervisor for the x86 (IBM eX5) blades residing there.


Predictive Analysis on the Mainframe

Real-time, high-volume predictive analysis has become a hot topic with the growing interest in Big Data. The consulting firm McKinsey addresses the growth of Big Data here. McKinsey's gurus note that advancing technologies and their swift adoption are upending traditional business models. BottomlineIT also took up Big Data in October 2011.

With that in mind, IBM has been positioning the zEnterprise, its latest mainframe, for a key role in data analysis.  To that end, it acquired SPSS and Cognos and made sure they ran on the mainframe. The growing interest in Big Data and real-time data analytics fueled by reports like that above from McKinsey only affirmed IBM’s belief that as far as data analytics goes the zEnterprise is poised to take the spotlight. This is not completely new; BottomlineIT’s sister blog, DancingDinosaur, addressed it back in October 2009.

For the last several decades people would have laughed if a CIO suggested a mainframe for data analysis beyond standard canned system reporting. For ad hoc querying, multi-dimensional analysis, and data visualization you needed distributed systems running a variety of specialized GUI tools. Even then, the resulting queries could take days to run.

In a recent analyst briefing, Alan Meyer, IBM’s senior manager for Data Warehousing on the System z, built the case for a different style of data analysis on the zEnterprise. He drew a picture of companies needing to make better informed decisions at the point of engagement while applications and business users increasingly are demanding the latest data faster than ever. At the same time there is no letup in pressure to lower cost, reduce complexity, and improve efficiency.

So what’s stopping companies from doing near real-time analytics and the big data thing? The culprits, according to Meyer, are duplicate data infrastructures, the complexity of integrating multiple IT environments, inconsistent security, and insufficient processing power, especially when having to handle large volumes of data fast. The old approach clearly is too slow and costly.

The zEnterprise, it turns out, is an ideal vehicle for today's demanding analytics. It is architected for on-demand processing through pre-installed capacity that is paid for only when activated, allowing processors, disk, and memory to be added without taking the system offline. Virtualized top to bottom, the zEnterprise delivers the desired isolation, while prioritization controls let you identify the most critical queries and workloads. Its industry-leading processors ensure that the most complex queries run fast, and low latency enables near real-time analysis. Finally, multiple deployment options mean you can start with a low-end z114 and grow to a fully configured z196 combined with a zBX loaded with blades.

Last October the company unveiled the IBM DB2 Analytics Accelerator (IDAA), a revamped version of the Smart Analytics Optimizer available only for the zEnterprise, along with a host of other analytics tools under the smarter computing banner. But the IDAA is IBM's analytics crown jewel. The IDAA incorporates Netezza, an analytics engine that speeds complex analytics through in-memory processing combined with a highly intelligent query optimizer. When run in conjunction with DB2 also residing on the zEnterprise, the results can be astonishing, with queries that normally require a few hours completed in just a few seconds, 1000 times faster according to some early users.
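
The core idea is that the optimizer decides per query whether to run it natively in DB2 or push it to the accelerator. As a toy illustration only (the real routing logic is far more sophisticated and internal to DB2), the decision amounts to something like:

```python
# Toy sketch of accelerator routing: offload long-running analytic
# queries, keep short transactional ones local. The cutoff and speedup
# factor are illustrative, not DB2 internals.

ACCEL_SPEEDUP = 1000          # early users reported roughly 1000x on some queries
ROUTE_THRESHOLD_SEC = 60      # hypothetical cutoff for offloading

def route(estimated_runtime_sec):
    """Return (target, expected_runtime_sec) for a query."""
    if estimated_runtime_sec > ROUTE_THRESHOLD_SEC:
        return "accelerator", estimated_runtime_sec / ACCEL_SPEEDUP
    return "db2", estimated_runtime_sec

# A two-hour report query drops to roughly 7 seconds on the accelerator.
target, runtime = route(2 * 60 * 60)
print(target, f"{runtime:.1f}s")
# → accelerator 7.2s
```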

Netezza, when deployed as an appliance, streamlines database performance through hardware acceleration and optimization for deep analytics, multifaceted reporting, and complex queries. When embedded in the zEnterprise, it delivers the same kind of performance for mixed workloads—operational transaction systems, data warehouse, operational data stores, and consolidated data marts—but with the z’s extremely high availability, security, and recoverability. As a natural extension of the zEnterprise, where the data already resides in DB2 and the OLTP systems, IDAA is able to deliver pervasive analytics across the organization while further speeding performance and ease of deployment and administration.

Of course, IT has other options: EMC now offers its Greenplum data analytics appliance and Oracle just released its Big Data Appliance. Neither requires a mainframe. But when the price of the latest general-purpose mainframe starts at $75,000 and the machine can do much more than any appliance, maybe it's time to consider one. The value of the predictive, near real-time business analytics alone could justify it.


Technology Trends for 2012

The big technology trends in 2012 will be extensions of trends that began in 2011 or earlier.  For example, BottomlineIT noted the Consumerization of IT  back in September. Expect it to pick up speed in 2012. Similarly you read about The Internet of Things here back in February. That too will drive technology trends in 2012.

The big IT research firms published their trends projections for 2012. You can find Gartner’s here.  Maybe more interesting to a CIO will be IDC’s security trends for 2012 here.

The tech trends below are based on the numerous vendor briefings and conferences BottomlineIT attends, as well as conversations with dozens of IT and business managers. Most shouldn't surprise you if you have been reading BottomlineIT, but a few might.

Here are the technology trends for 2012:

BYOD—mainly smartphones, plus other devices. The twist is the growing adoption of Bring-Your-Own-Device (BYOD) policies, in which workers are encouraged to bring their personal smartphones to work while IT is asked to support a range of popular devices, selectively open interfaces to data and applications, and insist on a certain level of security, such as data encryption. The business will have to resolve reimbursement issues; current policies range from covering nothing to covering everything.

Social Networking for Business—will only grow in the coming year.  Social networking is the way the next generation of workers live and increasingly work.  Businesses will want to identify and capitalize on opportunities in social networking starting with collaboration.

The Internet of Things—the digital transformation of the economy continues as chips are embedded in more things from consumer appliances to packaging materials, allowing companies to meter and monitor processes and activity. RFID is just the start. Watch for more digital instrumentation appearing.

Automated, Real-time Data Analytics—part of the Big Data trend. Expect growing adoption of advanced data analytics, increasingly automated to keep up with the high volume and operating in near-real time to allow dynamic, data-based decision-making. And the analytics will be baked in, relieving the business of having to maintain a stable of PhD quants.

Biometric Authentication—passwords provide poor security. Watch for increased adoption of biometrics in the form of fingerprints, retina scans, facial/voice recognition, and the like to replace passwords for authentication.

The Cloud goes Mainstream—most companies will develop a cloud strategy at some level, whether for cloud backup, SaaS, augmenting existing capabilities, or something else.

Virtualized Enterprise—look for increasing virtualization of every digital aspect of the enterprise, from data networking to voice communications.

Solid state memory for storage—in one form or another solid state memory will be an increasing part of almost every storage strategy as costs continue to drop and vendors get better at integrating it into the products to boost performance.

Further out:

Electronic Wallets—smart devices, including smartphones, used for almost anything from buying a can of soda to proving who you are. Big vendors already are fighting over who provides the e-wallet. Think you worry about security now? This merits close scrutiny.

Geo-Location—between smart devices and GPS look for businesses increasingly to take advantage of geographic data, first for marketing (combined with QR codes) and then much more.

In-memory Computing—keeping data in memory alongside the processing rather than on disk speeds performance. Expect to see entire databases processed in memory.

Gamification—applying aspects of computer gaming to business software offers the possibility of more compelling and engaging business applications.  Could ERP be improved through gamification? For sure.

However things shake out, 2012 should be an interesting year for technology, and BottomlineIT will stay on top of it.
