Archive for May, 2011

Open Source KVM Takes on the Hypervisor Leaders

The hypervisor, the software that allocates and manages virtualized system resources, usually is the first thing that comes to mind when virtualization is discussed. And when IT considers server virtualization, the first option typically is VMware ESX, followed by Microsoft’s Hyper-V.

But that shouldn’t be the whole story. Even in the Windows/Intel world there are other hypervisors, such as Citrix XenServer, built on the open source Xen hypervisor. And IBM has had hypervisor technology for its mainframes for decades and for its Power systems since the late 1990s. A mainframe (System z) running IBM’s System z hypervisor, z/VM, can handle over 1,000 virtual machines while delivering top performance and reliability.

So, it was significant when IBM announced in early May that it and Red Hat, an open source technology leader, are working together to build enterprise products around the Kernel-based Virtual Machine (KVM) open source hypervisor. Jean Staten Healy, IBM’s Director of Worldwide Cross-IBM Linux, told IT industry analysts that the two companies are committed to driving adoption of the open source virtualization technology through joint development projects and enablement of the KVM ecosystem.

What differentiates this approach from those of the current x86 virtualization leaders, VMware and Microsoft, is open source technology. An open source approach to virtualization, Healy noted, lowers costs, enables greater interoperability, and increases options through multiple sources.

The KVM open source hypervisor allows a business to create multiple virtual versions of Linux and Windows environments on the same server. Larger enterprises can take KVM-based products and combine them with comprehensive management capabilities to create highly scalable and reliable, fully cloud-capable systems that enable the consolidation and sharing of massive numbers of virtualized applications and servers.
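In practice, KVM guests are typically defined and started through the libvirt management layer. The sketch below, using the libvirt Python bindings, shows the basic idea; the domain XML, guest name, and disk path are illustrative assumptions, not any vendor’s actual configuration.

```python
# Minimal sketch: define and start a KVM guest through libvirt.
# Assumes the libvirt-python bindings and a local qemu:///system
# connection; the XML is an illustrative example, not a production config.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-guest</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')  # connect to the local KVM host
dom = conn.defineXML(DOMAIN_XML)       # register the guest definition
dom.create()                           # boot the virtual machine
print(dom.name(), 'running:', bool(dom.isActive()))
conn.close()
```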

Red Hat Enterprise Virtualization, for example, was designed for large-scale datacenter virtualization, pairing a centralized virtualization management system and advanced features with the KVM hypervisor. BottomlineIT looked at the Red Hat open source approach a few weeks ago, here.

The open source approach to virtualization also is starting to gain traction. To that end, Red Hat, IBM, BMC, HP, Intel, and others joined to form the Open Virtualization Alliance. Its goal is to facilitate the adoption of open virtualization technologies, especially KVM. It intends to do this by promoting examples of customer successes, encouraging interoperability, and accelerating the expansion of the ecosystem of third-party solutions around KVM. A growing and robust ecosystem around KVM is essential if the open source hypervisor is to effectively rival VMware and Microsoft.

Healy introduced what might be considered the Alliance’s first enterprise-scale KVM success story, IBM’s own Research Compute Cloud (RC2), the first large-scale cloud deployed within IBM. In addition to being a proving ground for KVM, RC2 handles actual internal chargeback across IBM, billed per VM-hour. That’s real business work.

RC2 runs more than 200 iDataPlex nodes, an IBM x86 product, using KVM (90% memory utilization per node). It runs 2,000 concurrent instances, is used by thousands of IBM employees worldwide, and provides 100TB of block storage attached to KVM instances via a storage cloud.
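IBM has not published RC2’s internal rating details, but the charges-per-VM-hour model it describes is easy to picture. A toy sketch, with entirely hypothetical rates and usage records:

```python
# Toy sketch of per-VM-hour chargeback of the kind RC2 performs.
# The rates and usage records below are hypothetical, purely illustrative.
from collections import defaultdict

RATE_PER_VM_HOUR = {'small': 0.05, 'medium': 0.10, 'large': 0.20}

# (department, instance size, hours run) -- hypothetical usage log
usage = [
    ('research', 'small', 720),
    ('research', 'large', 96),
    ('development', 'medium', 300),
]

invoice = defaultdict(float)
for dept, size, hours in usage:
    invoice[dept] += hours * RATE_PER_VM_HOUR[size]

for dept, total in sorted(invoice.items()):
    print(f'{dept}: ${total:.2f}')
```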

KVM was chosen not only to demonstrate the open source hypervisor but also because it was particularly well suited to the enterprise challenge. It provides a predictable and familiar environment that requires no additional skills, auditable security compliance, and an open source licensing model that keeps costs down and will prove cost-effective for large-scale cloud use, which won’t be long in coming. The RC2 team, it seems, is already preparing live migration plans to support federated clouds. Stay tuned.


Readying a Private Cloud

Cloud computing dominates the IT conversation today, but what type of cloud (public, private, or hybrid) is right for your organization, and how do you get there? “It had been clear that the hybrid approach to cloud computing is of most interest for enterprises,” wrote Czaroma Roman in Business Week earlier this month.

Or maybe not. According to IDC: As IT executives look for ways to systematically reduce costs, provide faster time to value, and improve reliability, they are turning to cloud computing and the development of private cloud capabilities from within their datacenters. Click here and scroll down to access the IDC report.

An IBM cloud workload adoption study found 60% of respondents were adopting private clouds while 30% were opting for public clouds for certain workloads. Driving this interest in cloud computing is the need for a more flexible IT infrastructure and a new IT delivery model inspired by consumer Internet services, notes Andy Wachs, an IBM system software manager.

Wachs, here, lays out a simple progression for any company’s journey into the cloud. It starts with server, storage, and network virtualization: to achieve the efficiency and flexibility inherent in a private cloud, those IT resources must be virtualized first. Without that you can’t move forward.

Wachs’ progression continues with a variety of management processes. These include provisioning, monitoring, workflow orchestration, and tracking/metering resource usage. Behind all this management lies automation. Private clouds quickly become too complex to manage manually, especially as the organization progresses to self-service on demand. For this, automation is crucial.

Cloud-based self-service on demand means a business manager preparing to launch a new business initiative can simply access a catalog using a browser and check off the IT resources that must be provisioned for the initiative. This eliminates the delays and the wrestling match with IT that otherwise occur whenever IT resources must be requested and set up. The business manager finishes by checking off the attributes wanted for the requested resources, such as performance, data protection, and availability, and then clicks DONE. Ideally, after an automated governance check, the requested resources materialize in the private cloud properly configured and ready for use within hours if not minutes; at least this is the goal.
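No vendor’s catalog schema is spelled out here, but the basic flow (pick resources, pick attributes, pass an automated governance check, provision) can be sketched in a few lines. The resource names, attributes, and policy limits below are made up for illustration.

```python
# Toy sketch of a self-service request passing an automated governance
# check before provisioning. Catalog entries, attributes, and policy
# limits are hypothetical.

CATALOG = {
    'web-server': {'vcpus': 2, 'memory_gb': 4},
    'db-server':  {'vcpus': 8, 'memory_gb': 32},
}

POLICY_LIMITS = {'max_vcpus': 16, 'max_memory_gb': 64}

def governance_check(request):
    """Reject requests that exceed the (hypothetical) policy limits."""
    vcpus = sum(CATALOG[item]['vcpus'] * qty for item, qty in request['items'])
    memory = sum(CATALOG[item]['memory_gb'] * qty for item, qty in request['items'])
    return vcpus <= POLICY_LIMITS['max_vcpus'] and memory <= POLICY_LIMITS['max_memory_gb']

def provision(request):
    """Stand-in for the automation that actually builds the resources."""
    for item, qty in request['items']:
        print(f"provisioning {qty} x {item} "
              f"(availability={request['attributes']['availability']})")

request = {
    'requested_by': 'business-manager',
    'items': [('web-server', 2), ('db-server', 1)],
    'attributes': {'availability': 'high', 'data_protection': 'daily-backup'},
}

if governance_check(request):
    provision(request)
else:
    print('request rejected by governance policy')
```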

As expected, IBM offers services and products at each step of the journey. The main pieces are IBM CloudBurst, a private cloud in a box, and Systems Director and Tivoli management tools.

Other vendors have moved into the cloud area too. HP offers CloudStart, a fast-deployment cloud product.

Open source Red Hat offers a cloud Infrastructure-as-a-Service (IaaS) product called CloudForms. The product, now in beta, was demonstrated recently in Boston. It is expected to ship this fall.

Even small vendors are getting involved. Zyrion offers Traverse, a virtualization and cloud monitoring tool.

No matter how you journey to the cloud, security always should be a focus. Private clouds have more manageable security challenges since they exist behind the firewall. Still, pay close attention to security and governance.

As IDC points out, private clouds present an opportunity to accelerate the shift to a more automated, self-service form of computing. This enables organizations not only to reduce costs and boost IT utilization but also to better match the provisioning of IT resources to the speed at which businesses need to move these days. Nobody can wait months for the IT resources needed to support a new business initiative anymore.


Software Problem Solving for Private Clouds

First fault software problem solving (FFSPS) is an old mainframe approach that calls for solving problems as soon as they occur. It’s an approach that has gone out of favor except in classic mainframe data centers, but it may be worth reviving as the IT industry moves toward cloud computing and especially private clouds, for which the zEnterprise (z196 and zBX) is particularly well suited.

The point of Dan Skwire’s book, First Fault Software Problem Solving: Guide for Engineers, Managers, and Users, is that FFSPS is an effective approach even today. Troubleshooting after a problem has occurred is time-consuming, costly, inefficient, and often unsuccessful. What typically complicates troubleshooting is a lack of information. As Skwire notes: if you have to start troubleshooting after the problem occurs, the odds indicate you will not solve the problem, and along the way, you consume valuable time, extra hardware and software, and other measurable resources.

The FFSPS trick is to capture problem-solving data from the start. This is what mainframe data centers did routinely. Specifically, they used trace tables and included recovery routines. This continues to be the case with z/OS today. Full disclosure: I’m a fan of mainframe computers and Power Systems and follow both regularly in my independent mainframe blog, DancingDinosaur.
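The trace-table idea translates readily beyond the mainframe. Below is a minimal sketch of a fixed-size, in-memory trace table that gets dumped the instant a fault is detected; the events, buffer size, and the transfer example are illustrative, not anything prescribed by z/OS or by Skwire’s book.

```python
# Minimal sketch of a first-fault trace table: a fixed-size ring buffer of
# recent events, dumped the moment an error is detected, so diagnostic data
# exists without having to re-create the failure later.
from collections import deque
import time

TRACE = deque(maxlen=256)   # ring buffer; oldest entries roll off

def trace(event, **detail):
    TRACE.append((time.time(), event, detail))

def dump_trace():
    for ts, event, detail in TRACE:
        print(f'{ts:.3f} {event} {detail}')

def transfer(amount, balance):
    trace('transfer.start', amount=amount, balance=balance)
    if amount > balance:
        trace('transfer.overdraw', amount=amount, balance=balance)
        dump_trace()                 # capture state at the first fault
        raise ValueError('insufficient funds')
    trace('transfer.ok', remaining=balance - amount)
    return balance - amount

transfer(50, 100)
try:
    transfer(500, 100)
except ValueError:
    pass
```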

So why should IT managers today care about mainframe disciplines like FFSPS? Skwire’s answer: there surely will be greater customer satisfaction if you solve and repair the customer’s problem, or if he is empowered to solve and repair his own problem rapidly. Another reason is risk minimization.

Skwire also likes to talk about System YUK. You probably have a few System YUKs in your shop. What’s System YUK? As Skwire explains, System YUK is very complex. It makes many decisions and analyzes much data. However, the only means it has of conveying an error is a single message to the operator console: SYSTEM HAS DETECTED AN ERROR, which is not particularly helpful. System YUK has no trace table or FFSPS tools. To diagnose problems in YUK you must re-create the environment in your YUK test bed and add instrumentation (write statements, traces, etc.) and various tools to get a decent explanation of problems with YUK, or set up some second-fault tool to capture more and better data on the production System YUK, which is high risk.

Toward the end of the book Skwire gets into what you can do about System YUK. It amounts to a call for defensive programming. He then introduces a variety of tools to troubleshoot and fix software problems. These include ServiceLink by Axeda, AlarmPoint Systems, LogLogic, IBM Tivoli Performance Analyzer, and CA Technologies’ Wily Introscope.
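As a flavor of what that defensive programming looks like in practice, here is a small sketch contrasting a YUK-style bare error with an error that carries the context needed to diagnose the first fault. The parser and its rules are hypothetical.

```python
# Sketch of the defensive-programming contrast Skwire draws: instead of a
# bare "SYSTEM HAS DETECTED AN ERROR", raise errors that carry the inputs
# and the rule that failed. The parser and rules are hypothetical.

class DiagnosticError(Exception):
    """Error that records what was being processed when it was raised."""
    def __init__(self, message, **context):
        super().__init__(f'{message} | context={context}')
        self.context = context

def parse_rate(field):
    if not field.strip():
        # YUK-style alternative: raise Exception("SYSTEM HAS DETECTED AN ERROR")
        raise DiagnosticError('empty rate field', raw=repr(field), rule='rate-required')
    try:
        return float(field)
    except ValueError:
        raise DiagnosticError('rate is not numeric', raw=repr(field), rule='rate-numeric')

try:
    parse_rate('abc')
except DiagnosticError as err:
    print(err)   # prints the message plus the captured context
```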

With the industry gravitating toward private clouds as a way to deliver IT efficiently as a flexible service, the disciplined methodologies that keep the mainframe a critical platform in large enterprises will be worth adopting. FFSPS is one in particular to keep in mind.


Mitigate Cloud Risk Through Open Source

The drumbeat of cloud computing has become so loud that no business manager can avoid its siren song of lower cost, greater business agility, and the perfect alignment of business and IT. Although much of cloud computing is based on open source technologies, namely Linux, it hasn’t been viewed as an open source phenomenon.

Cloud computing has proven easier said than done. Even before the recent Amazon cloud disaster, when hundreds of Amazon Elastic Compute Cloud (EC2) customers lost access to their applications and data for most of a day or longer, technology vendors had been scrambling to make cloud computing easier to deploy and use.

Red Hat, the large open source Linux provider, is the latest to launch a series of cloud technologies that promise to mitigate the risk of deploying applications to the cloud. IBM, HP, Microsoft, EMC, Dell, and others have their own initiatives aimed at doing the same thing. The Red Hat initiatives, as you’d expect, make extensive use of open source tools and frameworks to simplify the development and deployment of cloud systems and reduce the risk involved.

In some ways cloud computing appears remarkably simple. Just take a SaaS application like Salesforce.com, which is delivered via the cloud. All you need is a browser, deployment can be fast and easy, and the costs are reasonable and predictable.

Things get more complicated, however, when you want to start mixing and matching various cloud services and SaaS applications. Or you want to combine private and public cloud capabilities, creating what amounts to a hybrid cloud, and then build and deploy some of your own applications along with the cloud components. Of course, you’ll want to integrate and manage it all as a single system for efficiency.

Well, that’s not so easy. It can be done, but you have to overcome the understandable tendency of vendors to lock you into their particular way of doing things. You end up with a lot of piece parts that don’t necessarily work together, at least not without a lot of cobbling on your part. This is where open source can help.

Earlier this week, Red Hat took a major step in enabling organizations to simplify cloud development and deployment and reduce risk. It introduced a platform-as-a-service (PaaS) offering called OpenShift. It is aimed at open source developers and provides them with a flexible platform for developing cloud applications using a choice of development frameworks for Java, Python, PHP and Ruby, including Spring, Seam, Weld, CDI, Rails, Rack, Symfony, Zend Framework, Twisted, Django and Java EE. It is based on a cloud interoperability standard, Deltacloud, and it promises to end PaaS lock-in, allowing developers to choose not only the languages and frameworks they use but the cloud provider upon which their application will run.

By building on the Deltacloud cloud interoperability standard, OpenShift allows developers to run their applications on any supported Red Hat Certified Public Cloud Provider, eliminating the lock-in associated with first-generation PaaS vendors. In addition, it brings JBoss middleware services to the PaaS experience, along with services such as MongoDB and other RHEL-based services.

At the same conference, Red Hat introduced CloudForms, a product for creating and managing IaaS for private and hybrid clouds. It allows users to create integrated clouds built from a variety of computing resources while remaining portable across physical, virtual, and cloud computing resources. CloudForms addresses key problems encountered in first-generation cloud products: the cost and complexity of virtual server sprawl, compliance nightmares, and security concerns.

One key benefit of CloudForms is the ability to create hybrid clouds using existing computing resources: virtual servers from different vendors, such as Red Hat and VMware; different cloud vendors, such as IBM and Amazon; and conventional in-house or hosted physical servers, both racks and blades. This level of choice helps eliminate lock-in and the need to migrate from physical to virtual servers in order to obtain the benefits of cloud.

Other vendors also have introduced new cloud initiatives recently. IBM, for example, demonstrated an enterprise cloud service delivery platform that it is piloting with key clients. It promises to let enterprise clients select key characteristics of public, private, and hybrid clouds to match their workload requirements, from simple Web infrastructure to complex business processes. These characteristics fall along five risk dimensions: security, performance/availability, technology platform, management/deployment, and payment/billing.

HP has joined with Red Hat in what is being called the Red Hat Cloud-HP Edition. This is a private cloud design and reference architecture for IaaS that combines Red Hat Cloud solutions with HP’s CloudSystem, Cloud Maps, and associated services.

Add to the above what Dell, Microsoft, EMC, and others are doing to simplify and streamline business use of the cloud, and it becomes clear that the vendors have gotten the message: businesses want cloud computing that delivers what was promised, namely open, flexible, reliable, and efficient computing. It will take a few years to build it out, but it just got a big boost.
