Posts Tagged open source

PaaS Gains Cloud Momentum

Guess you could say Gartner is bullish on Platform-as-a-Service (PaaS). The research firm declares that PaaS is a fast-growing core layer of the cloud computing architecture, but that the market for PaaS offerings is changing rapidly.

The other layers include Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS), but before the industry build-out of cloud computing is finished (if ever), expect to see many more X-as-a-Service offerings. Already you can find Backup-as-a-Service (BaaS). Symantec, for instance, offers BaaS to service providers, who in turn offer it to their clients.

But the big cloud action is around PaaS. Late in November, Red Hat introduced OpenShift Enterprise, an enterprise-ready PaaS product designed to run as a private, public, or hybrid cloud. OpenShift, an open source product, enables organizations to streamline and standardize developer workflows, effectively speeding the delivery of new software to the business.

Previously, cloud strategies focused on SaaS, in which organizations access and run software from the cloud; Salesforce.com is probably the most familiar SaaS provider. There also has been strong interest in IaaS, through which organizations augment or even replace their in-house server and storage infrastructure with compute and storage resources from a cloud provider. Here Amazon Web Services is the best-known player, although it faces considerable competition that is driving IaaS resource costs down to pennies per instance.

PaaS, essentially, is an app dev/deployment and middleware play. It provides a platform (hence the name) that developers use to build and deploy applications to the cloud. OpenShift Enterprise does exactly that, giving developers access to a cloud-based application platform on which they can build applications to run in a cloud environment. It automates much of the provisioning and systems management of the application platform stack, freeing the IT team to focus on building and deploying new application functionality rather than on platform housekeeping and support services, which the PaaS tool takes care of instead.

OpenShift Enterprise, for instance, delivers a scalable and fully configured application development, testing, and hosting environment. In addition, it uses Security-Enhanced Linux (SELinux) for reliable security and multi-tenancy. It also is built on the full Red Hat open source technology stack, including Red Hat Enterprise Linux, JBoss Enterprise Application Platform, and OpenShift Origin, the initial free open source PaaS offering. JBoss Enterprise Application Platform 6, a middleware tool, gives OpenShift Enterprise a Java EE 6-certified on-premise PaaS capability. As a multi-language PaaS product, OpenShift Enterprise supports Java, Ruby, Python, PHP, and Perl. It also includes what Red Hat calls a cartridge capability, which lets organizations plug in their own middleware services as Red Hat cartridges.
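To make the developer experience concrete, here is a minimal sketch of the kind of workload someone might push to a Python-capable PaaS like this: a bare-bones WSGI application. Treat the details as assumptions rather than OpenShift requirements; cartridge file layouts and entry-point conventions vary by release, though the application callable below is the standard WSGI contract.

```python
# Minimal WSGI application of the sort a PaaS Python cartridge could host.
# The callable name "application" is the standard WSGI entry point; whether
# the platform discovers it automatically is an assumption here.

def application(environ, start_response):
    """Return a plain-text greeting for any request."""
    body = b"Hello from a PaaS-hosted Python app\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]


if __name__ == "__main__":
    # Local smoke test using the standard library's WSGI server;
    # in a PaaS deployment the platform supplies the web server.
    from wsgiref.simple_server import make_server
    make_server("", 8080, application).serve_forever()
```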

Conventional physical app dev is a cumbersome process entailing as many as 20 steps from idea to deployment. Make it a virtual process and you can cut the number of steps to 14, a modest improvement. As Red Hat sees it, the combination of virtualization and PaaS can cut that number to six: idea, budget, code, test, launch, and scale. PaaS, in effect, shifts app dev from a craft undertaking to an automated, cloud-ready assembly line. As such, it enables faster time to market and saves money.

Although Red Hat is well along in the PaaS market and is the leader in open source PaaS, other vendors already are jumping in and more will join them. IBM has SmartCloud Application Services as its PaaS offering. Oracle offers a PaaS product as part of the Oracle Cloud Platform. EMC offers PaaS consulting and education but not a specific technology product. When HP identifies PaaS solutions, it directs you to its partners. A recent list of the top 20 PaaS vendors identifies mainly smaller players, CA, Google, Microsoft, and Salesforce.com being the exceptions.

A recent study by IDC projects the public cloud services market to hit $98 billion by 2016. The PaaS segment, the fastest growing part, will reach about $10 billion, up from barely $1 billion in 2009. There is a lot of action in the PaaS segment, but if you are looking for the winners, according to IDC, focus on PaaS vendors that provide a comprehensive, consistent, and cost-effective platform across all cloud segments (public, private, hybrid). Red Hat OpenShift clearly is one; IBM SmartCloud Application Services and Microsoft Azure certainly will make the cut. Expect others.


Open Source Virtualization Saves Money

Virtualization is a powerful technology that enables numerous benefits, detailed here, particularly saving money. The savings come mainly through IT resource consolidation. When you add open source to the virtualization equation, it creates another avenue to savings.

Virtualization technology, in the form of the hypervisor, is not exactly cheap. VMware, the industry's 900-pound hypervisor gorilla, commands significant license fees. Under its latest pricing plan, introduced last year, the standard license starts at about $1,000. Enterprise costs are based on processor sockets and memory and, given how they are calculated, VMware can require four times as many licenses as previously needed, which dramatically increases the cost. Here's VMware's FAQ on pricing. Depending on the amount of memory, licensing costs could run into the tens of thousands of dollars.

Open source virtualization, noted Jean Staten Healy, IBM's worldwide Cross-IBM Linux and Open Virtualization Director, presents opportunities to reduce virtualization costs in numerous ways. For example, the inclusion of open source KVM in enterprise Linux distributions reduces the need for additional hypervisors, enabling the organization to avoid buying more VMware licenses. KVM also enables higher virtual machine density for more savings. IDC's Al Gillen and Gary Chen put out a white paper detailing the recent KVM advances.

The ability to manage mixed KVM-VMware virtualization through a single tool further increases the cost efficiency of open source virtualization. IBM's System Director VMControl is one of the few tools providing such mixed-hypervisor, cross-platform management. For general hypervisor management, Linux and KVM have standardized on libvirt and libguestfs as the base APIs for managing virtualization and images. These APIs work with other Linux hypervisors beyond KVM (higher-level tools, such as virsh and virt-manager, are built on top of libvirt).
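As a small illustration of what managing virtualization through libvirt looks like, the Python sketch below uses the libvirt bindings to inventory guests on more than one hypervisor through the same API. The connection URIs are assumptions: qemu:///system is the usual local KVM endpoint, and libvirt ships an ESX driver, but the hostname shown is a placeholder and driver availability depends on how your libvirt was built.

```python
# Sketch: inventory guests across hypervisors through libvirt's single API.
# Assumes the python libvirt bindings are installed; "esx.example.com" is a
# placeholder host used only for illustration.
import libvirt

URIS = [
    "qemu:///system",                      # local KVM
    "esx://esx.example.com/?no_verify=1",  # VMware ESX via libvirt's ESX driver
]

for uri in URIS:
    try:
        conn = libvirt.openReadOnly(uri)
    except libvirt.libvirtError as err:
        print(f"{uri}: connection failed ({err})")
        continue
    names = list(conn.listDefinedDomains())  # defined but not running
    names += [conn.lookupByID(i).name() for i in conn.listDomainsID()]  # running
    print(f"{uri}: {len(names)} guests -> {', '.join(sorted(names)) or 'none'}")
    conn.close()
```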

The combination of KVM's technical advances, the slow but steadily increasing adoption of KVM, and the inclusion of KVM as a core feature of the Linux operating system is driving more enterprises to deploy KVM alongside VMware. Of course, the fact that KVM now comes free as part of the Linux core means you can try it at no cost and with minimal risk.

Enterprise Linux users are now using KVM where they previously would not have bothered to virtualize a particular workload because of cost. This makes sense for several reasons, free being just one of them. Others include the integration of the KVM toolset with the Linux toolset and the fact that Linux admins already know how to use it.

One large bank used Linux and KVM as a development and test resource in a private cloud. Normally it would have needed to request more budget for VMware, but since it had Linux with KVM it could simply add Windows virtual machines. And by developing in Java, it could roll out prototypes fast. In the process, the bank achieved high virtual machine density at minimal cost.

Another financial services firm used KVM virtual machines to monitor Linux usage and identify under-utilized hosts, then deployed virtual machine images to those hosts as warranted. The result: an ad hoc grid of KVM virtual machines with high utilization, again at minimal cost.

KVM is a natural for private clouds. IBM reports private clouds being built using Moab with xCAT and KVM. The resulting private cloud handles both VMware and KVM equally well, making them plug-compatible.  With this approach, organizations can gradually expand their use of KVM and reduce, or at least delay, the need to buy more VMware licenses, again saving money.

KVM also is being exercised in a big way as the hypervisor behind IBM's public SmartCloud Enterprise, demonstrating how enterprise-capable this free, open source hypervisor is.

BottomlineIT expects VMware will remain the dominant x86 virtualization platform going forward. However, it makes sense to grab every opportunity to use KVM for enterprise-class, multi-platform virtualization and save money wherever you can.


OVA and oVirt Drive KVM Success

In the x86 world, VMware is the 900-pound hypervisor gorilla. Even Microsoft's Hyper-V takes a back seat to the VMware hypervisor. KVM, however, is gaining traction as an open source alternative, bringing the advantages of portability, customizability, and low cost.

In terms of overall platform virtualization, the Linux world may or may not be lagging behind Windows in the rate of server virtualization, depending on which studies you have been reading. Regardless, with IBM and Red Hat getting behind the KVM hypervisor in a big way last year, the pace at which Linux servers are virtualized should pick up.

The job of driving KVM today has been turned over to the Open Virtualization Alliance (OVA), which has made significant gains in attracting participation since its launch last spring. It currently boasts over 240 members, up from a couple of dozen when BottomlineIT last looked at it months ago.

The OVA also has been bolstered by an open virtualization development organization, the oVirt Project, here. Its founding partners include Canonical, Cisco, IBM, Intel, NetApp, Red Hat, and SUSE. The founders promise to deliver a truly open source, openly governed, integrated virtualization stack. The oVirt team aims to deliver both a cohesive stack and discretely reusable components for open virtualization management, which should become key building blocks for private and public cloud deployments.

The oVirt Project bills itself as an open virtualization project providing a feature-rich server virtualization management system with advanced capabilities for hosts and guests, including high availability, live migration, storage management, a system scheduler, and more. The oVirt goal is to develop a broad ecosystem of tools that make up a complete integrated platform and to deliver them on a well-defined release schedule. These are components designed and tested to work together, and oVirt should become a central venue for user and developer cooperation.

The idea behind OVA and oVirt is that effective enterprise virtualization requires more than just a hypervisor, noted Jean Staten Healy, IBM Director, Worldwide Cross-IBM Linux and Open Virtualization, at a recent briefing. In addition to a feature-rich hypervisor like KVM, Healy cited the need for well-defined APIs at all layers of the stack, readily accessible (reasonably priced) systems and tools, a corresponding feature-rich, heterogeneous management platform, and a robust ecosystem to extend the open hypervisor and management platform, all of which oVirt is tackling.

Now KVM and the OVA just need success cases to demonstrate the technology. Initially, IBM provided the core case experience with its Research Compute Cloud (RC2). RC2 runs on over 200 iDataPlex nodes, an IBM x86 product, using KVM. It handles 2,000 concurrent instances, is used by thousands of IBM employees worldwide, and provides 100TB of block storage attached to KVM instances via a storage cloud. RC2 also handles actual IBM internal chargeback based on per-VM-hour charges across IBM.

Today IBM is using KVM with its System z blades in the zBX. It also supports KVM as a tier 1 virtualization technology with IBM System Director VMControl and Tivoli system management products.  On System x, KVM delivered 18% better virtual machine consolidation in a SPECvirt_sc2010 benchmark test.

Recently KVM was adopted by DutchCloud, the leading ISP in the Netherlands. DutchCloud is a cloud-based IaaS provider; companies choose it for QoS, reliability, and low price.

DutchCloud opted for IBM SmartCloud Provisioning as its core delivery platform across multiple server and storage nodes and KVM as the hypervisor for virtual machines. KVM offers both minimal licensing costs and the ability to support mixed (KVM and VMware) deployments. IBM's System Director VMControl provides heterogeneous virtual machine management. The combination of KVM and SmartCloud Provisioning enabled DutchCloud to provision hundreds of customer virtual machines in a few minutes and to ensure isolation through effective multi-tenancy. And because SmartCloud Provisioning can communicate directly with the KVM hypervisor, DutchCloud avoids the need to license additional management components.

KVM is primarily a distributed x86 Linux platform and cloud play. It may, however, make its way into IBM’s zEnterprise environments through the zBX as the hypervisor for the x86 (IBM eX5) blades residing there.


Ways to Lower IT Costs

With the end of NEON's zPrime, mainframe users lost an effective way to lower costs. And make no mistake about it: zPrime was effective in lowering costs. A data center manager in France told BottomlineIT that zPrime saved his company almost 1 billion Euros each year.

There was no magic to how zPrime achieved these savings. Mainframe software licensing costs and various other charges are reduced when processing is handled by a specialty processor, which is treated differently for licensing purposes than the general-purpose processor. The zPrime trick simply expanded the range of workloads that could run on the specialty processors far beyond what IBM approved. No surprise that IBM shut it down.

Every IT shop wants to reduce costs, especially these days. There are a number of ways to do so. Again, no magic; they involve reviving well-known practices that many organizations have gotten away from in recent years.

Start by negotiating better software licensing deals. Many IT managers believe they already negotiate the best prices from software vendors. Repeated studies by Minneapolis-based ISAM, however, show that not to be the case. Looking at the software tactics of best-in-class IT shops, ISAM found considerable variation in software vendor pricing, and many shops simply don't get the best deals.

When shopping for the best software pricing, make sure to consider open source options too. Open source software, even with the various fees involved, typically costs less than conventional software licensing.

While you're at it, check out the Software-as-a-Service (SaaS) options. Particularly for small and midsize organizations, SaaS may offer substantial savings over on-premise software licensing. The savings come from the provider's economies of scale and from the shared-service model.

Another option for reducing software costs is application performance management (APM). Where software is licensed based on processor capacity, anything that minimizes CPU consumption can save money. In these situations, APM applies proven best practices to minimize CPU resource consumption, especially during peak times. It involves both rescheduling when applications run and optimizing the code to run more efficiently.

“APM starts with profiling and understanding the way your applications use mainframe resources as they run—especially CPU. It helps determine whether they really need all the resources they are using and with this information you can then make focused tuning efforts in specific areas of software code within the applications and especially the database calls, which tend to use a lot of resources. It can reduce the CPU requirements to run your applications by an enormous percentage,” explains Philip Mann, a principal consultant at Macro 4, an APM consulting firm and tool provider.
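Macro 4's tooling is mainframe-specific, but the profile-then-tune idea it describes generalizes to any platform. As a generic illustration only (not the vendor's product), the Python sketch below uses the standard library's cProfile to rank functions by the CPU time they consume, which is exactly the information that focused tuning of code and database calls depends on.

```python
# Generic illustration of the APM "profile first, tune the hotspots" idea,
# using only the Python standard library. The workload is a stand-in for
# application code whose database calls and inner loops you would tune.
import cProfile
import pstats


def expensive_query_simulation(n):
    # Placeholder for a chatty database-call pattern.
    return sum(i * i for i in range(n))


def application_workload():
    total = 0
    for _ in range(200):
        total += expensive_query_simulation(10_000)
    return total


if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    application_workload()
    profiler.disable()

    # Rank functions by cumulative CPU time to pick tuning candidates.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```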

Using the Macro 4 approach and tools, British retailer Tesco was able to reduce MIPS consumption by 10-15% in one project, which allowed it to avoid purchasing extra CPU capacity. The Macro 4 tool enabled Tesco to identify opportunities where changes to databases, systems software, and applications could generate CPU savings.

Finally, organizations are trying to reduce IT costs through consolidation based on server virtualization. Some recent studies, however, suggest that many organizations are not getting the savings they expected from virtualization. The potential for serious savings is still there; it just may take a little more effort to realize it.

A recent survey by CA Technologies on the state of IT automation shows that 60% of managers at midsize and large enterprises are disappointed in virtualization’s ability to deliver savings. The survey quotes one respondent: “Virtualization is a bean counter’s dream, but it can be an operational nightmare.” The respondent, a senior IT manager, continued: “Change management is a huge overhead, as any changes need to be accepted by all applications and users sharing the same virtualization kit. While many organizations are seeing benefits from virtualization, such as reduced hardware spending and improved server utilization, these benefits often get overshadowed by the lack of productivity improvements in data center staffing and operations.”

The key to achieving virtualization savings is automation. The CA survey shows a direct correlation between IT service automation in a virtualized environment and cost savings. For example, 44% of survey respondents who said most of their server provisioning processes are automated report that they have significantly reduced costs through virtualization. Conversely, 48% of those who said the complexities of virtualization have introduced new costs also said, not surprisingly, that most of their server provisioning processes are still manual.
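To give a feel for what automated provisioning means at the hypervisor level, here is a simplified Python sketch (not the CA or IBM tooling the survey respondents use) that defines and boots a KVM guest from an XML template via libvirt. The disk image path, guest name, and resource sizes are illustrative assumptions.

```python
# Sketch: automate KVM guest provisioning through libvirt.
# Assumes the python libvirt bindings, a local qemu:///system connection,
# and a pre-built disk image at the (hypothetical) path below.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>provisioned-guest-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def provision():
    conn = libvirt.open("qemu:///system")
    try:
        domain = conn.defineXML(DOMAIN_XML)  # register the guest definition
        domain.create()                      # boot the guest
        print(f"started {domain.name()} (id {domain.ID()})")
    finally:
        conn.close()

if __name__ == "__main__":
    provision()
```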

OK, none of these techniques, except maybe the virtualization/automation combination, will likely save you $1 billion a year. But, when budgets are tight any savings will help.


Hadoop Aims for the Enterprise

Hadoop, the open source data storage and retrieval framework modeled on the approach Google developed to handle its massive data needs, is coming to the enterprise data center. Are you interested?

Behind Hadoop is MapReduce, a programming model and software framework that enables the creation of applications able to rapidly process vast amounts of data in parallel on large clusters of compute nodes. Hadoop is an open source project of the Apache Software Foundation and can be found here.

Specifically, Hadoop offers a framework for running applications on large clusters built from commodity hardware. It uses a style of processing called Map/Reduce, which, as Apache explains it, divides an application into many small fragments of work, each of which may be executed on any node in the cluster. A key part of Hadoop is the Hadoop Distributed File System (HDFS), which reliably stores very large files across nodes in the cluster. Both Map/Reduce and HDFS are designed so that node failures are handled automatically by the framework. Each Hadoop node consists of a server with its own storage.

Hadoop moves computation to the data itself. Computation consists of a map phase, which produces sorted key/value pairs, and a reduce phase. According to IBM, a distributor of Hadoop, data is initially processed by map functions, which run in parallel across the cluster. The reduce phase aggregates and reduces the map results and completes the job.

HDFS breaks stored data into large blocks and replicates them across the cluster, providing highly available parallel processing and redundancy for both the data and the jobs. Hadoop distributions provide a set of base class libraries for writing Map/Reduce jobs and interacting with HDFS.
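The base class libraries are Java, but Hadoop's Streaming interface, which runs map and reduce steps as scripts reading standard input, is a quick way to see the two phases in action. The word-count sketch below is a hedged illustration: the local test pipeline shown in the comments stands in for a real streaming job, whose exact invocation depends on your cluster.

```python
#!/usr/bin/env python3
# Word count in the Map/Reduce style. Can be tested locally with:
#   cat input.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce
# Under Hadoop Streaming, the framework performs the sort between the phases.
import sys


def mapper():
    # Map phase: emit a (word, 1) key/value pair for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")


def reducer():
    # Reduce phase: input arrives sorted by key, so counts for the same
    # word are adjacent and can be summed with a running total.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")


if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```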

The attraction of Hadoop is its ability to find and retrieve data fast from vast unstructured volumes, along with its resilience. Hadoop, or some variation of it, is critical for massive websites like Google, Facebook, Yahoo, and others. It also is a component in IBM's Watson. But where would Hadoop play in the enterprise?

Cloudera (www.cloudera.com) has staked out its position as a provider of Apache Hadoop for the enterprise. It primarily targets companies in financial services, Web, telecommunications, and government with Cloudera Enterprise, which includes the tools, platform, and services necessary to use Hadoop in an enterprise production environment, ideally within what amounts to a private cloud.

But there are other players plying the enterprise Hadoop waters. IBM offers its own Hadoop distribution. So does Yahoo. You also can get it directly from the Hadoop Apache community.

So what are those enterprise Hadoop applications likely to be? A few come immediately to mind:

  • Large scale analytics
  • Processing of massive amounts of sensor or surveillance data
  • Private clouds running social media-like applications
  • Fraud applications that must analyze massive amounts of dynamic data fast

Hadoop is like other new technologies that emerge. Did your organization know what it might do with the Web, rich media, solid state disk, or the cloud when they first appeared? Not likely, but it probably knows now. It will be the same with Hadoop.

 
