Posts Tagged Microsoft

PaaS Gains Cloud Momentum

Guess you could say Gartner is bullish on Platform-as-a-Service (PaaS). The research firm declares: PaaS is a fast-growing core layer of the cloud computing architecture, but the market for PaaS offerings is changing rapidly.

The other layers include Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS), but before the industry build-out of cloud computing is finished (if ever), expect to see many more X-as-a-Service offerings. Already you can find Backup-as-a-Service (BaaS). Symantec, for instance, offers BaaS to service providers, who in turn offer it to their clients.

But the big cloud action is around PaaS. Late in November Red Hat introduced OpenShift Enterprise, an enterprise-ready PaaS product designed to be run as a private, public or hybrid cloud. OpenShift, an open source product, enables organizations to streamline and standardize developer workflows, effectively speeding the delivery of new software to the business.

Previously, cloud strategies focused on SaaS, in which organizations access and run software from the cloud. Salesforce.com is probably the most familiar SaaS provider. There also has been strong interest in IaaS, through which organizations augment or even replace their in-house server and storage infrastructure with compute and storage resources from a cloud provider. Here Amazon Web Services is the best-known player, although it faces considerable competition that is driving IaaS resource costs down to pennies per instance.

PaaS, essentially, is an app dev/deployment and middleware play. It provides a platform (hence the name) that developers use to build and deploy applications to the cloud. OpenShift Enterprise does exactly that by giving developers access to a cloud-based application platform on which they can build applications to run in a cloud environment. It automates much of the provisioning and systems management of the application platform stack, freeing the IT team to focus on building and deploying new application functionality rather than on platform housekeeping and support services; the PaaS tool takes care of that instead.

OpenShift Enterprise, for instance, delivers a scalable and fully configured application development, testing, and hosting environment. In addition, it uses Security-Enhanced Linux (SELinux) for reliable security and multi-tenancy. It also is built on the full Red Hat open source technology stack, including Red Hat Enterprise Linux, JBoss Enterprise Application Platform, and OpenShift Origin, the initial free open source PaaS offering. JBoss Enterprise Application Platform 6, a middleware tool, gives OpenShift Enterprise a Java EE 6-certified on-premise PaaS capability. As a multi-language PaaS product, OpenShift Enterprise supports Java, Ruby, Python, PHP, and Perl. It also offers what Red Hat calls cartridges, which let organizations plug their own middleware services into the platform.
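
To make that concrete, here is a minimal sketch of the sort of thing a developer hands a Python-capable PaaS: a bare WSGI application. The platform supplies the web server, scaling, and routing; entry-point naming and file layout conventions vary by platform, so treat those details as assumptions rather than OpenShift specifics.

```python
# Minimal WSGI application of the kind a PaaS Python cartridge hosts.
# The platform supplies the server, scaling, and routing; the developer
# supplies little more than this callable. Entry-point conventions vary
# by platform, so the specifics here are an assumption.
def application(environ, start_response):
    """Return a plain-text greeting for any request."""
    body = b"Hello from a PaaS-hosted app\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Local smoke test; on the PaaS the platform's own web server runs the app.
if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("localhost", 8080, application).serve_forever()
```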

Conventional physical app dev is a cumbersome process entailing as many as 20 steps from idea to deployment. Make it a virtual process and you can cut the number of steps to 14, a modest improvement. As Red Hat sees it, the combination of virtualization and PaaS can cut that number to six: idea, budget, code, test, launch, and scale. PaaS, in effect, shifts app dev from a craft undertaking to an automated, cloud-ready assembly line. As such, it enables faster time to market and saves money.

Although Red Hat is well along in the PaaS market and the leader in open source PaaS, other vendors already are jumping in and more will be joining them. IBM has SmartCloud Application Services as its PaaS offering. Oracle offers a PaaS product as part of the Oracle Cloud Platform. EMC offers PaaS consulting and education but not a specific technology product. When HP identifies PaaS solutions, it directs you to its partners. A recent list of the top 20 PaaS vendors identifies mainly smaller players, CA, Google, Microsoft, and Salesforce.com being the exceptions.

A recent study by IDC projects the public cloud services market to hit $98 billion by 2016. The PaaS segment, the fastest growing part, will reach about $10 billion, up from barely $1 billion in 2009. There is a lot of action in the PaaS segment, but if you are looking for the winners, according to IDC, focus on PaaS vendors that provide a comprehensive, consistent, and cost-effective platform across all cloud segments (public, private, hybrid). Red Hat OpenShift clearly is one; IBM SmartCloud Application Services and Microsoft Azure certainly will make the cut. Expect others.

Supercomputing for Everyone

IT shops increasingly are being drawn into high performance computing, but this is not the supercomputing of the past in which research and scientific-oriented organizations deployed massively parallel hardware presided over by armies of technocrats and computer geeks.  Supercomputing, with its ability to grapple with the most complex problems and extremely large volumes of data fast, is no longer only for large organizations in scientific and technical fields.

You only have to fail at running a Monte Carlo simulation or two on your existing systems before a supercomputer starts looking like a good idea, if only you could get the use of one. The latest generation of high performance computing (HPC) systems promises to put supercomputing capabilities in the hands of even midsize and non-technical organizations. And the cloud adds the ability to harness massive numbers of processors and apply them to a single task.
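
For a sense of why such workloads crave more cores, here is a minimal sketch of an embarrassingly parallel Monte Carlo estimate using nothing but Python's standard library; an HPC cluster or cloud applies the same split-and-aggregate pattern across hundreds or thousands of processors instead of the handful on a laptop.

```python
# Minimal sketch: an embarrassingly parallel Monte Carlo estimate of pi.
# The same divide-and-aggregate pattern is what HPC clusters and cloud
# instances scale out across many more cores.
import random
from multiprocessing import Pool

def count_hits(samples: int) -> int:
    """Count random points that land inside the unit quarter-circle."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers, samples_per_worker = 8, 1_000_000
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [samples_per_worker] * workers))
    total = workers * samples_per_worker
    print("pi is roughly", 4.0 * hits / total)
```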

The big consulting firms already are trying to capitalize on the trend. For example, Accenture and Avanade are partnering with Microsoft to help deliver a wide range of advanced capabilities through Microsoft’s Azure cloud. Similarly, Capgemini clearly has been brushing up on supercomputing trends for the future.

Not coincidentally, the technology to perform HPC-style computing now is coming within the reach of conventional businesses with regular IT organizations. HPC is being delivered through compute clusters, compute grids, and increasingly via the cloud. And the compute clusters or grids can be nothing more than loosely connected Windows servers, not much different from the machines running throughout the organization.

The driver for this new-found interest in HPC is not a new mission to Mars or a sudden race to capitalize on the discovery of the Higgs boson. Behind the interest in HPC is data analytics, especially analytics of Big Data, preferably in near real time.  This requires the ability to capture, sort, filter, and correlate massive volumes of data to find worthwhile business insights.

Long-time HPC players like IBM, HP, SGI, and Dell are revamping their offerings for this new take on HPC. They are being joined by a new breed of compute-intensive, analytics-driven, cloud-based HPC offerings, including Amazon’s Cluster Compute Instances, Appistry, and Microsoft’s Project Daytona, beyond whatever Microsoft does with Azure.

Not surprisingly, IBM has taken the lead in bringing what it now calls technical computing solutions within reach by making them complete, affordable, easy to deploy, and sufficiently scalable to accommodate workload growth and business expansion. It also aims to simplify administration through intuitive management tools that free companies to focus on business goals, not high-performance computing. In the process, it has ditched the HPC label as too geeky.

IBM is doing this mainly by bringing Platform Computing, a recent acquisition, to the HPC party. The Platform portfolio includes Platform LSF and Platform Symphony, which enable up to 100% server utilization, plus Platform Cluster Manager, System x iDataPlex, and System Storage DCS3700 for parallel file management storage, along with offerings for Big Data and cloud computing. Previously, iDataPlex was IBM’s main HPC offering.
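
To give a flavor of how a scheduler like Platform LSF keeps cluster utilization high, the sketch below submits a batch job by shelling out to LSF's bsub command from Python. The queue name, slot count, and job script are placeholders that depend entirely on how a given cluster is configured.

```python
# Hedged sketch: submitting a batch job to a Platform LSF cluster via bsub.
# Queue name, slot count, and the job script are placeholders; adjust to
# whatever your LSF administrator has configured.
import subprocess

def submit_lsf_job(script: str, slots: int = 16, queue: str = "normal") -> str:
    """Submit a job and return bsub's confirmation message."""
    cmd = [
        "bsub",
        "-n", str(slots),        # number of job slots (cores) requested
        "-q", queue,             # target queue
        "-o", "job.%J.out",      # stdout file; %J expands to the job ID
        script,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit_lsf_job("./run_analytics.sh"))
```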

With these platforms almost any organization can attack the same complex, multi-dimensional analytic problems that previously took far too long or were not even feasible on the usual corporate systems. The new generation of HPC can still handle compute-intensive supercomputing workloads, but it also can handle heavy analytic workloads and Big Data processing fast.

And it does so in ways that don’t require big investments in more technology or the need to recruit a cadre of hardcore compute geeks. Where once supercomputing focused primarily on delivering megaflops (millions of floating point operations per second), petaflops, or even exaflops, now companies are looking to leverage affordable technical computing tools for problems that are less complicated than, say, intergalactic navigation yet still deliver important business results.

Initially, HPC or supercomputing was considered the realm of large government research conducted by secretive agencies and esoteric think tanks. Today, HPC is poised to go mainstream. Now companies in financial services, media, telecommunication, and life sciences are adopting HPC for modeling, simulations, and predictive analyses of various types. Financial services firms, for example, want real time analytics to deliver improved risk management, faster and more accurate credit valuation assessments, multi-dimensional pricing, and actuarial analyses.

While some of the work still has a distinct scientific flavor, like next-generation genomics or 3D computer modeling, other HPC activities seem like conventional business application processing. These include financial data analysis, real-time CRM, social sentiment analysis, data mining of unstructured data, and retail merchandising analysis and planning.

The role of IT will revolve around working with the business managers to identify the need and build the business case. Then IT assembles the technology from a range of off-the-shelf choices and captures and manages the data.  Welcome to the world of supercomputing for everyone.

HP and Dell Lead Latest Rush to Cloud Services

HP last week made its first public cloud services available as a public beta. This advances HP’s Converged Cloud portfolio as the company delivers an open source-based public cloud infrastructure designed to enable developers, independent software vendors (ISVs), and enterprises of all sizes to build the next generation of web applications.

These services, HP Cloud Compute, HP Cloud Storage, and HP Cloud Content Delivery Network, now will be offered through a pay-as-you-go model. Designed with OpenStack technology, the open source-based architecture avoids vendor lock-in, improves developer productivity, features a full stack of easy-to-use tools for faster time to code, provides access to a rich partner ecosystem, and is backed by personalized customer support.
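
Because the services are built on OpenStack, provisioning can also be scripted against the standard OpenStack compute API rather than clicked through a portal. The sketch below uses the python-novaclient library of that era; the credentials, auth URL, image, and flavor names are placeholders, not actual HP Cloud values, and newer OpenStack clients handle images through separate services.

```python
# Hedged sketch: booting a server through the OpenStack compute API, the
# interface HP Cloud Compute exposes. Credentials, auth URL, image, and
# flavor names below are placeholders, not real HP Cloud values.
from novaclient import client

nova = client.Client(
    "2",                                  # compute API version
    "myuser", "mypassword", "myproject",  # placeholder credentials
    "https://region-a.example.com:5000/v2.0",  # placeholder auth URL
)

flavor = nova.flavors.find(name="standard.small")   # placeholder flavor name
image = nova.images.find(name="Ubuntu 12.04 LTS")   # placeholder image name

server = nova.servers.create(name="demo-web-01", image=image, flavor=flavor)
print("Requested server:", server.id)
```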

Last week Dell also joined the cloud services rush with an SAP cloud services offering. Although Dell has been in the services business at least since its acquisition of Perot Systems a few years back, services for SAP and the cloud are indeed new, explained Burk Buechler, Dell’s service portfolio director.

Dell offers two cloud SAP services. The first is the Dell Cloud Starter Kit for SAP Solutions, which helps organizations get started on their cloud journey quickly by providing 60-day access to Dell’s secure cloud environment with compute power equivalent to 8,000 SAP Application Performance Standard (SAPS) units, coupled with Dell’s consulting, application, and infrastructure services in support of SAP solutions.

The second is the Dell Cloud Development Kit for SAP Solutions, which provides access to 32,000 SAPS of virtual computing capacity to deploy development environments for more advanced customers who need a rich development landscape for running SAP applications. This provides a comprehensive developer environment with additional capabilities for application modernization and features industry-leading Dell Boomi technology for rapid cross-application integration.

Of the latest two initiatives, HP’s is the larger. Nearly 40 companies have announced their support for HP Cloud Services, from Platform-as-a-Service (PaaS) partners to storage, management and database providers. The rich partner ecosystem provides customers with rapid access to an expansive suite of integrated cloud solutions that offer new ways to become more agile and efficient. The partner network also provides a set of tools, best practices and support to help maximize productivity on the cloud. This ecosystem of partners is a step along the path to an HP Cloud Services Marketplace, where customers will be able to access HP Cloud Services and partner solutions through a single account.

Of course, there are many other players in this market. IBM staked out cloud services early with a variety of IBM SmartCloud offerings. Other major players include Oracle, Rackspace, Amazon’s Elastic Compute Cloud (EC2), EMC, Red Hat, Cisco, NetApp, and Microsoft. It is probably safe to say that eventually every major IT vendor will offer cloud services capabilities. And those that don’t will have partnerships and alliances with those who do.

Going forward, every organization will include a cloud component as part of its IT environment. For some it will represent a major component; for others cloud usage will vary as business and IT needs change. There will be no shortage of options, something to fit every need.

Low-Cost Fast Path to Private Cloud

The private cloud market—built around a set of virtualized IT resources behind the organization’s firewall—is growing rapidly. Private cloud vendors have been citing the latest Forrester prediction that the private cloud market will grow to more than $15 billion in 2020. Looking at a closer horizon, IDC estimates the private cloud market will grow to $5.8 billion by 2015.

The appeal of the private cloud comes from its residing on-premise and its ability to leverage existing IT resources wherever possible. Most importantly, the private cloud addresses the concerns of business executives about cloud security and control.

The promise of private clouds is straightforward: more flexibility and agility from their systems, lower total costs, higher utilization of the hardware, and better utilization of the IT staff. In short, organizations want all the benefits of public cloud computing along with the security of keeping it private behind the enterprise firewall.

Private clouds can do this by delivering IT as a service and freeing up IT manpower through self-service automation. The private cloud sounds simple. It doesn’t, however, come that easily. It requires sophisticated virtualization and automation. “Up-front costs are real, and choosing the right vendor to manage or deploy an environment is equally important,” says senior IDC analyst Katie Broderick.

IBM, however, may change the private cloud financial equation with its newest SmartCloud Entry offering based on IBM System x (x86 servers) and VMware.  The starting price is surprisingly low, under $60,000.

The IBM SmartCloud Entry starts with a flexible, modular design that can be installed quickly. It also can handle integrated management; automated provisioning through a service request catalog, approvals, metering, and billing; and do it all through a consolidated management console, a single pane of glass. The result: the delivery of standardized IT services on the fly and at a lower cost through automation. A business person, according to IBM, can self-provision services through SmartCloud Entry in four mouse clicks, something even a VP can handle.
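
Under the covers those four clicks amount to a request against a service catalog. The sketch below is a purely hypothetical REST call, not the actual SmartCloud Entry interface, intended only to show the shape of self-service provisioning: pick a catalog entry, state the size and project, and let the cloud software handle approvals, provisioning, and metering.

```python
# Purely hypothetical sketch of a self-service provisioning request.
# The endpoint, payload fields, and token below are invented for
# illustration; they are not the SmartCloud Entry API.
import requests

CATALOG_URL = "https://cloud.example.com/api/requests"   # hypothetical endpoint

payload = {
    "catalog_item": "rhel6-web-server",   # pre-approved master image
    "cpus": 2,
    "memory_gb": 4,
    "project": "marketing-campaign",      # used for access control and chargeback
}

resp = requests.post(
    CATALOG_URL,
    json=payload,
    headers={"Authorization": "Bearer <token>"},   # placeholder credential
    timeout=30,
)
resp.raise_for_status()
print("Request submitted:", resp.json())
```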

The prerequisite for any private cloud is virtualized systems. Start by consolidating and virtualizing servers, storage, and networking to reduce operating and capital expenses and streamline systems management. Virtualization is essential to achieve the flexibility and efficiency organizations want from their private cloud, and it is the first step toward IBM’s SmartCloud Entry or any other private cloud.

From there you improve speed and business agility through SmartCloud Entry capabilities like automated service deployment, portal-based self-service provisioning, and simplified administration. In short, you create master images of the desired software, convert the images for use with inexpensive tools like the open source KVM hypervisor, and track the images to ensure compliance and minimize security risks. You also gain efficiency by reducing both the number of images and the storage required for them. Then you simply deploy the software images on request through end user self-service, combined with virtual machine isolation capabilities and project-level user access controls for security.
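
One common way to keep image counts and storage in check with KVM is to hold a single read-only master image and give each deployed instance a thin copy-on-write overlay. A minimal sketch, assuming the standard qemu-img tool is installed and a master qcow2 image already exists at the placeholder path shown:

```python
# Minimal sketch: carve thin copy-on-write overlays off one master image
# with qemu-img, so each provisioned VM adds only its own changed blocks.
# Paths and image names are placeholders.
import subprocess

MASTER = "/var/lib/libvirt/images/rhel6-master.qcow2"   # read-only golden image

def create_overlay(vm_name: str) -> str:
    overlay = f"/var/lib/libvirt/images/{vm_name}.qcow2"
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={MASTER},backing_fmt=qcow2",  # thin overlay on the master
         overlay],
        check=True,
    )
    return overlay

if __name__ == "__main__":
    print("Created", create_overlay("webserver-01"))
```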

By doing this—deploying and maintaining the application images, delegating and automating the provisioning, standardizing deployment, and simplifying administration—the organization can cut the time to deliver IT capabilities through a private cloud from months to 2-3 days, actually to just hours in some cases. This is what enables business agility—the ability to respond to changes fast—with reduced costs through a more efficient operation.

At $60k the IBM x86 SmartCloud Entry offering is a good place to start, although IBM has private cloud offerings for Linux and Power Systems as well. But all major IT vendors are targeting private clouds, though few can deliver as much of the stack as IBM. Microsoft offers a number of private cloud solutions. HP provides a private cloud solution for Oracle, while Oracle has an advanced cluster file system for private cloud storage. NetApp, primarily a storage vendor, has partnered with others to deliver a variety of NetApp private cloud solutions for VMware, Hyper-V, SAP, and more.

The Challenge of Managing Multi-Platform Virtualization

While virtualization has experienced widespread adoption over the past decade, it was long considered an x86-VMware phenomenon. Sure, there are other hypervisors, but for most organizations VMware was synonymous with virtualization. Even on the x86 platform, Microsoft Hyper-V was the also-ran. Of course, virtualization has been baked into the mainframe for decades, but most organizations only began to take notice with the rise of VMware.

Virtualization provides the foundation for cloud computing, and as cloud computing gains traction across all segments of the computing landscape virtualization increasingly is understood as a multi-platform and multi-hypervisor game. Today’s enterprise is likely to be widely heterogeneous. It will run virtualized systems on x86 platforms, Windows, Linux, Power, and System z. By the end of the year, expect to see both Windows and Linux applications running virtualized on x86, Power Systems, and the zEnterprise mainframe.

Welcome to the virtualized multi-platform, multi-hypervisor enterprise. While it brings benefits—choice, flexibility, cost savings—it also comes with challenges, the biggest of which is management complexity. Growing virtualized environments have to be tightly managed or they can easily spin out of control, with phantom and rogue VMs popping up everywhere and gobbling system resources. The typical platform- and hypervisor-specific tools simply won’t do the trick. This calls for tools that manage virtualization across the full range of platforms and hypervisors.

Not surprisingly, IBM, which probably has the most virtualized platforms and hypervisors of any vendor, also is the first with cross-platform, cross-hypervisor management in the newest version of VMControl, version 2.4, part of IBM’s Systems Director family of management tools. This is truly multi-everything management. From a single console you control VMs running on x86 Windows, x86 Linux, and Linux on Power. One administrator can start, stop, move, and otherwise manage virtual machines, even across platforms. And it is agnostic as far as the hypervisor goes; it can handle VMware, Hyper-V, and KVM. It also integrates with Microsoft System Center Configuration Manager and VMware vCenter. (I’ve been told by IBM that it also will be able to manage VMs running on the zEnterprise platform soon, after a few issues are resolved regarding the mainframe’s already robust virtualization management.)
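
The underlying idea of driving VMs on multiple hosts and hypervisors from one console can be sketched with the open source libvirt API, which is analogous in spirit to what VMControl does; this is not IBM's API, and the host URIs and VM name below are placeholders.

```python
# Hedged sketch of single-console, multi-host VM management using the open
# source libvirt API (an analogy for what VMControl does, not IBM's API).
# Host URIs and the VM name are placeholders; libvirt also has drivers for
# Hyper-V and ESX in addition to KVM.
import libvirt

HOSTS = [
    "qemu+ssh://linux-host-1/system",   # placeholder KVM host
    "qemu+ssh://linux-host-2/system",   # placeholder KVM host
]

for uri in HOSTS:
    conn = libvirt.open(uri)            # one connection per managed host
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            running = state == libvirt.VIR_DOMAIN_RUNNING
            print(f"{uri}: {dom.name()} is {'running' if running else 'stopped'}")
            if not running and dom.name() == "build-server":   # placeholder VM name
                dom.create()            # start the VM from this one console
    finally:
        conn.close()
```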

The multi-platform VMControl 2.4 dovetails nicely with another emerging virtualization trend—open virtualization. In just a few months the Open Virtualization Alliance has grown from the initial four founders (IBM, Red Hat, Intel, and HP) to over 200 members. The open source KVM hypervisor being championed by the alliance handles both Linux and Windows workloads, allowing organizations to dodge yet another element of vendor lock-in. One organization already used that flexibility to avoid higher charges by running the open source hypervisor in a test and dev environment. That kind of open virtualization requires just the kind of multi-platform virtualization management VMControl 2.4 delivers.

Multi-platform is where enterprise virtualization has to go. Eventually BottomlineIT expects the other hypervisors to get there, but it may take a while.

VMworld Triggers a Clash of Titans

If you believe the hype about VMworld 2011, the annual virtualization fest held by VMware, the leading distributed systems hypervisor company, you might think virtualization was poised to take over the world. OK, in some ways it is, at least the IT world anyway. Virtualization is important because it has the potential to save money and change much that is inefficient with conventional IT. It also forms the foundation of cloud computing.

But VMworld wasn’t the only mega-event going on at the end of August. Salesforce.com, the 900-lb. gorilla of the SaaS industry, staged its annual Dreamforce event in San Francisco the same week. Dreamforce expected 45,000 attendees, better than doubling VMworld’s 20,000. Salesforce is using Dreamforce to rebrand itself as the social enterprise company on the basis of its cloud platform and how it leverages social, mobile, and open cloud technologies to change companies’ relationships with their customers. Dreamforce sponsors include the big consulting firms but few of the big IT vendors.

Judging by the projected attendance at this year’s VMworld, it clearly was one of the two places to be this final week of summer if you’re in IT. The list of corporate participants includes all the big names in technology not at Dreamforce: Cisco, EMC, HP, NetApp, CA, IBM, Intel, Symantec, and more; a veritable clash of titans. Here’s a sampling:

IBM is using VMworld to promote its hybrid and private SmartCloud initiatives. In this case it announced a hybrid cloud product based on its recent Cast Iron acquisition that promises to reduce the time it takes to connect, manage, and secure public and private clouds. An integration and management tool, it aims to help organizations of all sizes gain better visibility and control while easing the work of integrating and managing all of an organization’s on- and off-premise IT resources. What once took several months to set up, according to IBM, can now be done in a few days.

NetApp, a storage vendor, joined with VMware to announce the VMware cloud infrastructure on NetApp, which will allow companies to migrate to a secure cloud computing model at their own pace while avoiding the need to rip and replace their existing infrastructure. The product combines NetApp’s flexible Unified Storage Architecture and comprehensive set of storage and data management capabilities built on NetApp’s Data ONTAP with VMware’s recently enhanced cloud infrastructure suite.

Cisco announced technology enhancements to its joint VMware virtualization product that help organizations accelerate their transition to the cloud. Sound familiar? The companies unveiled network virtualization that will broaden the mobility range of virtual machines across multiple datacenters and cloud environments.

Symantec Corp. joined VMware to announce an expansion of their joint effort to define and architect Desktop-as-a-Service (DaaS) solutions, with the goal of providing secure, pre-integrated, and well-managed enterprise-quality virtual desktop computing environments for both enterprises and IT service providers. This initiative will leverage VMware’s virtual desktop and cloud infrastructure products with Symantec products to deliver a secure, manageable, and cost-effective DaaS solution.

Of course, VMware made many announcements of its own, starting with VMware View, which enhances the company’s virtual desktop offerings, and VMware Horizon, dubbed a platform for the post-PC era. Horizon handles a variety of end user tasks for virtualized Windows applications and mobile users. VMware also used the event to launch vSphere 5, the latest enhancement to its vSphere virtualization platform for building cloud infrastructures. This will surely trigger a clash of titans as every major IT vendor brings out its cloud virtualization platform.

Not to be outdone or ignored, Microsoft, another IT titan, chose the VMworld kickoff to launch its counter-initiative through two executives who announced new, reduced pricing. This was a pointed attack on VMware, which recently raised prices through a backdoor change in its pricing model that raised a howl from customers. Said the executives: this is “a great time to showcase the value of Microsoft’s cloud offerings versus those from competitors VMware and Salesforce.com.” They promise Microsoft customers 4-10 times savings over a period of one to three years. Ain’t competition great?

New IBM z114—a Midrange Mainframe

IBM introduced its newest mainframe in the zEnterprise family, the z114, a business class rather than enterprise class machine. With the z114, IBM can now deliver a more compelling total cost of acquisition (TCA) case, giving midrange enterprises another option as they consolidate, virtualize, and migrate their sprawling server farms. This will be particularly interesting to shops running HP Itanium or Oracle/Sun servers.

The z114 comes with a $75,000 entry price. At this price, it can begin to compete with commodity high-end servers on a TCA basis, especially if it is bundled with discount programs like IBM’s System z Solution Editions and unpublicized offers from IBM Global Finance (IGF). There should be no doubt: IBM is willing to deal to win midrange workloads from other platforms.

First, the specs, speeds, and feeds: the z114 is available in two models, a single-drawer model, the M05, and a two-drawer model, the M10, which offers additional capacity for I/O and coupling expansion and/or more specialty engines. It comes with up to 10 configurable cores, which can be designated as general purpose or specialty engines (zIIP, zAAP, IFL, ICF) or used as spares. The M10 also allows two dedicated spares, a first for a midrange mainframe.

The z114 uses a superscalar design that runs at 3.8 GHz, an improved cache structure, a new out-of-order execution sequence, and over 100 new hardware instructions that deliver better per-thread performance, especially for database, WebSphere, and Linux workloads. The base z114 starts at 26 MIPS but can scale to over 3100 MIPS across five central processors and the additional horsepower provided by its specialty engines.

The z114 mainly will be a consolidation play. IBM calculates that workloads from as many as 300 competitive servers can be consolidated onto a single z114. IBM figures the machine can handle workloads from 40 Oracle server cores using just three processors running Linux, and compared to the Oracle servers, IBM estimates the new z114 will cost 80% less. Similarly, IBM figures that a fully configured z114 running Linux on z can create and maintain a Linux virtual server for approximately $500 per year.
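
Using IBM's own figures quoted above, the back-of-the-envelope arithmetic looks roughly like this; the numbers are IBM's claims restated, not independent measurements.

```python
# Back-of-the-envelope math using the figures quoted above; these are
# IBM's claims restated, not independent measurements.
virtual_servers = 300        # competitive servers IBM says one z114 can absorb
cost_per_vm_year = 500       # IBM's figure for maintaining a Linux VM on the z114

print(f"Annual Linux VM cost on a fully loaded z114: ${virtual_servers * cost_per_vm_year:,}")

oracle_saving = 0.80         # "80% less" than the compared Oracle servers
print(f"z114 cost as a fraction of the Oracle setup: {1 - oracle_saving:.0%}")
```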

As a consolidation play, the zEnterprise System will get even more interesting later this year when x86 blades supporting Windows become available. Depending on the pricing, the z114 could become a Windows consolidation play too.

Today even midrange enterprises are multi-platform shops. For this, the z114 connects to the zBX, a blade expansion cabinet, where it can integrate and manage workloads running on POWER7-based blades as well as the IBM Smart Analytics Optimizer and WebSphere DataPower blades for integrating web-based workloads. In addition, IBM promises support for Microsoft Windows on select System x server blades soon.

To achieve a low TCA, IBM clearly is ready to make deals. For example, IBM also has lowered software costs to deliver the same capacity for 5-18% less through a revised Advanced Workload License Charges (AWLC) pricing schedule. A new processor value unit (PVU) rating on IFLs can lower Linux costs by as much as 48%.

The best deal, however, usually comes through the System z Solution Edition Program, which BottomlineIT’s sister blog, DancingDinosaur, has covered in detail. It bundles System z hardware, software, middleware, and three years of maintenance into a deeply discounted package price. Initial Solution Editions for the z114 will be WebSphere, Linux, and probably SAP.

IGF also can lower costs, starting with a six-month payment deferral. You can acquire a z114 now but not begin paying for it until next year. The group also is offering all IBM middleware products, mainly WebSphere Application Server and Tivoli, interest free (0%) for twelve months. Finally, IGF can lower TCA through leasing, which could further reduce the cost of the z114 by up to 3.5% over three years.

By the time you’ve configured the z114 the way you want it and netted out the various discounts, even with a Solution Edition package, it will probably cost more than $75,000. Even the most expensive HP Itanium server beats that on a box-for-box basis. But as soon as there are multiple servers in a consolidation play, that’s where the z114 payback lies.
