Posts Tagged virtualization

Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers when compared to the public cloud, and at a somewhat higher VM count when compared to x86 machines. View the IBM z TCO presentation here.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse workloads (a range of low, medium, and high I/O). In general it found an advantage for the z Enterprise Cloud System of 34-73%.  The z cost considerably more in terms of hardware but more than made up for it in terms of software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.
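
To see how such a multi-category comparison is put together, here is a minimal sketch in Python of a three-year TCO roll-up across the same cost buckets. Every platform label and dollar figure in it is an invented placeholder for illustration, not a number from the IBM study.

    # Minimal 3-year TCO roll-up sketch. The cost categories mirror the analysis
    # described above, but every figure is a placeholder, not an IBM number.
    cost_categories = ["hardware", "software", "labor", "power_space"]

    platforms = {  # annual cost per category, in $ millions (illustrative only)
        "public_cloud": {"hardware": 0.4, "software": 1.6, "labor": 1.2, "power_space": 0.0},
        "x86_cloud":    {"hardware": 0.5, "software": 0.7, "labor": 0.3, "power_space": 0.1},
        "z_cloud":      {"hardware": 0.6, "software": 0.2, "labor": 0.1, "power_space": 0.05},
    }

    YEARS = 3

    def three_year_tco(annual_costs):
        return YEARS * sum(annual_costs[c] for c in cost_categories)

    tcos = {name: three_year_tco(costs) for name, costs in platforms.items()}
    baseline = tcos["public_cloud"]
    for name, tco in sorted(tcos.items(), key=lambda kv: kv[1]):
        print(f"{name:12s}  ${tco:.2f}M over {YEARS} years "
              f"({(1 - tco / baseline) * 100:.0f}% below public cloud)")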

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, and enterprise support, with free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO at $1 million, compared to $1.6 million for x86 systems and $3.9 million for the public cloud.

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices, or run a different mix of high, medium, and low I/O workloads, your results will be different, but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there, hanging around the bloggers’ lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog

Change-proof Your Organization

Many organizations are being whiplashed by IT infrastructure change—costly, disruptive, never-ending changes that hinder IT and the organization.  You know the drivers: demand for cloud computing, mobile, social, big data, real-time analytics, and collaboration. Add to those soaring transaction volumes, escalating amounts of data, 24x7x365 processing, new types of data, proliferating forms of storage, incessant compliance mandates, and more, all of which keep driving change. And there is no letup in sight.

IBM started to articulate this in a blog post, Infrastructure Matters. IBM was focusing on cloud and data, but the issues go even further. It is really about change-proofing, not just IT but the business itself.

All of these trends put great pressure on the organization, which forces IT to repeatedly tweak the infrastructure or otherwise revamp systems. This is costly and disruptive not just to IT but to the organization.

In short, you need to change-proof your IT infrastructure and your organization.  And you have to do it economically and in a way you can efficiently sustain over time. The trick is to leverage some of the very same  technology trends creating change to design an IT infrastructure that can smoothly accommodate changes both known and unknown. Many of these we have discussed in BottomlineIT previously:

  • Cloud computing
  • Virtualization
  • Software defined everything
  • Open standards
  • Open APIs
  • Hybrid computing
  • Embedded intelligence

These technologies will allow you to change your infrastructure at will, changing your systems in any variety of ways, often with just a few clicks or tweaks to code.  In the process, you can eliminate vendor lock-in and obsolete, rigid hardware and software that has distorted your IT budget, constrained your options, and increased your risks.

Let’s start by looking at just the first three listed above. As noted above, all of these have been discussed in BottomlineIT and you can be sure they will come up again.

You probably are using aspects of cloud computing to one extent or another. There are numerous benefits to cloud computing but for the purposes of infrastructure change-proofing only three matter:  1) the ability to access IT resources on demand, 2) the ability to change and remove those resources as needed, and 3) flexible pricing models that eliminate the upfront capital investment in favor of paying for resources as you use them.
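
On the pricing point specifically, a back-of-the-envelope comparison makes the appeal easy to see. The sketch below, with entirely invented numbers, contrasts buying a server up front against paying an hourly cloud rate only when demand requires it.

    # Back-of-the-envelope comparison of owning capacity versus paying for cloud
    # resources only when used. All numbers here are invented for illustration.
    UPFRONT_SERVER_COST = 12_000     # capital cost to own a server for the year
    HOURLY_CLOUD_RATE = 1.50         # assumed cost per server-hour on demand

    # Demand varies: a few busy months, many quiet ones (server-hours needed).
    monthly_demand_hours = [720, 720, 200, 150, 150, 100, 100, 100, 150, 200, 720, 720]

    cloud_cost = HOURLY_CLOUD_RATE * sum(monthly_demand_hours)
    print(f"pay-as-you-go: ${cloud_cost:,.0f}   owned capacity: ${UPFRONT_SERVER_COST:,.0f}")
    # With spiky demand, pay-as-you-go wins; rerun with steady 24x7 demand
    # (8,760 hours) and owning the hardware can come out ahead.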

Yes, there are drawbacks to cloud computing. Security remains a concern although increasingly it is becoming just another manageable risk. Service delivery reliability remains a concern although this too is a manageable risk as organizations learn to work with multiple service providers and arrange for multiple links and access points to those providers.

Virtualization remains the foundational technology behind the cloud. Virtualization makes it possible to deploy multiple images of systems and applications quickly and easily as needed, often in response to widely varying levels of service demand.
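
As a concrete illustration, here is a minimal sketch of that kind of rapid deployment using the libvirt Python bindings against a local KVM host. The image path, guest sizing, and the stripped-down domain XML are placeholders only; a real deployment would add networking, guest personalization, and error handling.

    # A minimal sketch: spin up several guests from one master image using the
    # libvirt Python bindings and qcow2 copy-on-write clones. Paths, sizing, and
    # the bare-bones domain XML are placeholders, not a production configuration.
    import subprocess
    import libvirt

    MASTER_IMAGE = "/var/lib/libvirt/images/master.qcow2"   # assumed golden image

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>{name}</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='{disk}'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")
    for i in range(5):                       # demand spiked: deploy five clones
        disk = f"/var/lib/libvirt/images/web{i}.qcow2"
        # thin copy-on-write clone backed by the master image
        subprocess.run(["qemu-img", "create", "-f", "qcow2",
                        "-b", MASTER_IMAGE, "-F", "qcow2", disk], check=True)
        dom = conn.defineXML(DOMAIN_XML.format(name=f"web{i}", disk=disk))
        dom.create()                         # boot the guest
    conn.close()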

Software defined everything also makes extensive use of virtualization. It inserts a virtualization layer between the applications and the underlying infrastructure hardware.  Through this layer the organization gains programmatic control of the software defined components. Most frequently we hear about software defined networks that you can control, manage, and reconfigure through software running on a console regardless of which networking equipment is in place.  Software defined storage gives you similar control over storage, again generally independent of the underlying storage array or device.
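
A small sketch shows the shift in mindset: the desired network is described as data and handed to a controller's API, rather than configured switch by switch. The controller URL and payload schema below are hypothetical, purely to illustrate the pattern, and the Python requests library is assumed.

    # Software-defined idea in miniature: declare the network you want as data,
    # then push it to a controller API, independent of the switches underneath.
    # The controller endpoint and payload fields are hypothetical.
    import requests

    CONTROLLER = "https://sdn-controller.example.com/api/v1"   # hypothetical endpoint

    desired_network = {
        "name": "web-tier",
        "vlan": 210,
        "subnet": "10.20.30.0/24",
        "qos": {"max_mbps": 500},
        "acls": [{"allow": "tcp/443", "from": "any"}],
    }

    # Reconfiguring the network means changing the data and re-posting it,
    # not re-cabling or logging into individual devices.
    resp = requests.post(f"{CONTROLLER}/networks", json=desired_network, timeout=10)
    resp.raise_for_status()
    print("network applied:", resp.json().get("id"))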

All these technologies exist today at different stages of maturity. Start planning how to use them to take control of IT infrastructure change. The world keeps changing and the IT infrastructures of many enterprises are groaning under the pressure. Change-proofing your IT infrastructure is your best chance of keeping up.

Big Data and Analytics as Game Changing Technology

If you ever doubted that big data was going to become important, there should be no doubt anymore. The headlines of the past couple of weeks about the government capturing and analyzing massive amounts of daily phone call data should convince you.

That this report was shortly followed by more reports of the government tapping the big online data websites like Google, Yahoo, and such for even more data should alert you to three things:

1—There is a massive amount of data out there that can be collected and analyzed.

2—Companies are amassing incredible volumes of data in the normal course of serving people who readily and knowingly give their data to these organizations. (This blogger is one of those tens of millions.)

3—The tools and capabilities are mature enough for someone to sort through that data and connect the dots to deliver meaningful insights.

Particularly with regard to the last point, this blogger thought the industry was still five years away from generating meaningful results from that amount of data coming in at that velocity. Sure, marketers have been sorting and correlating large amounts of data for years, but it was mostly structured data and nowhere near this much of it. BTW, your blogger has been writing about big data for some time.

If the news reports weren’t enough, it became clear at IBM Edge 2013, wrapping up in Las Vegas this week, that big data analytics is happening now and familiar companies are succeeding at it. It also is clear that there is sufficient commercial off-the-shelf computing power from companies like IBM and others, and analytics tools from a growing number of vendors, to sort through massive amounts of data and make sense of it fast.

An interesting point came up in one of the many discussions at Edge 2013 touching on big data. Every person’s data footprint is as unique as a fingerprint or other biometrics. We all visit different websites and interact with social media and use our credit and debit cards in highly individual ways. Again, marketers have sensed this at some level for years, but they haven’t yet really honed it down to the actual individual on a mass scale, although there is no technical reason one couldn’t. You now can, in effect, market to a demographic of one.

A related conference is coming up Oct. 21-25 in Orlando, FL, called Enterprise Systems 2013.  It will combine the System z and Power Systems Technical Universities along with a new executive-focused Enterprise Systems event. It will include new announcements, peeks into trends and directions, over 500 expert technical sessions across 10 tracks, and a comprehensive solution center. This blogger has already put it on his calendar.

There was much more interesting information at Edge 2013, such as using data analytics and cognitive computing to protect IT systems.  Perimeter defense, anti-virus, and ID management are no longer sufficient. Stay tuned.

IT Chaos Means Opportunity for the CIO

Hurricanes, hybrid superstorms, earthquake-tsunami combinations, extreme heat, heavy snow in April are just a few signs of chaos. For IT professionals specifically, chaos today comes from the proliferation of smartphones and BYOD or the deluge of data under the banner of big data. A sudden shift to the deployment of massive numbers of ARM processors or extreme virtualization might trigger platform chaos.  A shortage of sufficient energy can lead to another form of chaos. Think of it this way: chaos has become the new normal.

Big consulting firms have latched onto the idea of chaos. Deloitte looks to enterprise data management to create order out of chaos. At Capgemini, organizations’ increasing need to deal with unstructured processes that ordinary Business Process Management (BPM) solutions were not designed to cope with can be enough to lead to chaos. Their solution: developing case management around a BPM solution, preferably in conjunction with an Enterprise Content Management system, solves many of the problems.

Eric Berridge, co-founder of Bluewolf Group, a leading consulting firm specializing in Salesforce.com implementations, put it best when he wrote in a recent blog that CIOs must learn to harness chaos for a very simple reason: business is becoming more chaotic. Globalization and technology, which have turned commerce on its head over the past 20 years, promise an even more dizzying rate of change in the next decade.

Berridge’s solution draws on the superhero metaphor. The CIO has to become Captain Chaos, the one able to overcome a seemingly insurmountable level of disarray to deliver the right value at the right time. And you do that by following a few straightforward tips:

First, don’t build stuff you don’t absolutely have to build. You want your organization to travel as light as possible. If you build systems you are stuck with them. Instead, you want to be able to change systems as fast as the business changes in response to whatever chaos is swirling at the moment. That means you need to aim for an agile IT infrastructure, probably one that can tap a variety of cloud services and turn them on and off as needed.

Then, recognize the consumerization of IT and the chaos it has sparked.  This is not something to be resisted but embraced and facilitated in ways that give you and your organization the measure of control you need. Figure out how to take advantage of the consumerization of IT through responsive policies, elastic infrastructure, and flexible security capabilities.

Next, encourage the organization’s R&D and product development groups to also adopt agile methods and approaches to innovation, especially through social media and other forms of collaboration. Even encourage them to go a step further by reaching out to customers to participate.  Your role as CIO at this point is to facilitate interaction among the parties who can create successful innovation.

Finally, layer on enough just-in-time governance to enable the organization to manage the collaboration and interactivity. The goal is to rein in chaos and put it to work. To do that you need to help set priorities, define objectives, execute plans, and enforce flexible and agile policies—all the things that any successful business needs to do but do so in the context of a chaotic world that is changing in ways you and top management can’t predict.

As CIO this puts big demands on you too. To start, you have to keep your finger on the pulse of what is happening with the world at large, in business and with technology. That means you need to figuratively identify and place sensors and monitors that can tip you off as things change. You also can’t master every technology. Instead you need to identify an ever-changing stable of technology masters you can call on as needed and familiarize yourself with the vast amount of resources available in the cloud.

In the end, these last two points—a stable of technology masters you can call upon and deep familiarity with cloud resources—will enable you to deliver the most value to your organization despite the chaos of the moment. At that point you truly become Captain Chaos, the one your organization counts on to deal with ever changing chaos.

5 Things CIOs should be Thankful for This Thanksgiving

CIOs have a number of things from a technology standpoint to be thankful for. You have been reading about these technologies all year here and here.

These help you reduce costs, improve business processes, and boost your efficiency and the efficiency of your organization:

  1. Virtualization—increases the utilization and flexibility of IT resources
  2. Cloud computing—enables you to efficiently consume and deliver business capabilities as services
  3. Mobile devices (smartphones, tablets)—un-tethers you from the constraints of the office, location, and time
  4. Social business—enables new ways to get close to your customers and turn them into evangelists for your business
  5. Moore’s Law—ensures that the cost of IT capabilities continues to steadily drop on a per-unit-of-work basis as it has for decades.

Are all of these unqualified, 100% gains with no downsides? Probably not (with the exception of Moore’s Law), but no organization that has benefited from any of them wants to go back.  Happy Thanksgiving.

Coping with Increased Data Center Complexity

Last week Symantec, a leading data center software tools provider, released its annual state of the data center survey results. You can view the full report here. The overriding issue, it turns out, is the increasing complexity of the data center. As CIO you’re probably aware of this, but there seems to be little you can do except request more budget and more resources. Or is there?

Although the study cites a number of factors driving data center complexity, survey respondents appear to focus in on one primary response: an increased need for governance. This is not something a CIO would typically initiate. Also suggested is taking steps to intelligently manage organizational resources in an effort to rein in operational costs and control information growth.

More specifically, Symantec suggests that organizations implement controls such as standardization or establish an information governance strategy to keep information from becoming a liability. Nobody doubts that the seemingly unrestrained proliferation of data and of systems that generate it and use it are driving data center complexity.  But don’t blame IT alone; it is the business that is demanding everything from mobility to analytics.

The leading complexity driver, cited by 65% of the respondents, turns out to be the increasing number of business-critical applications. Other key drivers of complexity include growth in the volume of data, mobile computing, server virtualization, and cloud computing.

Organizations may benefit from mobile computing and the efficiency and agility that result from virtualization and cloud computing, but these capabilities don’t come without a cost. In fact, the most commonly mentioned effect of complexity was higher costs, cited by nearly half of the organizations surveyed. Without budgets increasing commensurately, organizations gain valuable capabilities in one area only by constraining activity in other areas.

Other impacts cited by respondents include: reduced agility (cited by 39% of respondents); longer lead times for storage migration (39%) and provisioning storage (38%); longer time to find information (37%); security breaches (35%); lost or misplaced data (35%); increased downtime (35%); and compliance incidents (34%).

Increased downtime should raise a few eyebrows. In a modern enterprise, when systems go down, work and productivity essentially slow to a halt. Some workers can improvise for a while but they can only go so far.  The survey found the typical organization experienced an average of 16 data center outages in the past 12 months, at a total cost of $5.1 million, or roughly $320,000 per outage. The most common cause was system failures, followed by human error and natural disasters.

According to the survey, organizations are implementing several measures to reduce complexity, including training, standardization, centralization, virtualization, and increased budgets. The days of doing more with less should be over for now as far as the data center is concerned: 63% consider increasing their budget to be somewhat or extremely important in dealing with data center complexity.

But the biggest initiative organizations are undertaking is to implement a comprehensive information governance strategy, defined as a formal program that allows organizations to proactively classify, retain, and discover information in order to reduce information risk, reduce the cost of managing information, establish retention policies, and streamline the eDiscovery process. Fully 90% of organizations are either discussing information governance or have implemented trials or actual programs.

While there are technology tools to assist with data center governance, this is not an issue that responds to an IT solution. This kind of governance mostly requires meetings among the business and IT to hash out the ownership and responsibility for various data, establish policies and procedures, and then lay out monitoring and enforcement. None of this is rocket science, but it does take time and resources.

Symantec goes on to make the following recommendations:

  • Establish C-level ownership of information governance.
  • Get visibility beyond IT platforms down to the actual business services.
  • Understand what IT assets you have, how they are being consumed, and by whom.
  • Reduce the number of backup applications to meet recovery SLAs.
  • Deploy deduplication everywhere to help constrain the information explosion (a brief sketch of the idea follows this list).
  • Use appliances to simplify server and storage operations across physical and virtual machines.
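
On the deduplication recommendation, the underlying idea is simple enough to show in a few lines: store each unique chunk of data once, keyed by its hash, and let duplicates point at the stored copy. The sketch below is a toy in-memory version for illustration, not how a dedup appliance is actually built.

    # Toy content-based deduplication: each unique chunk is stored once, keyed
    # by its hash; files are kept as lists of chunk hashes ("recipes").
    import hashlib

    CHUNK_SIZE = 4096
    store = {}          # hash -> chunk (the single stored copy)

    def write_file(data: bytes):
        """Return the list of chunk hashes that reconstruct the file."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            key = hashlib.sha256(chunk).hexdigest()
            store.setdefault(key, chunk)        # duplicates are stored only once
            recipe.append(key)
        return recipe

    # Two nearly identical backups mostly share chunks, so stored bytes barely grow.
    backup1 = write_file(b"A" * 8192 + b"unique tail 1")
    backup2 = write_file(b"A" * 8192 + b"unique tail 2")
    stored = sum(len(c) for c in store.values())
    logical = 2 * (8192 + len(b"unique tail 1"))
    print(f"logical bytes: {logical}, physically stored: {stored}")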

You can also rationalize systems by eliminating redundant or unused applications, consolidating the systems and the vendors who provide them down to a small handful, and standardizing on a few platforms and operating systems. Seen that way, strategies like BYOD become a prescription for complexity.

The world in general is becoming more complex, and this is especially apparent in the data center due to increasing demands by the business for various IT services and the need to manage ever-growing amounts of information. Unless you take steps to rein it in, it will only get worse.

Low-Cost Fast Path to Private Cloud

The private cloud market—built around a set of virtualized IT resources behind the organization’s firewall—is growing rapidly. Private cloud vendors have been citing the latest Forrester prediction that the private cloud market will grow to more than $15 billion in 2020. Looking at a closer horizon, IDC estimates the private cloud market will grow to $5.8 billion by 2015.

The appeal of the private cloud comes from its residing on-premise and its ability to leverage existing IT resources wherever possible. Most importantly, the private cloud addresses the concerns of business executives about cloud security and control.

The promise of private clouds is straightforward:  more flexibility and agility from their systems, lower total costs, higher utilization of the hardware, and better utilization of the IT staff. In short, organizations want all the benefits of public cloud computing along with the security of keeping it private behind the enterprise firewall.

Private clouds can do this by delivering IT as a service and freeing up IT manpower through self-service automation. The private cloud sounds simple. Private clouds don’t, however, come that easily. They require sophisticated virtualization and automation.  “Up-front costs are real, and choosing the right vendor to manage or deploy an environment is equally important,” says senior IDC analyst Katie Broderick.

IBM, however, may change the private cloud financial equation with its newest SmartCloud Entry offering based on IBM System x (x86 servers) and VMware.  The starting price is surprisingly low, under $60,000.

The IBM SmartCloud Entry starts with a flexible, modular design that can be installed quickly. It also can handle integrated management; automated provisioning through a service request catalog, approvals, metering, and billing; and do it all through a consolidated management console, a single pane of glass. The result: the delivery of standardized IT services on the fly and at a lower cost through automation. A business person, according to IBM, can self-provision services through SmartCloud Entry in four mouse clicks, something even a VP can handle.
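
Behind those four clicks sits a catalog-and-request workflow. The sketch below shows the general shape of such a self-service flow against a purely hypothetical REST catalog API; it is not the actual SmartCloud Entry interface, and the endpoint, fields, and offering name are invented.

    # Rough shape of a self-service provisioning flow: browse the catalog,
    # submit a request, poll until approval/metering/provisioning complete.
    # The catalog API here is hypothetical.
    import time
    import requests

    CATALOG = "https://cloud.example.com/api/catalog"          # hypothetical endpoint

    offerings = requests.get(CATALOG, timeout=10).json()        # click 1: browse catalog
    web_server = next(o for o in offerings if o["name"] == "web-server-medium")

    request = requests.post(f"{CATALOG}/requests", json={       # clicks 2-3: configure, submit
        "offering_id": web_server["id"],
        "project": "marketing-site",
        "sizing": {"vcpus": 2, "memory_gb": 4},
    }, timeout=10).json()

    while True:                                                  # click 4: confirm, then wait
        status = requests.get(f"{CATALOG}/requests/{request['id']}", timeout=10).json()
        if status["state"] in ("ready", "failed"):
            break
        time.sleep(30)           # approval, metering, and billing happen behind the scenes
    print("provisioned:", status.get("ip_address"))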

The prerequisite for any private cloud is virtualized systems.  Start by consolidating and virtualizing servers, storage, and networking to reduce operating and capital expenses and streamline systems management. Virtualization is essential to achieve the flexibility and efficiency organizations want from their private cloud. They must virtualize as the first step in IBM’s SmartCloud Entry or any other private cloud.

From there you improve speed and business agility through SmartCloud Entry capabilities like automated service deployment, portal-based self-service provisioning, and simplified administration.  In short you create master images of the desired software, convert the images for use with inexpensive tools like the open source KVM hypervisor, and track the images to ensure compliance and minimize security risks. Finally you can gain efficiency by reducing both the number of images and the storage required for them. From there just deploy the software images on request through end user self-service combined with virtual machine isolation capabilities and project-level user access controls for security.
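
As a small illustration of the image pipeline just described, the sketch below converts a master image into the qcow2 format KVM uses and records a checksum so deployed copies can be tracked. The paths and the flat JSON catalog file are placeholders; a real setup would use proper image-management tooling.

    # Convert a master image for the KVM hypervisor, then fingerprint it so
    # compliance tracking can detect drift. Paths and the catalog are placeholders.
    import hashlib
    import json
    import subprocess

    SOURCE = "/images/master/webserver.vmdk"      # master image built elsewhere
    TARGET = "/images/kvm/webserver.qcow2"
    CATALOG = "/images/kvm/catalog.json"          # simple image-tracking record

    # Convert the master image into qcow2 for KVM.
    subprocess.run(["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2",
                    SOURCE, TARGET], check=True)

    # Hash the converted image so tampering or drift can be detected later.
    sha = hashlib.sha256()
    with open(TARGET, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            sha.update(block)

    with open(CATALOG, "w") as f:
        json.dump({"image": TARGET, "sha256": sha.hexdigest()}, f, indent=2)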

By doing this—deploying and maintaining the application images, delegating and automating the provisioning, standardizing deployment, and simplifying administration—the organization can cut the time to deliver IT capabilities through a private cloud from months to 2-3 days, actually to just hours in some cases. This is what enables business agility—the ability to respond to changes fast—with reduced costs through a more efficient operation.

At $60k, the IBM x86 SmartCloud Entry offering is a good place to start, although IBM has private cloud offerings for Linux and Power Systems as well. All major IT vendors are targeting private clouds, though few can deliver as much of the stack as IBM. Microsoft offers a number of private cloud solutions here. HP provides a private cloud solution for Oracle, here, while Oracle has an advanced cluster file system for private cloud storage here.  NetApp, primarily a storage vendor, has partnered with others to deliver a variety of NetApp private cloud solutions for VMware, Hyper-V, SAP, and more.
