Posts Tagged cloud computing

The Data Center’s Hybrid Cloud Future

Nearly half of large enterprises will have hybrid cloud deployments by the end of 2017. Hybrid cloud computing is at the same place today that private cloud was three years ago: actual deployments are low, but aspirations are high, according to a Gartner news note published in October 2013.

Almost every organization today uses some form of cloud computing, usually public clouds. In its State of the Cloud survey, RightScale found that 94% of organizations are running applications or experimenting with infrastructure-as-a-service, 87% are using public cloud, and 74% of enterprises have a hybrid cloud strategy, with more than half of those already using both public and private cloud. RightScale’s estimates may be a bit more generous than Gartner’s, but both reach the same conclusion: hybrid cloud is the approach of choice.

Executive management, however, prefers private clouds for the control and security they promise. The actual control and security may be no better than what the organization could achieve in a properly deployed public cloud, but executives don’t realize that. So private clouds are management’s darling for now.

Private clouds, however, fail to deliver the key benefits of cloud computing—cost efficiency and business agility. The organization still has to invest in all the IT resources, capacity, and capabilities of the private cloud. Unlike the public cloud, these are not shared resources. They may repurpose some of their existing IT investment for the private cloud but they invariably will again have to acquire additional IT resources and capacity as before. And, the demand for resources may increase as business units come to like IT-as-a-Service (ITaaS), the rationale for private clouds in the first place.

As for business agility with private clouds—forget it. If new capabilities are required to meet some new business requirement, the organization will have to build or acquire that capability as it did before. The backlogs for developing new capabilities do not magically go away with ITaaS and private clouds. If business agility requires the business to pivot on a moment’s notice to meet new challenges and opportunities, there is only one way the private cloud can do it: develop it in-house, the old-fashioned way.

Hybrid clouds provide the answer. Gartner, Inc. defines a hybrid cloud as a cloud computing service that is composed of some combination of private, public and community cloud services from different service providers. In the hybrid cloud scenario, the company can rely on its private cloud and happily cruise along until it needs a capability or resource it can’t deliver. Then the company reaches out through the hybrid cloud to the public cloud for the required capability. Rather than build it, the organization basically rents the capability, paying only for what it uses when it uses it. This is ideal when the organization needs to temporarily augment resources, capacity, or capabilities to meet an unanticipated need.
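The burst-to-public decision described above can be sketched as a simple capacity check. This is an illustrative sketch only; the capacity threshold and function names are invented for the example, not any vendor’s API:

```python
# Hypothetical sketch of the hybrid cloud "burst" decision: run on the
# private cloud until capacity runs out, then rent overflow capacity
# from the public cloud, paying only for what you use.

PRIVATE_CAPACITY_VMS = 200  # assumed size of the private cloud

def place_workload(requested_vms: int, in_use_vms: int) -> dict:
    """Split a request between private capacity and public overflow."""
    available = max(PRIVATE_CAPACITY_VMS - in_use_vms, 0)
    private_share = min(requested_vms, available)
    public_share = requested_vms - private_share  # rent only the overflow
    return {"private": private_share, "public": public_share}

# A demand spike beyond private capacity spills into the public cloud:
print(place_workload(requested_vms=80, in_use_vms=150))
# {'private': 50, 'public': 30}
```

The same logic runs in reverse when the spike subsides: the rented public instances are released and costs drop back to the private baseline.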

Hybrid clouds, unfortunately, don’t just pop up overnight. First you need to lay the groundwork for your hybrid cloud. That entails identifying the specific cloud resources and services in advance, making the necessary financial arrangements with appropriate public cloud vendors, and establishing and testing the connections. Also, check with your auditors who will want to be assured about security and governance and similar details.

While you are at it, make sure your networking and security teams are on board. Ports will need to be opened; the firewall gods will need to be appeased. You also will need to think about how these new capabilities and services will integrate with the capabilities and services you already have. This isn’t necessarily a major undertaking as IT projects go but will take some time—days or, more likely, a few weeks—to get the approvals, assemble all the pieces, and get them configured and tested and ready to deploy.

As RightScale notes, although the use of cloud is a given, enterprises often have different strategies that involve varying combinations of public, private, and hybrid cloud infrastructure. For most, however, the hybrid cloud provides the best of all cloud worlds, especially in terms of cost and agility. You can run ITaaS from your private cloud and pass through your hybrid cloud whenever you need public cloud resources you don’t have in house. Just make sure you set it up in advance so it is ready to go when you need it.


Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers compared to the public cloud, and at a somewhat higher number of VMs compared to x86 machines. View the IBM z TCO presentation here.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%. The z cost considerably more in terms of hardware but it more than made up for it in terms of software, labor, and power. Overall, the TCO examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor costs were far less. In terms of 3-year TCO for the 398 workloads, the public cloud was the highest at $37M, x86 came in at $18.3M, and the Cloud on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO at $1M, compared to $1.6M for x86 systems and $3.9M for the public cloud.
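As a quick sanity check, the relative savings implied by those 3-year TCO figures can be computed directly. The figures (in millions of dollars) are the ones reported above; the helper function is just for illustration:

```python
# 3-year TCO figures (in $ millions) from the IBM analysis discussed above,
# keyed by workload count.
tco = {
    48:  {"z_cloud": 1.0, "x86": 1.6,  "public": 3.9},
    398: {"z_cloud": 9.4, "x86": 18.3, "public": 37.0},
}

def savings_vs(workloads: int, rival: str) -> float:
    """Percent saved by the z Enterprise Cloud System versus a rival platform."""
    row = tco[workloads]
    return round((1 - row["z_cloud"] / row[rival]) * 100, 1)

print(savings_vs(398, "x86"))     # 48.6 (% cheaper than x86)
print(savings_vs(398, "public"))  # 74.6 (% cheaper than public cloud)
```

Those two endpoints line up with the 49-75% range IBM quotes for the 398-workload case.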

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices or a different mix of high-mid-low I/O workloads your results will be different but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog


New Enterprise Systems Maturity Model

Does your shop use maturity models to measure where you stand and where you should be going compared with industry trends and directions? Savvy IT managers have often used such a model to pinpoint where their organization stood on a particular issue as part of their pitch for an increased budget to hire more people or acquire newer, faster, greater IT capabilities.

Today maturity models are still around but they are more specialized now. There are, for instance, maturity models for security and IT management. Don’t be surprised to see maturity models coming out for cloud or mobile computing if they are not already here.

Earlier this year, Compuware introduced a new maturity model for the enterprise data center.  You can access it here. Compuware describes the new maturity model as one that helps organizations assess and improve the processes for managing application performance and costs as distributed and mainframe systems converge.

Why now? The data center and even the mainframe have been changing fast with the advent of cloud computing, mobile, and big data/analytics. Did your enterprise data center team ever think they would be processing transactions from mobile phones or running analytic applications against unstructured social media data? Did they ever imagine they would be handling compound workloads across mainframes and multiple distributed systems running both Windows and Linux?  Welcome to the new enterprise data center normal.

Maybe the first difference you’ll notice in the new maturity model is the new mix of people populating the enterprise data center. Now you need to accommodate distributed and open systems along with the traditional mainframe environment. It requires that you bring together completely different teams and integrate them. Throw in mobile, big data, analytics, and social and you have a vastly different reality than you had even a year ago. And with that comes the need to bridge the gap that has long existed between the enterprise (mainframe) and distributed data center teams. This is a cultural divide that will have to be navigated, and the new enterprise IT maturity model can help.

The new data center normal, however, hasn’t changed data center economics, except maybe to exacerbate the situation. The data center has always been under pressure to rein in costs and use resources, both CPUs and MIPS, efficiently.  Those pressures are still there but only more so because the business is relying on the data center more than ever before as IT becomes increasingly central to the organization’s mission.

Similarly, the demand for high levels of quality of service (QoS) not only continues unabated but is expanding. The demand for enterprise-class QoS now extends to compound workloads that cross mainframe and distributed environments, leaving the data center scrambling to meet new end user experience (EUE) expectations even as it pieces together distributed system QoS work-arounds. The new enterprise IT maturity model will help blend these two worlds and address the more expansive role IT is playing today.

To do this the model combines distributed open systems environments with the mainframe and recognizes different workloads, approaches, processes, and tooling. It defines five levels of maturity: 1) ad hoc, 2) technology-centric, 3) internal services-centric, 4) external services-centric, and 5) business-revenue centric.

Organizations at the ad hoc level, for example, primarily use the enterprise systems to run core systems and may still employ a green screen approach to application development. At the technology-centric level, there’s an emphasis on infrastructure monitoring to support increasing volumes, higher capacity, complex workload and transaction processing along with greater MIPS usage. As organizations progress from internal services-focused to external services-focused, mainframe and distributed systems converge and EUE and external SLAs assume a greater priority.

Finally, at the fifth or business centric level, the emphasis shifts to business transaction monitoring where business needs and EUE are addressed through interoperability of the distributed systems and mainframes with mobile and cloud systems. Here technologies provide real-time transaction visibility across the whole delivery chain, and IT is viewed as a revenue generator. That’s the new enterprise data center normal.
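The five levels above form an ordered scale, which makes them natural to capture in code. A minimal sketch (the identifier names are paraphrased from the level descriptions; the convergence rule reflects the progression described above):

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five levels of the enterprise systems maturity model described
    above, ordered from least to most mature."""
    AD_HOC = 1
    TECHNOLOGY_CENTRIC = 2
    INTERNAL_SERVICES = 3
    EXTERNAL_SERVICES = 4
    BUSINESS_REVENUE = 5

def converged(level: MaturityLevel) -> bool:
    """Mainframe and distributed systems converge as organizations progress
    to the external services-centric level and beyond."""
    return level >= MaturityLevel.EXTERNAL_SERVICES

print(converged(MaturityLevel.TECHNOLOGY_CENTRIC))  # False
print(converged(MaturityLevel.BUSINESS_REVENUE))    # True
```

An ordered enum like this lets an assessment tool compare a shop’s current level against a target level with plain comparison operators.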

In short, the new enterprise maturity model requires that enterprise and distributed computing come together and all staff work closely together, and that proprietary systems and open systems interoperate seamlessly. And there is no time for delay. Already DevOps, machine-to-machine computing (the Internet of Things), and other IT strategies, descendants of agile computing, are gaining traction while smart mobile technologies drive the next wave of enterprise computing.


Change-proof Your Organization

Many organizations are being whiplashed by IT infrastructure change—costly, disruptive, never-ending changes that hinder IT and the organization. You know the drivers: demand for cloud computing, mobile, social, big data, real-time analytics, and collaboration. Add soaring transaction volumes, escalating amounts of data, 24x7x365 processing, new types of data, proliferating forms of storage, and incessant compliance mandates. And there is no letup in sight.

IBM started to articulate this in a blog post, Infrastructure Matters. IBM was focusing on cloud and data, but the issues go even further. It is really about change-proofing, not just IT but the business itself.

All of these trends put great pressure on the organization, which forces IT to repeatedly tweak the infrastructure or otherwise revamp systems. This is costly and disruptive not just to IT but to the organization.

In short, you need to change-proof your IT infrastructure and your organization. And you have to do it economically and in a way you can efficiently sustain over time. The trick is to leverage some of the very same technology trends creating change to design an IT infrastructure that can smoothly accommodate changes both known and unknown. Many of these we have discussed in BottomlineIT previously:

  • Cloud computing
  • Virtualization
  • Software defined everything
  • Open standards
  • Open APIs
  • Hybrid computing
  • Embedded intelligence

These technologies will allow you to change your infrastructure at will, changing your systems in any variety of ways, often with just a few clicks or tweaks to code.  In the process, you can eliminate vendor lock-in and obsolete, rigid hardware and software that has distorted your IT budget, constrained your options, and increased your risks.

Let’s start by looking at just the first three listed above. As noted above, all of these have been discussed in BottomlineIT and you can be sure they will come up again.

You probably are using aspects of cloud computing to one extent or another. There are numerous benefits to cloud computing but for the purposes of infrastructure change-proofing only three matter:  1) the ability to access IT resources on demand, 2) the ability to change and remove those resources as needed, and 3) flexible pricing models that eliminate the upfront capital investment in favor of paying for resources as you use them.

Yes, there are drawbacks to cloud computing. Security remains a concern although increasingly it is becoming just another manageable risk. Service delivery reliability remains a concern although this too is a manageable risk as organizations learn to work with multiple service providers and arrange for multiple links and access points to those providers.

Virtualization remains the foundational technology behind the cloud. Virtualization makes it possible to deploy multiple images of systems and applications quickly and easily as needed, often in response to widely varying levels of service demand.

Software defined everything also makes extensive use of virtualization. It inserts a virtualization layer between the applications and the underlying infrastructure hardware.  Through this layer the organization gains programmatic control of the software defined components. Most frequently we hear about software defined networks that you can control, manage, and reconfigure through software running on a console regardless of which networking equipment is in place.  Software defined storage gives you similar control over storage, again generally independent of the underlying storage array or device.
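Programmatic control in practice usually means pushing configuration to a controller over a REST interface. The sketch below is illustrative only: the controller URL, endpoint path, and payload schema are invented for this example, though real software-defined network controllers expose similar, product-specific interfaces:

```python
# Illustrative sketch: reconfiguring a network segment through software,
# independent of whichever switches sit beneath the virtualization layer.
# The endpoint and payload are hypothetical, not a real controller's API.
import json
import urllib.request

def build_reconfigure_request(controller_url: str, vlan_id: int,
                              bandwidth_mbps: int) -> urllib.request.Request:
    """Build the HTTP request that would push a new network configuration
    to a software-defined network controller."""
    payload = json.dumps(
        {"vlan": vlan_id, "bandwidth_mbps": bandwidth_mbps}
    ).encode()
    return urllib.request.Request(
        f"{controller_url}/networks/{vlan_id}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = build_reconfigure_request("http://sdn.example.local", 42, 500)
print(req.get_method(), req.full_url)  # PUT http://sdn.example.local/networks/42
```

The point of the pattern is that the same few lines of code work regardless of which vendor’s networking gear sits under the controller.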

All these technologies exist today at different stages of maturity. Start planning how to use them to take control of IT infrastructure change. The world keeps changing and the IT infrastructures of many enterprises are groaning under the pressure. Change-proofing your IT infrastructure is your best chance of keeping up.


Fresh Toys Help IT Open New Business Opportunities in 2014

Much of what to expect for IT in 2014 you have already glimpsed right here in BottomlineIT, although some will be startlingly new. In many cases the new stuff will address opportunities that businesses are just starting to consider. Some of these, like 3D-printing or the e-wallet, have the potential to radically change the way business operates.

Let’s start with what you already know: 2014 will be cloud everything. The cloud is being steadily absorbed into business DNA as it evolves into the predominant way companies go to market, relate to their customers and partners, find employees, and deliver increasing aspects of their products as online services.

Also expect the continued hyping of big data, especially the unstructured data found everywhere, and analytics, which is necessary to make sense of the data. In 2014 analytics will be augmented by real-time analytics and predictive analytics, both of which can indeed deliver measurable business value.

In 2014 everything will be virtualized. In the process it will become defined by software. That means it will be programmable, allowing you to change its capabilities almost at will. Virtualized, software-defined capabilities will be in the products you acquire and the appliances you buy. Your next car will be software-defined and Internet (cloud) connected. Your video-enabled car will be able to park in a tighter space than you can park it yourself.

Mobile, in the form of smartphones and tablet devices, will be the devices of choice for more and more people worldwide. Your mobile device will increasingly handle your communications, shopping, purchasing, socializing, entertainment, and work tasks even as it takes over more of the functions of your wallet. Eventually the e-wallet will contain your identification, memberships, subscriptions, and credit and debit cards as security gets bolstered.

On to the completely new: business drones are coming, mainly in the form of smart, software-defined, programmable devices that can run errands. Basically they are taking robotics to a new level. Amazon hopes to use them to deliver items to your doorstep within hours of your purchase. What might your business do with a capability like this?

3D-printing is BottomlineIT’s favorite. Where the Internet disintermediated much of the traditional supply chain and distribution channel, 3D-printing can disintermediate manufacturers by producing the physical product at your desk. Now software-defined, customizable products can be cost-effectively manufactured for a market of just one. With 3D-printing you can deliver a customized version of your widget to a customer as readily as you send a fax. Can you make some money with that capability?

Finally, smart, wearable, cloud-connected computers in the form of wrist watches (remember the old Dick Tracy comics) and eyewear. Google Glass will become increasingly commonplace. Exactly what the business value of Google Glass will be remains unclear. Right now you buy it for the extreme cool factor.

So expect new IT goodies around the digital Xmas tree starting to arrive this year but in quantity by the end of 2014. Some may be a bust; others may be late in coming. As CIO, your job is to figure out which of these can help you meet your organization’s business goals. Best wishes for 2014.


Five Reasons Businesses Use the Cloud that IT Can Live With

By 2016, cloud will matter more to business leaders than to IT, according to the IBM Center for Applied Insights. In fact, cloud’s strategic importance to business leaders is poised to double from 34% to 72%. That’s more than for their IT counterparts, of whom only 58% acknowledge its strategic importance.

This shouldn’t be surprising. Once business leaders got comfortable with the security of the cloud it was just a matter of figuring out how to use it to lower costs or, better yet, generate more revenue faster. IT, on the other hand, recognized the cloud early on as a new form of IT outsourcing and saw it as a direct threat, which understandably dampened their enthusiasm.

IBM’s research—involving more than 800 cloud decision makers and users—painted a more business-friendly picture that showed the cloud able to deliver more than just efficiency, especially IT efficiency. Pacesetting organizations, according to IBM, are using cloud to gain competitive advantage through strategic business reinvention, better decision making, and deeper collaboration. And now the business results to prove it are starting to roll in. You can access the study here.

IT, however, needn’t worry about being displaced by the cloud. Business managers still lack the technical perspective to evaluate and operationally manage cloud providers. In addition, there will always be certain functions that best remain on premises. These range from conformance with compliance mandates to issues with cloud latency to the need to maintain multiple sources of IT proficiency and capability to ensure business continuance. Finally, there is the need to assemble, maintain, and manage an entire ecosystem of cloud providers (IaaS, PaaS, SaaS, and others) and services like content distribution, network acceleration, and more. So rest assured: if you know your stuff, do it well, and don’t get greedy, the cloud is no threat.

From the study came five business reasons to use the cloud:

1) Better insight and visibility—this is the analytics story: 54% use analytics to derive insights from big data, 59% use it to share data, and 59% intend to use cloud to access and manage big data in the future.

2) Easy collaboration—cloud facilitates and expedites cross-functional collaboration, which drives innovation and boosts productivity.

3) Support for a variety of business needs—forging a tighter link between business outcomes and technology in areas like messaging, storage, and office productivity suites; you should also add compute-business agility.

4) Rapid development of new products and services—52% use the cloud to innovate products and services fast and 24% use it to offer additional products and services; anything you can digitize, anything with an information component, can be marketed, sold, and delivered via the cloud.

5) Proven results—25% reported a reduction in IT costs due to the cloud, 53% saw an increase in efficiency, and 49% saw improvement in employee mobility.

This last point about mobility is particularly important. With the advent of the cloud geography is no longer a constraining business factor. You can hire people anywhere and have them work anywhere. You can service customers anywhere. You can source almost any goods and services from anywhere. And IT can locate data centers anywhere too.

Yes, there are things for which direct, physical interaction is preferred. Despite the advances in telemedicine, most people still prefer an actual visit to the doctor; that is, unless a doctor simply is not accessible. Or take the great strides being made in online learning; in a generation or two the traditional ivy-covered college campus may be superfluous except, maybe, to host pep rallies and football games. But even if the ivy halls aren’t needed, the demand for the IT capabilities that make learning possible and enable colleges to function will only increase.

As BottomlineIT has noted many times, the cloud is just one component of your organization’s overall IT and business strategy. Use it where it makes sense and when it makes sense, but be prepared to alter your use of the cloud as changing conditions dictate. Change is one of the things the cloud does best.


Where Have All the Enterprise IT Hardware Vendors Gone?

Remember that song asking where all the flowers had gone? In a few years you might be asking the same about many of today’s enterprise hardware vendors. The answer is important as you plan your data center 3-5 years out. Where will you get your servers, and at what cost? Will you even need servers in your data center? And what will they look like, maybe massive collections of ARM processors?

As reported in The Register (Amazon cloud threatens the entire IT ecosystem): Amazon’s cloud poses a major threat to most of the traditional IT ecosystem, a team of 25 Morgan Stanley analysts write in a report, Amazon Web Services: Making Waves in the IT Pond, that was released recently. The Morgan Stanley researchers cite Brocade, NetApp, QLogic, EMC, and VMware as facing the greatest challenges from the growth of AWS. The threat takes the form of AWS’s exceedingly low cost per virtual machine instance.

Beyond the price threat, the vendors are scrambling to respond to the challenges of cloud, mobile, and big data/analytics. Even Intel, the leading chip maker, just introduced the 4th generation Intel® Core™ processor family to address these challenges.  The new chip promises optimized experiences personalized for end-users’ specific needs and offers double the battery life and breakthrough graphics targeted to new low cost devices such as mobile tablets and all-in-one systems.

The Wall Street Journal online covered related ground from a different perspective when it wrote that PC makers unveiled a range of unconventional devices on the eve of Asia’s biggest computer trade show as they seek to revive a flagging industry and stay relevant amid stiff competition. Driven by the cloud and the explosion of mobile devices in a variety of forms, the enterprise IT industry doesn’t seem to know what the next device should even be.

Readers once chastised this blogger for suggesting that their next PC might be a mobile phone. Then came smartphones, quickly followed by tablets. Today PC sales are dropping fast, according to IDC.

The next rev of your data center may be based on ARM processors (tiny, stingy with power, cheap, cool, and remarkably fast), essentially mobile phone chips. They could be ganged together in large quantities to deliver mainframe-like power, scalability, and reliability at a fraction of the cost.

IBM has shifted its focus and is targeting cloud computing, mobile, and big data/analytics, even directing its acquisitions toward these areas as witnessed by yesterday’s SoftLayer acquisition. HP, Oracle, and most of the other vendors are pursuing variations of the same strategy. Oracle, for example, acquired Tekelec, a smart device signaling company.

But as the Morgan Stanley analysts noted, it really is Amazon using its cloud scale to savage the traditional enterprise IT vendor hardware strategies and it is no secret why:

  • No upfront investment
  • Pay for Only What You Use (with a caveat or two)
  • Price Transparency
  • Faster Time to Market
  • Near-infinite Scalability and Global Reach

And the more AWS grows, the more its prices drop due to the efficiency of cloud scaling.  It is not clear how the enterprise IT vendors will respond.

What will your management say when they get a whiff of AWS pricing? An extra-large, high-memory SQL Server database instance lists for $0.74 per hour (check the fine print). What does your Oracle database cost you per hour running on your on-premises enterprise server? That’s what the traditional enterprise IT vendors are facing.
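The back-of-envelope math on that $0.74/hour figure is worth running. This sketch assumes 24x7 operation of a single instance; your own on-premises cost per hour depends on license, hardware, power, and staff costs:

```python
# Annualize the quoted $0.74/hour rate for an extra-large, high-memory
# SQL Server instance, assuming round-the-clock operation (non-leap year).
HOURLY_RATE = 0.74
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

annual_cost = HOURLY_RATE * HOURS_PER_YEAR
print(f"${annual_cost:,.2f} per year")  # $6,482.40 per year
```

Compare that figure against a single year of an on-premises database license plus the server it runs on, and the pricing pressure on traditional vendors becomes obvious.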
