Archive for July, 2011

CIOs Have Reason to Worry About Social Media

The wisdom of a famous Boston politician—I don’t care what they say as long as they are talking about me—certainly does NOT apply to social media and today’s business organizations. To the contrary, according to a recent poll from data protection vendor Symantec, the typical enterprise experiences nine social media incidents, with 94% of affected companies suffering negative consequences, including damage to their reputations, loss of customer trust, data loss, and lost revenue. What gets said about your organization on social media indeed matters.

According to Gartner, “by the end of 2013, half of all companies will have been asked to produce material from social media websites for e-discovery.” That means enterprises need an information risk management governance strategy that specifically includes content created on social media.

The Symantec poll revealed that social media is pervasive within the enterprise, and CIOs have good reason to be concerned. At a minimum, the organization faces increased litigation costs and risks. And the costs aren’t trivial; the poll found social media incidents cost the typical company $4 million over the past 12 months.

The knee-jerk reaction is to retreat from social media. That’s not a good idea. It would mean forgoing the considerable benefits to the organization’s brand and from customer engagement. In some cases, social media even rings the cash register, especially when used as a major component of a multifaceted marketing campaign.

A better reaction is to develop a comprehensive strategy addressing policy and monitoring, governance and risk management, and archiving that includes social media content as well as email and conventional documents. This will entail a combination of effective management execution and investment in the deployment of new technologies.

The payoff, however, can be significant. Take just the top three social media incidents the typical enterprise experienced over the last year:

  1. Employees sharing too much information in public forums (46%)
  2. Loss or exposure of confidential information (41%)
  3. Increased exposure to litigation (37%)

More than 90% of respondents who experienced a social media incident also suffered negative financial consequences as a result. These included a drop in stock price (average loss: $1,038,401 USD), added litigation costs (average: $650,361 USD), direct financial costs (average: $641,993 USD), damaged brand reputation/loss of customer trust (average cost: $638,496 USD), and lost revenue (average: $619,360 USD).

How to avoid this: establish a social media policy, communicate it, and enforce it; then integrate it with your enterprise risk management and governance strategy; and finally, deploy policy-driven tools to automatically monitor social media activity and electronically archive social media content.
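
To make that last step concrete, here is a minimal sketch of what policy-driven monitoring and archiving might look like. The feed URL, the flagged terms, and the SQLite archive are all hypothetical stand-ins; a commercial archiving product would supply its own connectors and retention controls.

```python
import json
import sqlite3
import urllib.request
from datetime import datetime, timezone

# Hypothetical feed endpoint and watch list -- substitute your monitoring
# vendor's API and the terms your policy flags (brand names, product codes,
# anything that suggests confidential material).
FEED_URL = "https://example.com/social/mentions?query=AcmeCorp"
FLAGGED_TERMS = ("confidential", "internal only", "acmecorp roadmap")

def fetch_mentions(url: str) -> list[dict]:
    """Pull recent posts mentioning the company from the monitoring feed."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def archive(posts: list[dict], db_path: str = "social_archive.db") -> None:
    """Retain every post in a local archive table so it is available for e-discovery."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS posts "
        "(id TEXT PRIMARY KEY, author TEXT, body TEXT, captured_at TEXT, flagged INTEGER)"
    )
    for p in posts:
        flagged = any(t in p.get("body", "").lower() for t in FLAGGED_TERMS)
        conn.execute(
            "INSERT OR IGNORE INTO posts VALUES (?, ?, ?, ?, ?)",
            (p["id"], p.get("author", ""), p.get("body", ""),
             datetime.now(timezone.utc).isoformat(), int(flagged)),
        )
        if flagged:
            print(f"Policy alert: post {p['id']} by {p.get('author')} matches a flagged term")
    conn.commit()
    conn.close()

if __name__ == "__main__":
    archive(fetch_mentions(FEED_URL))
```

The point is the shape of the workflow: capture everything so it can be produced for e-discovery, and flag anything that trips policy for human review.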

The key social media platforms to watch today are Facebook and Twitter, but also LinkedIn and now Google Plus, which has the potential to change the social media landscape. In an upcoming post, BottomlineIT will take up Google Plus and its possible ramifications for IT.

Please note that BottomlineIT will be on vacation for the next two weeks. New posts should resume the week of Aug. 15.


The Key to Virtualization and Cloud Success

From all the hype it seems cloud computing is driving everything having to do with IT and business success. But actually it is virtualization that both provides the foundation for cloud computing and drives it forward.

BottomlineIT has taken up virtualization before here. But virtualization is neither simple nor straightforward. Gartner put out its virtualization hype cycle late last year, and not much appears to have substantively changed since then. As recently as this past spring, IT media were reporting Gartner’s view that virtualization is the #1 trend in IT and will be through 2012.

A recent survey by CA Technologies on the state of IT automation, however, might dampen some of the virtualization enthusiasm. It suggests that virtualization is not delivering the benefits managers were led to expect: of the 460 decision-makers from midsize and large enterprises surveyed, more than 60% are disappointed in virtualization’s ability to deliver savings. But the survey also hinted at the solution.

A large majority cited reducing costs (85%) and increasing server utilization (84%) as the primary reasons to deploy virtualization. Of the respondents, 63% noted they have not experienced as much savings as expected, and 5% said the complexities of virtualization had actually introduced new costs.

Increased complexity, indeed, may be virtualization’s dirty little secret. Virtualization adds, at the least, another layer to the multi-layer IT infrastructure that exists in many organizations today. At a minimum, virtualization requires new skills on the part of IT and new tools, both of which require new investments. Organizations not prepared to invest in new tools and new training will find it difficult to capture the virtualization benefits they expected.

The CA survey quotes one respondent: “Virtualization is a bean counter’s dream, but it can be an operational nightmare.” The respondent, a senior IT manager, continued: “Change management is a huge overhead, as any changes need to be accepted by all applications and users sharing the same virtualization kit. While many organizations are seeing benefits from virtualization, such as reduced hardware spending and improved server utilization, these benefits often get overshadowed by the lack of productivity improvements in data center staffing and operations.”

The key to solving these problems is management automation. The survey shows a direct correlation between IT service automation in a virtualized environment and cost-savings. For example, 44% of survey respondents who said most of their server provisioning processes are automated report they have significantly reduced costs through virtualization. Conversely, 48% of those who said the complexities of virtualization have introduced new costs also said—don’t be shocked—most of their server provisioning processes still are manual.

The complexity of IT infrastructures today combined with the volumes of disparate workloads and data running through them are so great that humans cannot possibly keep up. They need automated tools to find and correct problems before they impact the workloads. For organizations hoping to capitalize on self-service provisioning—where the big virtualization and cloud payoff lies—automation is a given from the start.

To realize the full benefits from virtualization and cloud computing, CA points out, IT organizations need to automate and integrate the physical and virtual server configuration, provisioning, monitoring, security, software patching, and more across the typical heterogeneous IT infrastructure. This will involve a new investment in automation tools, which don’t come cheaply.
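
What that automation looks like varies by vendor, but the core pattern is a policy-constrained, self-service provisioning request rather than a manually built server. The sketch below assumes a hypothetical REST provisioning endpoint and approved-size catalog; the CA, IBM, and HP tools mentioned next expose their own interfaces.

```python
import json
import urllib.request

# Hypothetical provisioning endpoint -- CA, IBM Tivoli, and HP each expose
# their own APIs; the request shape below is illustrative only.
PROVISION_URL = "https://automation.example.com/api/v1/servers"

# A policy catalog: self-service requests may only pick from pre-approved
# configurations, which is what keeps automated provisioning from becoming sprawl.
CATALOG = {
    "small":  {"vcpus": 2, "memory_gb": 4,  "os": "rhel6", "patch_group": "monthly"},
    "medium": {"vcpus": 4, "memory_gb": 8,  "os": "rhel6", "patch_group": "monthly"},
    "large":  {"vcpus": 8, "memory_gb": 16, "os": "rhel6", "patch_group": "weekly"},
}

def provision(name: str, size: str, owner: str) -> dict:
    """Submit a catalog-constrained provisioning request and return the new server record."""
    if size not in CATALOG:
        raise ValueError(f"'{size}' is not in the approved catalog")
    spec = dict(CATALOG[size], name=name, owner=owner, monitoring=True)
    req = urllib.request.Request(
        PROVISION_URL,
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = provision(name="web-42", size="medium", owner="ecommerce-team")
    print("Provisioned:", server)
```

Whatever tool sits behind that endpoint, the same request also registers the new server for monitoring and patching, which is where the staffing and operations savings come from.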

CA offers tools to do this here. IBM offers Tivoli Service Manager for cloud automation here. HP also provides cloud service management automation here. So do others. But without automation from some vendor or another, much of your virtualization effort will be wasted.


New IBM z114—a Midrange Mainframe

IBM introduced its newest mainframe in the zEnterprise family, the z114, a business class rather than enterprise class machine. With the z114, IBM can now deliver a more compelling total cost of acquisition (TCA) case, giving midrange enterprises another option as they consolidate, virtualize, and migrate their sprawling server farms. This will be particularly interesting to shops running HP Itanium or Oracle/Sun servers.

The z114 comes with a $75,000 entry price. At this price, it can begin to compete with commodity high-end servers on a TCA basis, especially if it is bundled with discount programs like IBM’s System z Solution Editions and unpublicized offers from IBM Global Finance (IGF). There should be no doubt: IBM is willing to deal to win midrange workloads from other platforms.

First, the specs, speeds, and feeds: the z114 is available in two models, a single-drawer model, the M05, and a two-drawer model, the M10, which offers additional capacity for I/O and coupling expansion and/or more specialty engines. It comes with up to 10 configurable cores, which can be designated as general purpose processors or specialty engines (zIIP, zAAP, IFL, ICF) or used as spares. The M10 also allows two dedicated spares, a first for a midrange mainframe.

The z114 uses a superscalar design that runs at 3.8 GHz, an improved cache structure, a new out-of-order execution sequence, and over 100 new hardware instructions that deliver better per-thread performance, especially for database, WebSphere, and Linux workloads. The base z114 starts at 26 MIPS but can scale to over 3100 MIPS across five central processors and the additional horsepower provided by its specialty engines.

The z114 mainly will be a consolidation play. IBM calculates that workloads from as many as 300 competitive servers can be consolidated onto a single z114. IBM figures the machine can handle workloads from 40 Oracle server cores using just three processors running Linux, and it estimates the z114 will cost 80% less than those Oracle servers. Similarly, IBM figures that a fully configured z114 running Linux on z can create and maintain a Linux virtual server for approximately $500 per year.
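
IBM’s per-server figure makes the consolidation arithmetic easy to test against your own environment. In the back-of-the-envelope sketch below, only the $500-per-year and 300-server numbers come from IBM’s claims; the fully loaded cost of a distributed server is a hypothetical placeholder chosen to illustrate the claimed 80% savings, so substitute your own figures for hardware, software, power, and administration.

```python
# Back-of-the-envelope consolidation math. The $500/year per Linux virtual
# server figure and the 300-server ceiling are IBM's claims for the z114;
# the distributed per-server cost is a hypothetical placeholder.
SERVERS_TO_CONSOLIDATE = 300            # IBM's upper bound for a single z114
COST_PER_VIRTUAL_SERVER = 500           # USD/year on the z114 (IBM's figure)
COST_PER_DISTRIBUTED_SERVER = 2_500     # USD/year, hypothetical fully loaded cost

z114_annual = SERVERS_TO_CONSOLIDATE * COST_PER_VIRTUAL_SERVER
distributed_annual = SERVERS_TO_CONSOLIDATE * COST_PER_DISTRIBUTED_SERVER

print(f"z114 annual cost:        ${z114_annual:>10,}")
print(f"Distributed annual cost: ${distributed_annual:>10,}")
print(f"Annual savings:          ${distributed_annual - z114_annual:>10,} "
      f"({(1 - z114_annual / distributed_annual):.0%})")
```

The point is simply that the economics hinge on how many servers you fold in; at a handful of servers the math looks very different than it does at 300.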

As a consolidation play, the zEnterprise System will get even more interesting later this year when x86 blades supporting Windows become available. Depending on the pricing, the z114 could become a Windows consolidation play too.

Today even midrange enterprises are multi-platform shops. For this, the z114 connects to the zBX, a blade expansion cabinet, where it can integrate and manage workloads running on POWER7-based blades as well as the IBM Smart Analytics Optimizer and WebSphere DataPower blades for integrating web-based workloads. In addition, IBM promises support for Microsoft Windows on select System x server blades soon.

To achieve a low TCA, IBM clearly is ready to make deals. For example, IBM has also lowered software costs to deliver the same capacity for 5-18% less through a revised Advanced Workload License Charges (AWLC) pricing schedule. A new processor value unit (PVU) rating on IFLs can lower Linux costs by as much as 48%.

The best deal, however, usually comes through the System z Solution Edition Program, which BottomlineIT’s sister blog, DancingDinosaur, has covered here and here. It bundles System z hardware, software, middleware, and three years of maintenance into a deeply discounted package price. Initial Solution Editions for the z114 will be WebSphere, Linux, and probably SAP.

IGF also can lower costs, starting with a six-month payment deferral: you can acquire a z114 now but not begin paying for it until next year. The group also is offering all IBM middleware products, mainly WebSphere Application Server and Tivoli, interest free (0%) for twelve months. Finally, IGF can lower TCA through leasing, which could further reduce the cost of the z114 by up to 3.5% over three years.

By the time you’ve configured the z114 the way you want it and netted out the various discounts, even with a Solution Edition package, it will probably cost more than $75,000. Even the most expensive HP Itanium server beats that price. But as soon as there are multiple servers in a consolidation play, that is where the z114 payback lies.


Time to Rethink Disaster Recovery

Disaster recovery (DR) has been challenging from the start, and it certainly isn’t getting any easier. Backup to disk has simplified some aspects of DR while virtualization helps in some ways and complicates it in others.

Large systems running mission critical workloads present a particularly difficult and costly DR challenge. Companies needing to meet very short recovery point objective (RPO) and recovery time objective (RTO) requirements, measured in seconds, typically have had to invest in pairs of systems set up as synchronized mirrors with synchronous replication. It works, but it is costly, and synchronous replication presents distance constraints.

For mainframes, the Geographically Dispersed Parallel Sysplex (GDPS) has been IBM’s primary DR vehicle. A recent IBM announcement expanded on the GDPS options primarily by adding remote asynchronous replication to greatly extend the distance between the paired systems.

DR at this level revolves around system clustering technology. You set up two systems, one as a mirror of the other, and update the data synchronously or asynchronously. When the primary system fails, you bring up the other and resume working as before. How you define your RPO and RTO determines how quickly you can resume operations following a failure and with how much data lag or loss.
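
The RPO trade-off between the two replication modes is easy to see in a toy model. The sketch below is purely illustrative, with a made-up 5 ms round trip standing in for the distance to the recovery site: synchronous writes wait for the remote copy and so lose nothing, while asynchronous writes acknowledge immediately and leave whatever is still in flight exposed if the primary fails.

```python
import queue
import threading
import time

# Toy model of the two replication modes. The 5 ms "round trip" stands in for
# the distance to the recovery site; real latencies depend on the link.
REMOTE_ROUND_TRIP_S = 0.005

class MirroredPair:
    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.primary, self.secondary = [], []
        self._lag = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, record: str) -> None:
        self.primary.append(record)
        if self.synchronous:
            # Synchronous: the application waits until the remote copy exists,
            # so RPO is zero but every write pays the distance penalty.
            time.sleep(REMOTE_ROUND_TRIP_S)
            self.secondary.append(record)
        else:
            # Asynchronous: acknowledge immediately, ship the record later.
            # Anything still queued at failure time is the RPO exposure.
            self._lag.put(record)

    def _drain(self) -> None:
        while True:
            record = self._lag.get()
            time.sleep(REMOTE_ROUND_TRIP_S)
            self.secondary.append(record)

    def data_at_risk(self) -> int:
        """Records the recovery site would lose if the primary failed right now."""
        return len(self.primary) - len(self.secondary)

if __name__ == "__main__":
    async_pair = MirroredPair(synchronous=False)
    for i in range(100):
        async_pair.write(f"txn-{i}")
    print("Async records not yet replicated:", async_pair.data_at_risk())
```

Run it and the asynchronous pair reports a backlog of unreplicated records; that backlog is exactly the data lag or loss your RPO has to tolerate in exchange for unlimited distance.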

Until now synchronous replication let you hit your tightest RPO and RTO. Synchronous replication, however, entails distance constraints that make it inappropriate for many organizations. It’s also quite expensive.

Asynchronous replication is not bound by those distance constraints. IBM offers GDPS/XRC and GDPS/GM, based upon asynchronous disk replication with unlimited distance. The current GDPS async replication products, however, require the failed site’s workload to be restarted at the recovery site, which typically takes 30 to 60 minutes. This will not satisfy organizations that require an RTO of seconds.

In its latest announcement, IBM presents GDPS active/active continuous availability as the next generation of GDPS. This represents a shift from the failover model, in which systems go down and are brought online at the failover site a few hours later, to a near continuous availability model, in which the system can be brought back online in an hour or less. IBM describes the latest enhancements as combining the best attributes of the existing suite of GDPS services and expanding them to allow unlimited distances between data center sites with RTO measured in minutes. With its new GDPS offerings, IBM promises near continuous availability, meaning it can meet an RTO of tens of seconds.

Non-mainframe shops generally follow similar DR strategies using mirrored pairs of servers, monitoring and sensing software to detect a system failure, and switchover software. To hit the tightest RTO, you will set up your cluster as an active/active pair.
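
The monitoring-and-switchover piece of that setup is conceptually simple, whatever product provides it. The sketch below shows the shape of the loop with a hypothetical health check URL and a made-up cluster promotion command; note how the RTO falls directly out of the heartbeat interval and failure threshold you choose, plus however long the promotion itself takes.

```python
import subprocess
import time
import urllib.request

# Hypothetical endpoints and commands -- real clusters use their own heartbeat
# protocol and switchover tooling; this only illustrates the control loop.
PRIMARY_HEALTH_URL = "https://primary.example.com/health"
PROMOTE_STANDBY_CMD = ["cluster-ctl", "promote", "standby-site"]
FAILURE_THRESHOLD = 3          # consecutive missed heartbeats before failover
CHECK_INTERVAL_S = 10

def primary_is_healthy() -> bool:
    """Probe the primary site's health endpoint; any error counts as a missed heartbeat."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor() -> None:
    misses = 0
    while True:
        if primary_is_healthy():
            misses = 0
        else:
            misses += 1
            print(f"Missed heartbeat {misses}/{FAILURE_THRESHOLD}")
            if misses >= FAILURE_THRESHOLD:
                # Promote the standby; RTO is roughly threshold * interval
                # plus the time the promotion itself takes.
                subprocess.run(PROMOTE_STANDBY_CMD, check=True)
                print("Failover triggered; standby promoted to primary")
                return
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    monitor()
```

In an active/active pair there is no promotion step at all, since both sites already carry workload, which is how the tightest RTOs are met.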

Of course, not every organization needs a fast RTO. In that case, it can dispense with mirrored systems altogether and rely on traditional tape backup and recovery to a standby site.

The concern with RTO usually focuses on the organization’s primary production transaction systems. But with the cloud, organizations might begin to rethink what they deem mission critical and how it should be backed up. Maybe they don’t have to think about mirrored system clusters at all. Maybe the mission critical systems to be protected aren’t even production transaction systems.
