Archive for August, 2011

No Such Thing as an Information Recession

The so-called recovery may be flaccid and the markets may be erratic, but one thing is certain: there is no recession when it comes to information and the need to store and protect it. In a recent briefing, IBM noted that storage demand is doubling every 18 months. Structured information is growing 32% a year, while unstructured data is growing 63% annually.

IT analyst Greg Schulz makes exactly this point in his blog here. Check out his discussion of ways to address the inexorable growth of demand for storage.

IT managers have known this for years. While their IT budgets have been constrained or reduced, the demand for more data storage and data protection has not slowed at all. On the contrary, many organizations are storing more data, and more types of data, than they were just a few years ago. Organizations now, for instance, are capturing and storing clickstream data, RFID data, social media posts and feeds, sensor and surveillance data, and more. One marketer found itself collecting and analyzing data from over 500 campaigns involving 3.2 million keywords. And that’s during what still feels like a recession.

At the same time it seems like every storage vendor, large and small—IBM, EMC, HP, Dell, Nexsan, Oracle/Sun, Hitachi—has been announcing new products that promise to streamline and simplify the storage challenges facing IT. Make no mistake: given the amount of spending directed at information storage and data protection, this should be a top fiduciary and compliance concern for every CIO.

To look at just a few vendors: in July IBM announced XIV Gen 3, a scalable storage product that includes a host of advanced features at no additional cost. In fact, the company claims it has reduced the total cost of ownership by 60% compared to its biggest competitor, EMC. Just as important, IBM made it fully autonomic, meaning it can pretty much run itself, allowing for even more cost savings. Your staff can handle it with minimal training and effort.

The trick to XIV is a grid design that connects a set of modules, each consisting of a powerful processor, memory, and storage—in effect, a complete computer in itself. Multiple modules work together to provide seamless, scalable storage. This design delivers predictable, sustained high-performance storage with little or no intervention on the part of your staff. Plus, it brings a slew of high-reliability and high-availability capabilities.

For example, adding an XIV module adds storage capacity along with matching CPU and RAM, achieving near-linear scalability in both capacity and performance. Automatic rebalancing keeps the load evenly distributed when volumes are added, deleted, or resized, when new disks or modules are added, and even after a component failure or during a rebuild. And all without requiring human intervention.
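
One way to picture that grid behavior is as data split into many small partitions spread evenly across the modules, with a redistribution whenever a module joins or leaves. The sketch below is purely illustrative; it is not IBM's actual XIV distribution algorithm, and the module names and partition counts are hypothetical.

    # Minimal sketch of grid-style rebalancing: data is split into many small
    # partitions spread evenly across modules. Adding a module triggers a
    # redistribution so every module carries a similar share of the load.
    # (Illustrative only -- not IBM's actual XIV distribution algorithm.)

    def distribute(partitions, modules):
        """Assign each partition to a module, round-robin, so load stays even."""
        layout = {m: [] for m in modules}
        for i, p in enumerate(partitions):
            layout[modules[i % len(modules)]].append(p)
        return layout

    partitions = [f"part-{i:04d}" for i in range(1024)]

    # Three modules: each ends up with roughly a third of the partitions.
    layout = distribute(partitions, ["module-1", "module-2", "module-3"])
    print({m: len(ps) for m, ps in layout.items()})

    # Add a fourth module and rebalance: each now holds 256 partitions,
    # and capacity and processing power scale together.
    layout = distribute(partitions, ["module-1", "module-2", "module-3", "module-4"])
    print({m: len(ps) for m, ps in layout.items()})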

Meanwhile, earlier this year EMC announced new capabilities for Symmetrix VMAX, its top-of-the-line storage system. The new capabilities increase performance and simplify the way organizations handle petabytes of information (1 petabyte = 1,000 terabytes). VMAX also includes automated storage tiering, which organizes storage around the performance and cost characteristics of the different types of storage and automatically moves data to the correct tier. All your staff has to do is categorize the data at the outset. The tiering capabilities, EMC claims, can deliver up to 40% more application performance at a 40% lower cost while requiring 87% fewer disks and 75% less power.
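
In concept, automated tiering boils down to a placement policy driven by access patterns. The sketch below is a deliberately simplified illustration; the tier names, thresholds, and volumes are hypothetical, and real tiering engines use far richer telemetry than this.

    # Simplified sketch of automated storage tiering: frequently accessed
    # ("hot") data lands on fast, expensive storage; rarely touched ("cold")
    # data lands on cheap, high-capacity storage.
    # (Tier names, thresholds, and volumes are hypothetical.)

    def choose_tier(accesses_per_day: int) -> str:
        if accesses_per_day > 1000:
            return "flash"      # fastest, most expensive tier
        if accesses_per_day > 50:
            return "fc_disk"    # middle tier
        return "sata"           # slowest, cheapest tier

    volumes = {"orders_db": 4200, "email_archive": 12, "web_logs": 300}
    placement = {name: choose_tier(rate) for name, rate in volumes.items()}
    print(placement)  # {'orders_db': 'flash', 'email_archive': 'sata', 'web_logs': 'fc_disk'}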

HP, too, has been consistently bringing out advanced storage products. It acquired 3PAR and integrated its thin storage capabilities into the HP Converged Infrastructure. This will help companies take advantage of features like automated storage tiering and thin provisioning, which eliminates storage over-provisioning, a costly but common practice. That, in turn, can help companies consolidate storage hardware while still responding to explosive data growth. HP also introduced a guarantee that thin provisioning will reduce capacity requirements by 50% or more, along with new federated storage software that lets companies transparently move application workloads between disk systems in virtualized and cloud computing environments.
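
Thin provisioning itself is a simple idea: a volume advertises a large logical size, but physical capacity is allocated only as blocks are actually written. Here is a minimal, hypothetical sketch of the concept (the class and block size are invented for illustration, not any vendor's implementation).

    # Minimal sketch of thin provisioning: the volume advertises a large
    # logical size, but physical capacity is allocated only on first write,
    # so unused "headroom" never ties up real disks.
    # (Names and block size are hypothetical.)

    BLOCK_SIZE_GB = 1

    class ThinVolume:
        def __init__(self, logical_size_gb: int):
            self.logical_size_gb = logical_size_gb
            self.allocated_blocks = set()  # physical blocks actually backed

        def write(self, block_index: int):
            if block_index >= self.logical_size_gb // BLOCK_SIZE_GB:
                raise ValueError("write beyond logical size")
            self.allocated_blocks.add(block_index)  # allocate on first write

        @property
        def physical_usage_gb(self) -> int:
            return len(self.allocated_blocks) * BLOCK_SIZE_GB

    vol = ThinVolume(logical_size_gb=1000)   # application sees 1 TB
    for block in range(120):                 # but has only written 120 GB
        vol.write(block)
    print(vol.physical_usage_gb)             # 120 -- the rest is never provisioned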

As noted above, almost every competitive storage vendor is refreshing its storage products to deliver simplified automated operation, storage virtualization, tiered storage, thin provisioning, and more. If your information continues to grow despite the recession or limp recovery, it is time to check whether your storage has kept pace. The latest capabilities are available, pound for pound, for less than you paid last time. And given the high value of the information asset, this will be a worthwhile investment.



Where Best to Run Linux

Mainframe data centers have many platform options for running Linux. The challenge is deciding where: x86 servers and blades, IBM Power Systems, HP Itanium, Oracle/Sun, IBM’s System z or zEnterprise.

Here is what IBM has to say about the various options: System z/zEnterprise, Power/System p/System i, and System x blades and rack-mount servers. And now with the zBX there is yet another option: Linux blades on the zBX. But this doesn’t answer the real question: where should your organization run Linux?

If you have only one platform the answer is simple. Linux has been widely ported. You can probably run it on whatever you already have.

Most organizations today, especially enterprise data centers, have multiple platforms running Windows, Linux, UNIX, AIX, Solaris, and more. And they run these on different hardware platforms from IBM, HP, Oracle/Sun, Dell, and others. Now the decision of where to run Linux gets complicated. The classic consultant/analyst response: it depends.

Again, IBM’s response is to lead the organization through a Fit for Purpose exercise. Here is how IBM discusses the exercise in regard to cloud computing. BottomlineIT’s sister blog addressed Fit for Purpose here last year.

The Fit for Purpose exercise, however, can be reduced to four basic considerations (a rough weighting sketch follows the list):

  1. Where the data resides—your Linux applications will generally get the best end-to-end performance the closer they are to the data they use most. So, if your Linux applications need to use DB2 data residing on the mainframe, you probably want to run Linux on System z or a zBX blade.
  2. Price/performance—since cost is always an issue, look at the price/performance numbers. Here you have to consider all the costs, paying particular attention to cost in terms of performance delivered. Running Linux on a cheap, underpowered x86 box may cost less but not deliver the performance you want.
  3. Available skills—look at where your Linux and platform skills lie and opt for the platform where you have the deepest skills. Of course, a relatively modest investment in training can pay big dividends in this area.
  4. IT culture—even if the data proximity or price/performance considerations point one way, many organizations will opt for the platform favored by the dominant IT culture simply to avoid resistance.
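
To make the trade-offs concrete, here is a rough, purely illustrative weighting sketch. The platforms, weights, and scores are hypothetical placeholders, not IBM's Fit for Purpose methodology, so substitute your own assessments.

    # Purely illustrative weighting of the four considerations above.
    # Platform names, weights, and scores are hypothetical; plug in your own.

    WEIGHTS = {"data_proximity": 0.4, "price_performance": 0.3,
               "skills": 0.2, "culture": 0.1}

    # Scores from 1 (poor fit) to 5 (strong fit) for each consideration.
    candidates = {
        "Linux on System z IFL": {"data_proximity": 5, "price_performance": 3,
                                  "skills": 2, "culture": 4},
        "Linux on x86 + VMware": {"data_proximity": 2, "price_performance": 4,
                                  "skills": 5, "culture": 4},
        "Linux on zBX blades":   {"data_proximity": 4, "price_performance": 3,
                                  "skills": 3, "culture": 3},
    }

    def fitness(scores):
        return sum(WEIGHTS[k] * v for k, v in scores.items())

    for platform, scores in sorted(candidates.items(),
                                   key=lambda kv: fitness(kv[1]), reverse=True):
        print(f"{platform}: {fitness(scores):.2f}")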

Further complicating the decision is the lack of good data available on both the cost and the performance of Linux on the zBX or its Linux blades. Plus there are other variables to consider, such as whether you run Linux on an IFL on the z with or without z/VM. Similarly, you can run Linux on an x-platform with or without VMware or some other hypervisor. These choices will impact price, performance, and skills.

Although the answer to the question of where to run Linux may not be as simple as many would like, DancingDinosaur believes as a general principle it is always better to have choices.



The Changing Definition of Mission-Critical

The IT systems you consider mission-critical almost certainly remain mission-critical today.  But are there other systems that should be receiving similar attention and protection too?

A new study by Springboard Research, sponsored by Intel, may lead you to expand your idea of what constitutes mission-critical. The study found that the idea of mission-critical computing is expanding from its historical definition to a far broader spectrum of workloads and applications.

What are your truly mission-critical systems? Certainly ERP and transaction processing systems remain mission-critical. Is CRM mission-critical? How about procurement? Or HR? Or finance? When was the last time you prioritized your systems in terms of their criticality to the organization?

Traditionally, mission-critical systems are those essential to the organization’s ability to survive. If a mission-critical system goes down, the organization is, effectively, dead in the water.

Transaction processing systems almost always are mission-critical. If they go down, the revenue stream stops. The classic examples of mission-critical systems beyond ERP are airline reservation systems, bank transaction systems, and the brokerage systems of investment firms. Every industry from manufacturing to healthcare to entertainment has its mission-critical systems.

Since they are mission-critical, these systems get the bulk of the IT budget for security, recovery, availability, and data protection. Expanding or revising what is considered mission-critical may require reordering budget priorities or reallocating IT resources.

In the Springboard study, 35% of respondents considered specialized or vertical applications mission-critical. Another 14% of executives put ERP systems in the same category, while collaboration tools as well as financial and accounting applications were highlighted as mission-critical by 10% of those surveyed. This doesn’t sound like a call for a major reordering of priorities.

However, changes in technology and changes in how organizations do business should prompt a rethinking of what’s mission-critical. Springboard Research suggests that virtualization will challenge the traditional mission-critical computing model. Typically, organizations created silos of technologies to support different applications within the data center. Each mission-critical application was accompanied by a Level 1 (urgent) backup and recovery plan and a data protection strategy. Organizations strive for 99.999% availability in their most mission-critical systems, although most settle for something closer to 99.9%.
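
The difference between those two availability targets is bigger than it looks. A quick back-of-the-envelope calculation of the downtime each level allows per year:

    # Allowed downtime per year at a given availability level: the gap between
    # "five nines" and "three nines" in concrete terms.

    MINUTES_PER_YEAR = 365 * 24 * 60

    for availability in (0.99999, 0.999):
        downtime_min = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability:.3%} availability -> "
              f"{downtime_min:.1f} minutes of downtime per year")

    # 99.999% -> about 5.3 minutes per year
    # 99.900% -> about 525.6 minutes (roughly 8.8 hours) per year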

Virtualization, which forms the foundation of cloud computing, is growing as a central element of the enterprise computing infrastructure. In the process, according to Springboard, it will help break down the boundaries between traditional computing silos, including legacy mission-critical infrastructure and IT processes. It also contributes to the solution: the ease of moving virtual machines across the network, and the speed at which they can be restarted elsewhere, provide a new layer of availability protection.

At the same time, technologies like cloud computing and social media increasingly are changing the way organizations conduct business. For example, social media is giving rise to new customer-facing strategies and systems, which IT increasingly has to handle carefully in terms of compliance and e-discovery. Should these customer-facing systems now be treated as mission-critical too?

Ideally, cloud computing could help organizations protect mission-critical systems. The cloud may allow IT to maintain backup systems and data on a standby basis more cost-effectively or, through virtualization, to reallocate and even relocate IT resources more easily. In the event of a failure of a mission-critical system, IT could quickly fire up the standby system and bring it online. This is not very different from how key mission-critical systems are protected today, except that the economies of the cloud’s shared resources may make such protection less costly to set up and maintain.

Maybe the most mission-critical of current technologies today is email. How long could your organization function without email? What’s your plan should it fail?



Ways to Lower IT Costs

With the end of NEON’s zPrime, mainframe users lost an effective way to lower costs. And make no mistake about it: zPrime was effective in lowering costs. A data center manager in France told BottomlineIT that zPrime saved his company almost 1 billion euros each year.

There was no magic to how zPrime achieved these savings. Mainframe software licensing costs and various other charges drop when processing is handled by a specialty processor, which is treated differently from the general processor when license charges are calculated. The zPrime trick simply expanded the range of workloads that could run on specialty processors far beyond what IBM approved. No surprise that IBM shut it down.
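
The underlying arithmetic is straightforward when charges scale with general-processor consumption and work running on specialty processors is excluded from the count. The numbers below are purely hypothetical, chosen only to illustrate the mechanism, not actual IBM pricing.

    # Purely hypothetical numbers illustrating why shifting work to specialty
    # processors lowers software charges: charges here are assumed to scale
    # with general-processor consumption, while work on specialty engines is
    # excluded from the calculation.

    RATE_PER_MSU = 1000          # hypothetical monthly charge per general-processor MSU
    total_workload_msu = 500     # hypothetical total workload

    def monthly_charge(general_cp_msu):
        return general_cp_msu * RATE_PER_MSU

    before = monthly_charge(total_workload_msu)        # all work on general processors
    offloaded = 200                                    # MSUs shifted to specialty engines
    after = monthly_charge(total_workload_msu - offloaded)

    print(f"before: {before:,}  after: {after:,}  saved: {before - after:,} per month")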

Every IT shop wants to reduce costs, especially these days. There are a number of ways to do so. Again, no magic: they involve reviving well-known practices that many organizations have gotten away from in recent years.

Start by negotiating better software licensing deals. Many IT managers believe they already negotiate the best prices from software vendors. Repeated studies by Minneapolis-based ISAM, however, show that not to be the case. Looking at the software tactics of best-in-class IT shops, ISAM found considerable variation in software vendor pricing, and many shops simply don’t get the best deals.

When shopping for the best software pricing, make sure to consider open source options too. Open source software, even with the various fees involved, often costs less than conventional software licensing.

While you’re at it, check out the Software-as-a-Service (SaaS) options. Particularly for small and midsize organizations, SaaS may offer substantial savings over on-premise software licensing. The savings come from the economies of scale and from being a shared service.

Another option for reducing software costs is application performance management (APM). Where software is licensed based on the processor, anything that minimizes CPU consumption can save money. For these situations, APM revolves around proven best practices to minimize CPU resource consumption, especially during peak times. It involves both rescheduling when applications run and optimizing the code to run more efficiently.

“APM starts with profiling and understanding the way your applications use mainframe resources as they run—especially CPU. It helps determine whether they really need all the resources they are using and with this information you can then make focused tuning efforts in specific areas of software code within the applications and especially the database calls, which tend to use a lot of resources. It can reduce the CPU requirements to run your applications by an enormous percentage,” explains Philip Mann, a principal consultant at Macro 4, an APM consulting firm and tool provider.

Using the Macro 4 approach and tools, British retailer Tesco was able to reduce MIPS consumption by 10-15% in one project, which allowed it to avoid purchasing extra CPU capacity. The Macro 4 tool enabled Tesco to identify where changes to databases, systems software, and applications could generate CPU savings.
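
The general pattern of profiling CPU consumption first and then tuning the heaviest consumers, especially database calls, can be sketched roughly as below. The programs, statements, and figures are hypothetical, not output from Macro 4's tooling.

    # Rough sketch of the APM pattern described above: profile CPU consumption
    # per program and database call, then rank the heaviest consumers as the
    # first tuning candidates. All data and names are hypothetical.

    from collections import Counter

    # (program, statement) -> CPU seconds sampled over a peak window
    cpu_profile = Counter({
        ("BILLING01", "SELECT ... FROM ORDERS"):   1840.0,
        ("BILLING01", "UPDATE ACCOUNTS ..."):       960.0,
        ("REPORTS07", "SELECT ... FROM HISTORY"):   310.0,
        ("WEBSRV03",  "INSERT INTO SESSIONS ..."):   45.0,
    })

    total = sum(cpu_profile.values())
    print("Top tuning candidates (share of sampled CPU):")
    for (program, statement), cpu in cpu_profile.most_common(3):
        print(f"  {program:10s} {statement[:30]:32s} {cpu / total:5.1%}")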

Finally, organizations are trying to reduce IT costs through consolidation based on server virtualization. Some recent studies, however, suggest that many organizations are not getting the savings they expected from virtualization. The potential for serious savings is still there; it just may take a little more effort to realize it.

A recent survey by CA Technologies on the state of IT automation shows that 60% of managers at midsize and large enterprises are disappointed in virtualization’s ability to deliver savings. The survey quotes one respondent: “Virtualization is a bean counter’s dream, but it can be an operational nightmare.” The respondent, a senior IT manager, continued: “Change management is a huge overhead, as any changes need to be accepted by all applications and users sharing the same virtualization kit. While many organizations are seeing benefits from virtualization, such as reduced hardware spending and improved server utilization, these benefits often get overshadowed by the lack of productivity improvements in data center staffing and operations.”

The key to achieving virtualization savings is automation. The CA survey shows a direct correlation between IT service automation in a virtualized environment and cost savings. For example, 44% of survey respondents who said most of their server provisioning processes are automated reported that they have significantly reduced costs through virtualization. Conversely, 48% of those who said the complexities of virtualization have introduced new costs also said, not surprisingly, that most of their server provisioning processes are still manual.

OK, none of these techniques, except maybe the virtualization/automation combination, is likely to save you 1 billion euros a year. But when budgets are tight, any savings help.

