Archive for October, 2011

One Version of the Truth

Every executive has experienced dueling spreadsheets. Rival business advocates each present a compelling analysis, clear and persuasive, backed by an impressive array of quantitative data.

On closer inspection, however, the dueling spreadsheets reveal disturbing inconsistencies. Overall 3Q sales for Region 2 differ significantly between the two; the customer data in one points in a different direction than in the other. Even the data about the company’s products don’t match. How can that be?

Enterprises struggle to gain a consistent, shareable, and accurate single version of data across the organization—a single version of the truth—according to Gartner’s latest Magic Quadrants for Master Data Management. Yet achieving and maintaining a single, semantically consistent version of master data is a critical capability that supports many business drivers, from sales to compliance.

Master data management (MDM) is the process that tries to—pick your own word—enforce, impose, enable, or manage consistent data throughout your organization. Its goal is a single version of the truth in data terms.

This is not easy. There are technological, organizational, cultural, and procedural challenges involved, and any of them can undermine the effort. Complicating matters further are staff, including executives and managers, who have a vested interest in ensuring their particular version of the truth prevails regardless of what the data actually says.

IBM, a leader in MDM along with Oracle, defines master data as common data about customers, suppliers, partners, products, materials, accounts and other critical entities that is commonly stored and replicated across IT systems. Master data is high-value, core information used to support critical business processes. It sits at the heart of every business transaction, application, report, and decision. IBM provides a good overview of MDM here.

From a technical standpoint, there are numerous challenges to MDM, starting with the sheer multitude of applications and systems that create and store data in their own ways. Then the data often is incomplete or missing, or it gets copied and distributed incorrectly. Sometimes human error produces inaccurate data, and those mistakes ripple through the systems uncorrected. Integration is always a challenge. With data growing exponentially and organizations riddled with non-integrated application silos, it is not surprising that a single version of the truth is so hard to come by.

Organizations also have diverse application and information management portfolios, with fragments of often inaccurate, incomplete, and inconsistent data residing in various applications and databases. There usually is no comprehensive system to maintain a single view or to manage the complete life cycle of the master data as it continually changes.
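
To make the fragmentation problem concrete, here is a minimal sketch of consolidating one customer’s scattered records into a single “golden record.” The source systems, field names, and matching rule are hypothetical; real MDM products such as IBM InfoSphere MDM use far more sophisticated probabilistic matching and survivorship rules.

    # Minimal golden-record sketch (hypothetical systems, fields, and rules).
    RECORDS = [  # the same customer as seen by three different systems
        {"system": "CRM",     "name": "ACME Corp.", "phone": "555-0100", "region": "2"},
        {"system": "Billing", "name": "Acme Corp",  "phone": None,       "region": "2"},
        {"system": "ERP",     "name": "ACME CORP",  "phone": "555-0100", "region": "3"},
    ]

    def match_key(name):
        """Crude match key: lowercase and strip everything but letters and digits."""
        return "".join(ch for ch in name.lower() if ch.isalnum())

    def golden_record(records):
        """Merge records, keeping the first non-empty value per field and
        flagging fields where the source systems still disagree."""
        merged, conflicts = {}, {}
        for rec in records:
            for field, value in rec.items():
                if field == "system" or value is None:
                    continue
                if field not in merged:
                    merged[field] = value
                elif merged[field] != value:
                    conflicts.setdefault(field, set()).update({merged[field], value})
        return merged, conflicts

    assert len({match_key(r["name"]) for r in RECORDS}) == 1  # all three records match
    record, conflicts = golden_record(RECORDS)
    print(record)     # the consolidated view other systems should consume
    print(conflicts)  # e.g. "region" disagrees -> a data steward must decide

The conflicts output is where the governance half of MDM comes in: someone has to own the survivorship rule that decides which value wins.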

The result: executives make decisions based on inaccurate or inconsistent data. Often they may not even be aware the data is problematic, or they may feel there is nothing they can do about it. Yet the ability to create, maintain, and leverage a single, trusted, shareable version of master data, notes Gartner, is an essential requirement for business processes and meaningful business intelligence (BI)—and poor master data also carries the risk of noncompliance with whatever regulatory mandates apply to a particular piece of data.

This is where MDM comes in. MDM is a technology-enabled discipline in which business and the IT organization work together to ensure the uniformity, accuracy, stewardship, semantic consistency, and accountability of the enterprise’s official, shared master data assets.

Advanced automated technology is essential to avoid the inconsistencies and inaccuracies that plague corporate data today. Gartner identifies 12 players in its report on MDM for product data and nine players in its MDM for customer data report. IBM and Oracle are the clear Gartner leaders in each group. Earlier this week IBM introduced InfoSphere MDM v10 here. View Oracle’s MDM products here.

To put an end to dueling spreadsheets and inconsistent data you need MDM. Even then it isn’t easy, although IBM insists it has lowered the skill set required to implement MDM, accelerated the overall time to value, reduced risk by decreasing the time to go live with an MDM project, and lowered the overall cost. That’s as good a start as any.

Big Data is Coming

When an expert tells you that the amount of data is exploding it usually leads to a pitch to buy more storage capacity or tools to better manage the storage you have. While that’s probably not bad advice these days, it misses the most important point.

The point usually overlooked is that the data, when used right, has great business value. “Analyzing large data sets—so-called big data—will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus,” declares McKinsey in a recent research report. Click here for a copy.

IBM, which is in the business of selling technology to generate, capture, store, analyze, and apply data, has been pushing the idea of the business value of data for years.  Next week it is holding its annual Information on Demand conference in Las Vegas. There it will showcase its thought leadership and latest technologies for dealing with big data. Expect a number of product announcements.

Big data refers to the large amounts of unstructured data captured through RFID, myriad sensors and meters, social media, and even digital video cameras. Analyzing this data can produce valuable insights that can be applied in a variety of ways to achieve competitive advantage. Initially, big data was something only big utilities or giant retailers like Wal*Mart were concerned with. But that’s no longer the case.

For example, one retailer applied analytics to in-store and parking lot security surveillance video and used the insights gained to change store layout to facilitate traffic flow and to optimize staffing at cash registers. According to the McKinsey study, companies that tap the presence-sensing and location data from mobile devices could capture $600 billion in surplus consumer spend. If applied to healthcare, big data could produce more than $300 billion in value every year, the researchers estimate. Two-thirds of that could go to reducing US health care expenditure by about eight percent.

The McKinsey researchers go on to identify five broad ways in which using big data can create value:

  1. Make information transparent and usable at much higher frequency and to a much deeper level of detail
  2. Collect more accurate and detailed performance information on everything from product inventories to sick days, thereby exposing variability and boosting performance
  3. Pursue ever-narrower segmentation of customers to more precisely tailor products or services for purposes of increasing spend and optimizing pricing (see the sketch after this list)
  4. Tap sophisticated analytics, including real-time analytics, to substantially improve and speed decision making
  5. Leverage big data to drive the development of the next generation of products and services, particularly through analysis of social media content
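
To make the segmentation idea in item 3 concrete, here is a minimal sketch that clusters customers by annual spend and visit frequency. The data, the two features, and the choice of k-means are illustrative assumptions, not McKinsey’s or any vendor’s method; a real big data pipeline would work on far larger volumes and far richer behavioral features.

    # Minimal customer segmentation sketch (hypothetical data and features).
    import numpy as np
    from sklearn.cluster import KMeans

    # Each row: [annual spend in $, store visits per year] -- made-up numbers
    customers = np.array([
        [250,   4], [300,   6], [280,   5],   # occasional shoppers
        [1200, 25], [1100, 22], [1300, 28],   # regulars
        [5200, 60], [4800, 55], [5100, 58],   # high-value frequent buyers
    ], dtype=float)

    # Standardize features so spend doesn't dominate visit frequency
    scaled = (customers - customers.mean(axis=0)) / customers.std(axis=0)

    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

    for segment in range(3):
        members = customers[model.labels_ == segment]
        print(f"Segment {segment}: {len(members)} customers, "
              f"avg spend ${members[:, 0].mean():,.0f}, "
              f"avg visits {members[:, 1].mean():.0f}/yr")

Each segment can then get its own pricing, promotions, or service level—the tailoring the McKinsey list refers to.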

Technology vendors are scrambling to deliver big data now. In September EMC announced a major big data initiative here. Oracle addressed big data early in October at its OpenWorld conference here. HP’s announced acquisition of Autonomy signals its intention to compete in the big data space too.

All companies in all industries, regardless of size, need to take big data seriously. This isn’t, however, just an IT responsibility. IT can deploy the tools and technology and collect and manage the data, but the business managers need to get deeply involved.

How? Start with a data wish list—information the business would like to have—and then IT and the business together can figure out where and how that data can be acquired, or what needs to be generated, captured, metered, and measured to get it. Next, the business will have to come up with the analytics to produce meaningful insights; this goes far beyond simple sorting or filtering. Finally, the business must be willing to act on what it learns. That may be the hardest part.

The Challenge of Managing Multi-Platform Virtualization

While virtualization has seen widespread adoption over the past decade, it was long considered an x86-VMware phenomenon. Sure, there are other hypervisors, but for most organizations VMware was synonymous with virtualization; even on the x86 platform, Microsoft Hyper-V was the also-ran. Of course, virtualization has been baked into the mainframe for decades, but most organizations only began to take notice with the rise of VMware.

Virtualization provides the foundation for cloud computing, and as cloud computing gains traction across all segments of the computing landscape, virtualization increasingly is understood as a multi-platform, multi-hypervisor game. Today’s enterprise is likely to be widely heterogeneous, running virtualized Windows and Linux systems across x86, Power, and System z platforms. By the end of the year, expect to see both Windows and Linux applications running virtualized on x86, Power Systems, and the zEnterprise mainframe.

Welcome to the virtualized multi-platform, multi-hypervisor enterprise. While it brings benefits—choice, flexibility, cost savings—it also comes with challenges, the biggest of which is management complexity. Growing virtualized environments have to be tightly managed or they can easily spin out of control, with phantom and rogue VMs popping up everywhere and gobbling system resources. The typical platform- and hypervisor-specific tools simply won’t do the trick; this requires tools that manage virtualization across the full range of platforms and hypervisors.

Not surprisingly, IBM, which probably has more virtualized platforms and hypervisors than any other vendor, also is the first with cross-platform, cross-hypervisor management in the newest version of VMControl, version 2.4, part of IBM’s Systems Director family of management tools. This is truly multi-everything management. From a single console you can control VMs running on x86 Windows, x86 Linux, and Linux on Power. One administrator can start, stop, move, and otherwise manage virtual machines, even across platforms. And it is agnostic as far as the hypervisor goes; it can handle VMware, Hyper-V, and KVM. It also integrates with Microsoft System Center Configuration Manager and VMware vCenter. (I’ve been told by IBM that it also will be able to manage VMs running on the zEnterprise platform soon, after a few issues are resolved regarding the mainframe’s already robust virtualization management.)
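
To illustrate the idea of a single control plane spanning hypervisors, here is a sketch—not VMControl itself, which is driven from a management console—using the open source libvirt API, which exposes a common interface to KVM, VMware ESX, Hyper-V, and others. The host names in the connection URIs are placeholders.

    # Sketch of a cross-hypervisor VM inventory using the common libvirt API.
    # The connection URIs are placeholders; each needs the matching libvirt
    # driver and credentials configured on the management host.
    import libvirt

    HYPERVISOR_URIS = [
        "qemu:///system",                          # local KVM
        "esx://esx01.example.com/?no_verify=1",    # VMware ESX host (hypothetical)
        "hyperv://hyperv01.example.com",           # Microsoft Hyper-V host (hypothetical)
    ]

    for uri in HYPERVISOR_URIS:
        try:
            conn = libvirt.open(uri)
        except libvirt.libvirtError as err:
            print(f"{uri}: could not connect ({err})")
            continue
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "stopped"
            print(f"{uri}: {dom.name()} [{state}]")
            # The same API lets one administrator stop or start a VM,
            # e.g. dom.shutdown() or dom.create().
        conn.close()

The point is the abstraction: one interface, whichever hypervisor happens to sit underneath—the same idea VMControl delivers through a console rather than a script.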

The multi-platform VMControl 2.4 dovetails nicely with another emerging virtualization trend—open virtualization. In just a few months the Open Virtualization Alliance has grown from the initial four founders (IBM, Red Hat, Intel, and HP) to over 200 members. The open source KVM hypervisor championed by the alliance handles both Linux and Windows workloads, allowing organizations to dodge yet another element of vendor lock-in. One organization has already used that flexibility to avoid higher charges by running the open source hypervisor for a test and development environment. That kind of open virtualization requires exactly the sort of multi-platform virtualization management VMControl 2.4 delivers.

Multi-platform is where enterprise virtualization has to go. Eventually BottomlineIT expects the other hypervisors to get there, but it may take a while.

Findings: Verizon Data Breach Investigations Report

According to a study conducted by the Verizon RISK Team with cooperation from the U.S. Secret Service and the Dutch High Tech Crime Unit, a total of 3.8 million data records were compromised across 760 reported data breaches in 2010. Was yours one of them?

Actually, this is good news. The study indicated a significant decrease in the number of compromised records from the prior two years. The researchers attributed the decline to the collaborative effort between the US Secret Service and industry to combat cybercrime, as well as to increased security awareness.

Last week, IBM also released results from its mid-year X-Force 2011 Trend and Risk Report, highlighting that public and private organizations around the world faced increasingly sophisticated, customized IT security threats in 2011.  The results demonstrate the rapidly changing security landscape characterized by high-profile attacks, growing mobile vulnerabilities and more sophisticated threats such as whaling, a form of phishing that focuses on a small targeted group within an organization.

As the X-Force report notes, the security environment is changing: the boundaries of business infrastructure are being extended or obliterated by the emergence of cloud, mobility, social business, big data and more. At the same time, the attacks are getting more sophisticated, often showing evidence of extensive intelligence collection and careful, patient, long term planning. The repercussions of these attacks are large enough to move security discussions out of technical circles and into the board room.

Paradoxically, there have been significant gains in the fight to secure the Internet this year with many vulnerability and attack statistics significantly improving as the Verizon Data Breach data suggests. The good guys may be winning some key battles,  but the fight is far from over. The bad guys are simply moving on to new battlefields, including smartphones and tablets. The rapid proliferation of these devices combined with a consolidation of operating systems has caused attackers to finally warm up to the opportunities these devices represent. As such, IBM X-Force research is predicting that exploits targeting vulnerabilities that affect mobile operating systems will more than double from 2010.

Computer forensic and IT security expert Peter Kiilu reviewed the key findings and learning points from the Verizon 2011 Data Breach Investigations Report. As part of the report, he suggests controls that companies can implement to significantly reduce the risk of data breach and the related financial losses.

Let’s look at just a few of the key findings. For example, the victims of these breaches usually weren’t even aware they had lost data or experienced a breach until they were notified by a third party. That’s bad, especially if the party notifying you is a customer or a regulator. Not surprisingly, Kiilu noted that most of the victims subject to PCI-DSS had not achieved compliance. If you handle credit card information, don’t shortchange PCI compliance.

Hacking and malware were the most common threat actions. In fact, the top four threat events all involved external agents hacking into systems and installing malware to compromise the confidentiality and integrity of servers.

The goal of these attacks is to get data; this isn’t just joyriding through your systems. The three most common types of data misuse observed last year were embezzlement, skimming, and related fraud. The victims, by the way, were targets of opportunity rather than specifically chosen. You can probably conclude they were targeted because they were easy targets. As Kiilu noted, almost all the breaches were avoidable, without difficult or expensive corrective measures.

Many managers, especially CFOs, voice security concerns about cloud computing. While there are legitimate concerns about cloud security, the study makes it clear that cloud computing, and any technology specific to the cloud, was not the main culprit behind the data breaches.

What were the main culprits? According to Kiilu, the problems revolved around giving up control of information assets and data and not controlling the associated risk. In an upcoming report, BottomlineIT will take up Kiilu’s recommended defensive actions.
