Posts Tagged EMC

Fueled by SMAC, Tech M&A Activity to Heat Up

Corporate professional services firm BDO USA polled approximately 100 executives of U.S. tech companies for its 2014 Technology Outlook Survey and found them firm in the belief that tech mergers and acquisitions would either stay at the same rate as last year (40%) or increase (43%). And this isn’t a recent phenomenon.

M&A has been widely adopted across a range of technology segments not only as a vehicle to drive growth but, more importantly, as a way to stay at the leading edge of a business and technology environment that is changing rapidly, spurred by cloud and mobile computing. And fueling this M&A wave is SMAC (Social, Mobile, Analytics, Cloud).

SMAC appears to be triggering a scramble among large, established blue-chip companies like IBM, EMC, HP, Oracle, and others to acquire almost any promising upstart out there. Their fear: becoming irrelevant, especially among the young, most highly sought-after demographics. SMAC has become the code word (code acronym, anyway) for the future.

EMC, for example, has evolved from a leading storage infrastructure player into a broad-based technology giant, driven by 70 acquisitions over the past 10 years. Since this past August, IBM has been involved in a string of acquisitions amounting to billions of dollars, touching on everything from mobile networks for big data analytics to mobile device management and cloud services integration.

Google, however, probably should be considered the poster child for technology M&A. According to published reports, Google has been acquiring, on average, more than one company per week since 2010. The giant search engine and services company’s biggest acquisition to date has been the purchase of Motorola Mobility, a mobile device (hardware) manufacturer, for $12.5 billion. The company also purchased Israeli startup Waze in June 2013 for almost $1 billion. Waze, a GPS-based navigation application for mobile phones, has given Google a strong position in the mobile navigation business, even besting Apple’s iPhone navigation.

Top management has embraced SMAC-driven M&A as the fastest, easiest, and cheapest way to achieve strategic advantage through new capabilities and the talent that developed those capabilities. Sure, companies could recruit and build those capabilities on their own, but it could take years to bring a given feature to market that way; by then, in today’s fast-moving competitive markets, the company would be doomed to forever playing catch-up.

Even with the billion-dollar and multibillion-dollar price tags some of these upstarts are commanding, strategic acquisitions like Waze, IBM’s SoftLayer, or EMC’s XtremIO have the potential to be game changers. That’s the hope, of course. But it can be risky, although the risk can be managed.

And the best way to manage SMAC merger risk is a flexible IT platform that can quickly absorb those acquisitions and integrate and share their information, along with, of course, a coherent strategy for leveraging the new acquisitions. What you need to avoid is ending up with a bunch of SMAC piece parts that don’t fit together.

Can Flash Replace Hard Disk for Enterprise Storage?

Earlier this month IBM announced a strategic initiative, the IBM FlashSystem, to drive Flash technology deeper into the enterprise. The IBM FlashSystem is a line of all-Flash storage appliances based on technology IBM acquired from Texas Memory Systems.

IBM’s intent over time is to replace the hard disk drive (HDD) for enterprise storage with flash. Flash can speed the response of servers and storage systems to data requests from milliseconds to microseconds, an improvement of orders of magnitude. And because it is all electronic, with nothing mechanical involved, and is being delivered cost-efficiently even at petabyte scale, it can remake data center economics, especially for transaction-intensive and IOPS-intensive situations.

For example, the IBM FlashSystem 820 is the size of a pizza box but 20x faster than spinning hard drives and can store up to 24 TB of data. An entry-level FlashSystem 820 configuration (10 TB usable, RAID 5) carries an approximate street price of $150K, or about $15 per gigabyte. At the high end, you can assemble a 1 PB FlashSystem that fits in one rack and delivers 22 million I/Os per second (IOPS). You would need 630 racks of high-capacity hard disk drives or 315 racks of performance-optimized disk to generate an equal number of IOPS.
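
To put those figures in perspective, here is a quick back-of-the-envelope calculation using the numbers cited above; the prices and rack counts are the article’s, and the arithmetic is only illustrative.

```python
# Back-of-the-envelope flash economics using the figures cited above.
entry_price_usd = 150_000        # approximate street price, entry FlashSystem 820
entry_usable_tb = 10             # 10 TB usable (RAID 5)

cost_per_gb = entry_price_usd / (entry_usable_tb * 1_000)
print(f"Flash cost: ~${cost_per_gb:.0f}/GB")            # roughly $15/GB

flash_racks = 1                  # 1 PB FlashSystem fits in a single rack
hdd_racks_high_capacity = 630    # racks of high-capacity HDD for equal IOPS
hdd_racks_performance = 315      # racks of performance-optimized disk

print(f"Footprint vs. high-capacity HDD: {hdd_racks_high_capacity / flash_racks:.0f}x smaller")
print(f"Footprint vs. performance disk:  {hdd_racks_performance / flash_racks:.0f}x smaller")
```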

For decades storage economics has been driven by the falling cost per unit of storage, and storage users have benefited from a remarkable ride down the cost curve thanks to Moore’s law. The cost per gigabyte for hard disk drive (HDD) has dropped steadily, year after year. You now can buy a slow, USB-connected 1TB consumer-grade disk drive for under $89!

With low cost per gigabyte storage, storage managers could buy cheap gigabytes, which enabled backup to disk and long-term disk archiving. Yes, tape is even cheaper on a cost per gigabyte basis but it is slow and cumbersome and prone to failure. Today HDD rules.

Silicon-based memory, however, has been riding the Moore’s Law cost slope too. In the last decade memory has emerged as a new storage medium in the form of RAM, DRAM, cache, flash, and solid state disk (SSD) technology. Prohibitively expensive for mass storage initially, the magic of Moore’s law combined with other technical advances and mass-market efficiencies has made flash something to consider seriously for enterprise production storage.

The IBM FlashSystem changes data center economics. One cloud provider reported achieving 400K IOPS by deploying 5 TB of flash in 3.5 inches of rack space rather than 1,300 hard disks, and it did so at one-tenth the cost. Overall, Wikibon reports an all-flash approach will lower total system costs by 30%; that’s $4.9 million for all flash compared to $7.1 million for hard disk. Specifically, it reduced software license costs 38%, required 17% fewer servers, and lowered environmental costs by 74% and operational support costs by 35%. At the same time it boosted storage utilization by 50% while reducing maintenance and simplifying management, with corresponding labor savings. Combine flash with compression, deduplication, and thin provisioning and the economics look even better.

For data center managers, this runs counter to everything they learned about the cost of storage. Traditional storage economics starts with the cost of hard disk storage being substantially less than the cost of SSD or flash on a $/GB basis. Organizations could justify SSD only by using it in small amounts to tap its sizeable cost/IOPS advantage for IOPS-intensive workloads.

Any HDD price/performance advantage is coming to an end. As reported in PC World, IBM Senior Vice President Steve Mills noted that generic hard drives currently cost about $2 per gigabyte, an enterprise hard drive costs about $4 per gigabyte, and a high-performance hard drive runs about $6 per gigabyte. If an organization stripes its data across more disks for better performance, the cost rises to about $10 per gigabyte. In some cases, where performance is critical, hard-drive costs can skyrocket to $30 or $50 per gigabyte.
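
Running those per-gigabyte tiers against the roughly $15/GB flash figure cited earlier makes the comparison concrete; again, these are simply the article’s numbers run through a few lines of Python.

```python
# Compare the HDD cost tiers quoted above with the ~$15/GB flash figure cited earlier.
hdd_cost_per_gb = {
    "generic": 2,
    "enterprise": 4,
    "high-performance": 6,
    "striped for performance": 10,
    "performance-critical (upper bound)": 50,
}
flash_cost_per_gb = 15  # entry FlashSystem 820 street price / usable capacity

for tier, cost in hdd_cost_per_gb.items():
    ratio = flash_cost_per_gb / cost
    print(f"{tier:<36} ${cost:>2}/GB -> flash is {ratio:.1f}x the $/GB")
```

Once hard disk is configured for performance rather than raw capacity, the dollars-per-gigabyte gap narrows sharply and can even reverse, which is the point Mills is making.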

From a full systems perspective (TCO for storage) Flash looks increasingly competitive. Said Ambuj Goyal, General Manager, Systems Storage, IBM Systems & Technology Group: “The economics and performance of Flash are at a point where the technology can have a revolutionary impact on enterprises, especially for transaction-intensive applications.” But this actually goes beyond just transactions. Also look at big data analytics workloads, technical computing, and any other IOPS-intensive work.

Almost every major enterprise storage vendor—EMC, NetApp, HP, Dell, Oracle/Sun—is adding SSD to their storage offerings. It is time to start rethinking your view of storage economics when flash can replace HDD and deliver better performance, utilization, and reliability even while reducing server software licensing costs and energy bills.

New Products Reduce Soaring Storage Costs

The latest EMC-sponsored IDC Digital Universe study projects that the digital universe will reach 40 zettabytes (ZB) by 2020, a 50-fold growth from the beginning of 2010. Do you wonder why your storage budget keeps increasing? And the amount of data that requires protection (backup of some sort) is growing faster than the digital universe itself. This clearly is not good for the organization’s storage budget.
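
For a sense of what 50-fold growth over a decade means for a budget, here is a quick compound-growth calculation based on the study’s figures; it is purely illustrative.

```python
# 50-fold growth from the start of 2010 to 2020 implies this compound annual growth rate.
growth_factor = 50
years = 10
cagr = growth_factor ** (1 / years) - 1
print(f"Implied data growth: ~{cagr:.0%} per year")  # roughly 48% per year
```

Data compounding at nearly 50% a year explains the budget pressure better than any single line item.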

Worse yet, from a budget standpoint, the investment in IT hardware, software, services, telecommunications, and staff that could be considered the infrastructure of the digital universe will grow by 40% between 2012 and 2020. Investment in storage management, security, big data, and cloud computing will grow considerably faster.

Last July BottomlineIT partially addressed this issue with a piece on reducing your storage debt, here. Recent products from leading storage players promise to help you do it more easily.

Let’s start with EMC, whose most recent storage offering is the VMAX 40K Enterprise Storage System. Enterprise-class, it promises to deliver up to triple the performance and more than twice the usable capacity of any other offering in the industry; at least, that was the case seven months ago. But things change fast.

With the VMAX comes an enhanced storage tool that simplifies and streamlines storage management, enabling fewer administrators to handle more storage. EMC also brings a revamped storage tiering tool, making it easier to move data to less costly and lower performing storage when appropriate. This allows you to conserve your most costly storage for the data most urgently requiring it.

HP, which has been struggling in general through a number of self-inflicted wounds, continues to offer robust storage products. Recognizing that today’s storage challenges (vastly more data, different types of data, and more and different needs for that data) require new approaches, HP revamped its Converged Storage architecture. According to an Evaluator Group study, many organizations use only 30% of their physical disk capacity, effectively wasting the rest while forcing their admins to wrestle with multiple disparate storage products.

The newest HP storage products address this issue for midsize companies. They include the HP 3PAR StoreServ 7000, which offers large enterprise-class storage availability and quality-of-service features at a midrange price point, and HP StoreAll, a scalable platform for object and file data access that provides a simplified environment for big data retention and cloud storage while reducing the need for additional administrators or hardware. Finally, HP introduced the StoreAll Express Query, a special data appliance that lets organizations run search queries orders of magnitude faster than previous file system search methods, expediting informed decision-making based on the most current data.

IBM revamped its storage line too for the same reasons.  Its sleekest offering, especially for midsize companies, is the Storwize V7000 Unified, which handles block and file storage.  It also comes as a blade for IBM’s hybrid (mixed platforms) PureSystems line, the Storwize Flex V7000. Either way it includes IBM’s Real-Time Compression (RtC).

RtC alone can save considerable money by reducing the amount of storage capacity an organization needs to buy, by delaying the need to acquire more storage as the business grows, and by speeding the performance of storage-related functions. While other vendors offer compression, none can do what RtC does: it compresses active (production) data with no impact on application performance. This is an unmatched and valuable achievement.

On top of that, the V7000 applies built-in expertise to simplify storage management. It enables an administrator who is not a storage specialist to perform almost all storage tasks quickly, easily, and efficiently. Fewer, less specialized administrators can handle increasingly complex storage workloads and perform sophisticated storage tasks flawlessly, which substantially reduces the large labor cost associated with storage.

NetApp also is addressing the same storage issues for midsize companies through its NetApp FAS3200 Series. With a new processor and memory architecture it promises up to 80% more performance, 100% more capacity, non-disruptive operations, and industry-leading storage efficiency.

Data keeps growing, and you can’t NOT store it. New storage products enable you to maximize storage utilization, optimize the business value from data, and minimize labor costs.

PaaS Gains Cloud Momentum

Guess you could say Gartner is bullish on Platform-as-a-Service (PaaS). The research firm declares: PaaS is a fast-growing core layer of the cloud computing architecture, but the market for PaaS offerings is changing rapidly.

The other layers include Software-as-a-Service (SaaS) and Infrastructure-as-a-Service (IaaS) but before the industry build-out of cloud computing is finished (if ever), expect to see many more X-as-a-Service offerings. Already you can find Backup-as-a-Service (BaaS). Symantec, for instance, offers BaaS to service providers, who will turn around and offer it to their clients.

But the big cloud action is around PaaS. Late in November Red Hat introduced OpenShift Enterprise, an enterprise-ready PaaS product designed to be run as a private, public or hybrid cloud. OpenShift, an open source product, enables organizations to streamline and standardize developer workflows, effectively speeding the delivery of new software to the business.

Previously cloud strategies focused on SaaS, in which organizations access and run software from the cloud. Salesforce.com is probably the most familiar SaaS provider. There also has been strong interest in IaaS, through which organizations augment or even replace their in-house server and storage infrastructure with compute and storage resources from a cloud provider. Here Amazon Web Services is the best known player although it faces considerable competition that is driving down IaaS resource costs to pennies per instance.

PaaS, essentially, is an app dev/deployment and middleware play. It provides a platform (hence the name) to be used by developers in building and deploying applications to the cloud. OpenShift Enterprise does exactly that by giving developers access to a cloud-based application platform on which they can build applications to run in a cloud environment. It automates much of the provisioning and systems management of the application platform stack in a way that frees the IT team to focus on building and deploying new application functionality and not on platform housekeeping and support services. Instead, the PaaS tool takes care of it.

OpenShift Enterprise, for instance, delivers a scalable and fully configured application development, testing and hosting environment. In addition, it uses Security-Enhanced Linux (SELinux) for reliable security and multi-tenancy. It also is built on the full Red Hat open source technology stack including Red Hat Enterprise Linux, JBoss Enterprise Application Platform, and OpenShift Origin, the initial free open source PaaS offering. JBoss Enterprise Application Platform 6, a middleware tool, gives OpenShift Enterprise a Java EE 6-certified on-premise PaaS capability.  As a multi-language PaaS product, OpenShift Enterprise supports Java, Ruby, Python, PHP, and Perl. It also includes what it calls a cartridge capability to enable organizations to include their own middleware service plug-ins as Red Hat cartridges.
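
To make the developer-facing side concrete, here is a minimal sketch, in Python (one of the languages OpenShift Enterprise supports), of the kind of self-contained web application a developer hands to a PaaS to deploy; the platform, not the developer, supplies the provisioning, scaling, and surrounding stack. The code is generic and deliberately avoids any OpenShift-specific API.

```python
# A minimal, framework-free web app of the sort a PaaS would host and scale.
# Uses only the Python standard library; no OpenShift-specific APIs are assumed.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # The platform routes HTTP requests to this WSGI callable; the developer
    # writes only application logic, not provisioning or server plumbing.
    body = b"Hello from a PaaS-hosted app\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # Local test server; in a PaaS the platform supplies the server and port.
    make_server("", 8080, application).serve_forever()
```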

Conventional physical app dev is a cumbersome process entailing as many as 20 steps from idea to deployment. Make it a virtual process and you can cut the number of steps down to 14, a modest improvement. As Red Hat sees it, the combination of virtualization and PaaS can cut the number of steps to six: idea, budget, code, test, launch, and scale. PaaS, in effect, shifts app dev from a craft undertaking to an automated, cloud-ready assembly line. As such, it enables faster time to market and saves money.

Although Red Hat is well along in the PaaS market and the leader in open source PaaS, other vendors already are jumping in and more will join them. IBM has SmartCloud Application Services as its PaaS offering. Oracle offers a PaaS product as part of the Oracle Cloud Platform. EMC offers PaaS consulting and education but not a specific technology product. When HP identifies PaaS solutions, it directs you to its partners. A recent list of the top 20 PaaS vendors identifies mainly smaller players, with CA, Google, Microsoft, and Salesforce.com being the exceptions.

A recent study by IDC projects the public cloud services market to hit $98 billion by 2016. The PaaS segment, the fastest growing part, will reach about $10 billion, up from barely $1 billion in 2009. There is a lot of action in the PaaS segment, but if you are looking for the winners, according to IDC, focus on PaaS vendors that provide a comprehensive, consistent, and cost effective platform across all cloud segments (public, private, hybrid). Red Hat OpenShift clearly is one; IBM SmartCloud Application Services and Microsoft Azure certainly will make the cut. Expect others.

Speed Time to Big Data with Appliances

Hadoop will be coming to enterprise data centers soon as the big data bandwagon picks up steam. Speed of deployment is crucial. How fast can you deploy Hadoop and deliver business value?

Big data refers to running analytics against large volumes of unstructured data of all sorts to get closer to the customer, combat fraud, mine new opportunities, and more. Published reports have companies spending $4.3 billion on big data technologies by the end of 2012. But big data begets more big data, triggering even more spending, estimated by Gartner to hit $34 billion for 2013 and over a 5-year period to reach as much as $232 billion.

Most enterprises deploy Hadoop on large farms of commodity Intel servers. But that doesn’t have to be the case. Any server capable of running Java and Linux can handle Hadoop. The mainframe, for instance, should make an ideal Hadoop host because of the sheer scalability of the machine. Same with IBM’s Power line or the big servers from Oracle/Sun and HP, including HP’s new top of the line Itanium server.

At its core, Hadoop is a Java-based framework that typically runs on Linux and is usually deployed on x86 systems. The Hadoop community has effectively disguised its complexity to speed adoption by the mainstream IT community through tools like Sqoop, which imports data from relational databases into Hadoop, and Hive, which lets you query the data using a SQL-like language called HiveQL. Pig is a high-level platform for creating the MapReduce programs used with Hadoop. So any competent data center IT group could embark on a Hadoop big data initiative.
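
Underneath those tools, Hadoop jobs boil down to MapReduce. As a rough illustration of how little code a basic job requires, here is a word-count mapper and reducer written for Hadoop Streaming in Python; it is a sketch only, and the job submission details depend on your cluster.

```python
#!/usr/bin/env python3
# Word count for Hadoop Streaming: run with "mapper" or "reducer" as the argument.
# Hadoop pipes input lines in on stdin and collects tab-separated key/value pairs from stdout.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")             # emit (word, 1) for every word seen

def reducer():
    current, count = None, 0
    for line in sys.stdin:                   # the shuffle phase delivers keys sorted
        word, value = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["mapper"] else reducer()
```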

Big data analytics, however, doesn’t even require stock Apache Hadoop. Hadoop distributions such as Hortonworks Data Platform (HDP) and MapR, along with alternatives like IBM GPFS-SNC (Shared Nothing Cluster), Lustre, HPCC Systems, Backtype Storm (acquired by Twitter), and three from Microsoft (Azure Table, Project Daytona, LINQ), all promise big data analytics capabilities.

Appliances are shaping up as an increasingly popular way to get big data deployed fast. Appliances trade flexibility for speed and ease of deployment. By packaging pre-configured, integrated hardware and software, they arrive ready to run right out of the box. The appliance typically comes with built-in analytics software that effectively masks big data complexity.

For enterprise data centers, the three primary big data appliance players are:

  • IBM—PureData, the newest member of its PureSystems family of expert systems. PureData is delivered as an appliance that promises to let organizations quickly analyze petabytes of data and then intelligently apply those insights to business issues across the organization. The machines come in three workload-specific models, optimized respectively for transactional, operational, and big data analytics workloads.
  • Oracle—the Oracle Big Data Appliance is an engineered system optimized for acquiring, organizing, and loading unstructured data into Oracle Database 11g. It combines optimized hardware components with new software to deliver a big data solution, and it incorporates Cloudera’s Apache Hadoop distribution with Cloudera Manager. A set of connectors is also available to help with data integration.
  • EMC—the Greenplum modular data computing appliance includes Greenplum Database for structured data, Greenplum HD for unstructured data, and DIA Modules for Greenplum partner applications, such as business intelligence (BI) and extract, transform, and load (ETL) tools, configured into one appliance cluster via a high-speed, high-performance, low-latency interconnect.

 And there are more. HP offers HP AppSystem for Apache Hadoop, an enterprise-ready appliance that simplifies and speeds deployment while optimizing performance and analysis of extreme scale-out Hadoop workloads. NetApp offers an enterprise-class Hadoop appliance that may be the best bargain given NetApp’s inclusive storage pricing approach.

As much as enterprise data centers loathe deploying appliances, if you are under pressure to get on the big data bandwagon fast and start showing business value almost immediately, appliances will be your best bet. And there are plenty to choose from.

EMC Introduces New Mainframe VTL

Competition suddenly is heating up at the top of the mainframe storage world. EMC introduced the high-end DLm8000, the latest in its family of virtual tape library (VTL) products. This one is aimed at large enterprise mainframe environments and promises to ensure consistency of data at both production and recovery sites and to provide the shortest possible RPO and RTO for critical recovery operations.

It is built around EMC VMAX enterprise storage and its SRDF replication, and it relies on synchronous replication to ensure immediate data consistency between the primary and target storage by writing the data simultaneously at each. Synchronous replication addresses the potential latency-mismatch problem that occurs with the usual asynchronous replication, where a lag between writes to the primary and writes to the backup target storage can result in inconsistent data.
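
The difference between the two approaches is easy to see in sketch form. Here is a rough Python illustration of the two write paths; it is not SRDF code, just the general shape of synchronous versus asynchronous replication.

```python
# Illustrative contrast between synchronous and asynchronous replication.
# Not EMC SRDF code; just the shape of the two write paths.
import queue

replication_queue = queue.Queue()

def write_synchronous(primary, target, block):
    primary.write(block)
    target.write(block)           # caller waits until the remote copy is written
    return "acknowledged"         # primary and target are identical at ack time

def write_asynchronous(primary, block):
    primary.write(block)
    replication_queue.put(block)  # remote copy happens later, in the background
    return "acknowledged"         # target may lag behind: the latency mismatch

def replication_worker(target):
    while True:
        target.write(replication_queue.get())  # drains the backlog eventually
```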

Usually this mismatch exists only for a brief period. EMC suggests the issue is much more serious, especially for large banks and financial firms, its key set of mainframe target customers. Large financial organizations with high transaction volumes, EMC notes, have historically faced recovery challenges because their mainframe tape and DASD data at production and secondary sites were never fully in sync. As a result, recovery procedures often stalled until the differences between the two data sets were resolved, which slowed the resulting failover. This may indeed be a real issue, but only for a small number of companies, specifically those that need an RPO and RTO of just about zero.

EMC used the introduction of the DLm8000 to beat up tape backup in general. Physical tape transportation by third party records management companies, EMC notes, hinders recovery efforts by reducing what it refers to as the granularity of RPOs while dramatically increasing the RTO.  In addition, periodic lack of tape drive availability for batch processing and for archive and backup applications can impair SLAs, further increasing the risks and business impact associated with unplanned service interruptions. That has been long recognized, but remember EMC is a company that sells disk, not tape storage, and ran a Tape Sucks campaign after its purchase of Data Domain. What would you expect them to say?

The DLm8000 delivers throughput of up to 2.7 GB/s, which EMC claims is 2.5x the performance of its nearest competitor. BottomlineIT can’t validate that claim, but EMC does have a novel approach to generating the throughput. The DLm8000 is packed with eight Bus-Tech engines (EMC acquired Bus-Tech in November 2010), and it assigns two FICON connections to each engine for a total of 16 FICON ports cranking up the throughput. No surprise it can aggregate that level of throughput.

EMC has not announced pricing for the DLm8000. The device, however, is the top of its VTL lineup and VMAX enterprise storage tops its storage line. With high throughput and synchronous replication, this product isn’t going to be cheap. However, if you need near zero RPO and RTO then you have only a few choices.

Foremost among those choices should be the IBM TS7700 family, particularly the TS7740 and the TS7720. Both systems provide VTL connectivity. The TS7700 avoids the latency-mismatch issue by using a buffer to get optimal write performance and then periodically synching primary and target data. “Synchronous as EMC does it for VTL is overkill,” says an IBM tape manager. The EMC approach essentially ignores the way mainframe tape has been optimized.

Among the other choices are the Oracle Virtual Storage Manager and Virtual Library Extension. Oracle uses StorageTek tape systems. The Oracle approach promises to improve tape drive operating efficiencies and lower TCO by optimizing tape drive and library resources through a disk-based virtual tape architecture. HDS also has a mainframe tape backup and VTL product that uses Luminex technology.

EMC is a disk storage company and its DLm8000 demonstrates that. When it comes to backup, however, mainframe shops are not completely averse to tape. Disk-oriented VTL has some advantages but don’t expect mainframe shops to completely abandon tape.

In other storage news, IBM recently announced acquiring Texas Memory Systems (TMS), a long established (1978) Texas company that provides solid state memory to deliver significantly faster storage throughput and data access while consuming less power. TMS offers its memory as solid state disk (SSD) through its RamSan family of shared rackmount systems and Peripheral Component Interconnect Express (PCIe) cards. SSD may be expensive on a cost per gigabyte basis but it blows away spinning hard disk on a cost per IOPS.

Expect IBM to use TMS’s SSD across its storage products as one of its key future storage initiatives. Almost every other storage vendor is incorporating some form of SSD or flash into its storage architecture.

Finally, IBM today introduced the latest rev of its top-end mainframe. The new machine, the zEnterprise EC12 (zEC12), delivers more performance and capacity than its predecessor, the z196. Both machines support multi-platform hybrid computing.

With a 5.5 GHz core processor, up from 5.2 GHz in the z196, and an increase in the number of cores per chip (from four to six), the zEC12 is indeed faster. Supporting 101 configurable engines compared to 80 on the z196, the zEC12, IBM calculates, delivers 50% more total capacity in the same footprint.

Reduce Your Storage Technology Debt

The idea of application debt or technology debt is gaining currency. At Capgemini, technology debt is directly related to quality. The consulting firm defines it as “the cost of fixing application quality problems that, if left unfixed, put the business at serious risk.” To Capgemini, not every system clunker adds to the technical debt, only those highly likely to cause business disruption. In short, the firm does not count all problems, just the serious ones.

Accenture’s Adam Burden, executive director of the firm’s Cloud Application and Platform Service and a keynoter at Red Hat’s annual user conference in Boston in June, brought up technology debt too. You can watch the video of his presentation here.

Does the same idea apply to storage? You could define storage debt as storage technologies, designs, and processes that over time hinder the efficient delivery of storage services to the point where it impacts business performance. By this definition, a poorly architected storage infrastructure, no matter how well it solved the initial problem, may create a storage debt that eventually must be repaid, or service levels will suffer.

Another thing about technology debt: there is no completely free lunch. Every IT decision, including storage decisions, even the good ones, eventually adds to the technology debt at some level. The goal is to identify the storage decisions that incur the least debt and avoid the ones that incur the most.

For example, getting locked into a vendor or a technology that has no future obviously will create a serious storage debt. But what about a decision to use SSD to boost IOPS as opposed to a decision to throw more spindles at the IOPS challenge? Same with backup to disk (B2D): does it create more or less storage debt than tape backup?
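
To see why the SSD-versus-spindles question matters, here is a rough IOPS sizing comparison. The per-device figures are illustrative assumptions (roughly 180 IOPS for a 15K RPM drive, tens of thousands for an enterprise SSD), not vendor specifications.

```python
# Rough sizing: how many devices to hit an IOPS target? (Illustrative assumptions only.)
target_iops = 50_000

hdd_iops_each = 180        # assumed figure for a 15K RPM enterprise drive
ssd_iops_each = 30_000     # assumed figure for an enterprise SSD of the era

hdd_needed = -(-target_iops // hdd_iops_each)   # ceiling division
ssd_needed = -(-target_iops // ssd_iops_each)

print(f"HDD spindles needed: {hdd_needed}")     # hundreds of spindles
print(f"SSDs needed:         {ssd_needed}")     # a handful of devices
```

Meeting an IOPS target with spindles means buying, powering, and managing hundreds of drives you may not need for capacity, and that overhang is exactly the kind of storage debt the question is getting at.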

Something else to consider: does storage debt result only from storage hardware, software, and firmware decisions, or should you take into account the skills and labor involved? You might gradually realize you have locked yourself into a staff with obsolete skills. Then what: fire them all, undertake widespread retraining, or quit and leave it for the next manager?

And then there is the cloud. The cloud today must be factored into every technology discussion. How does storage in the cloud impact storage debt? It certainly complicates the calculus.

It’s easy to accumulate storage debt but it’s also possible to lower your organization’s storage debt. Here are some possibilities:

  • Aim for simple storage architectures
  • Standardize on a small set of proven products from solid vendors
  • Virtualize storage
  • Maximize the use of tools with a GUI to simplify management
  • Standardize on products certified in compliance with SMI-S for broader interoperability
  • Selectively leverage cloud storage
  • Use archiving, deduplication, and thin provisioning to minimize the amount of data you store

Sticking with one or two of the leading storage vendors (IBM, EMC, HP, Dell, NetApp, Symantec, Hitachi, StorageTek) is a generally safe bet, but it too can add to your storage debt.

You’re just not going to eliminate the storage debt, especially not in an environment where the demand for more and different storage is increasing every year.  The best you can strive for is to minimize the storage debt and restrain its growth.

HP and Dell Lead Latest Rush to Cloud Services

HP last week made its first public cloud services available as a public beta. This advances the company’s Converged Cloud portfolio with an open source-based public cloud infrastructure designed to enable developers, independent software vendors (ISVs), and enterprises of all sizes to build the next generation of web applications.

These services, HP Cloud Compute, Storage, and HP Cloud Content Delivery Network, will now be offered through a pay-as-you-go model. Built with OpenStack technology, the open source-based architecture avoids vendor lock-in, improves developer productivity, features a full stack of easy-to-use tools for faster time to code, provides access to a rich partner ecosystem, and is backed by personalized customer support.

Last week Dell also joined the cloud services rush with an SAP cloud services offering. Although Dell has been in the services business at least since its acquisition of Perot Systems a few years back, services for SAP and the cloud are indeed new, explained Burk Buechler, Dell’s service portfolio director.

Dell offers two cloud SAP services. The first is the Dell Cloud Starter Kit for SAP Solutions, which helps organizations get started on their cloud journey quickly. It gives customers 60-day access to Dell’s secure cloud environment, with compute power equivalent to 8,000 units of the SAP Application Performance Standard (SAPS) measure, coupled with Dell’s consulting, application, and infrastructure services in support of SAP solutions.

The second is the Dell Cloud Development Kit for SAP Solutions, which provides access to 32,000 SAPS of virtual computing capacity to deploy development environments for more advanced customers who need a rich development landscape for running SAP applications. This provides a comprehensive developer environment with additional capabilities for application modernization and features industry-leading Dell Boomi technology for rapid cross-application integration.

Of the latest two initiatives, HP’s is the larger. Nearly 40 companies have announced their support for HP Cloud Services, from Platform-as-a-Service (PaaS) partners to storage, management and database providers. The rich partner ecosystem provides customers with rapid access to an expansive suite of integrated cloud solutions that offer new ways to become more agile and efficient. The partner network also provides a set of tools, best practices and support to help maximize productivity on the cloud. This ecosystem of partners is a step along the path to an HP Cloud Services Marketplace, where customers will be able to access HP Cloud Services and partner solutions through a single account.

Of course, there are many other players in this market. IBM staked out cloud services early with a variety of IBM SmartCloud offerings. Other major players include Oracle, Rackspace, Amazon’s Elastic Compute Cloud (EC2), EMC, Red Hat, Cisco, NetApp, and Microsoft. It is probably safe to say that eventually every major IT vendor will offer cloud services capabilities. And those that don’t will have partnerships and alliances with those that do.

Going forward every organization will include a cloud component as some part of their IT environment. For some, it will represent a major component; for others cloud usage will vary as business and IT needs change. There will be no shortage of options, something to fit every need.

Big Data Analytics Defines Top Performers

A survey of over 1,100 executives by the IBM Center for Applied Insights showed that organizations making extensive use of analytics experienced up to 1.6x the revenue growth, 2.0x the EBITDA growth, and 2.5x the stock price appreciation of their peers. And what they are analyzing is Big Data, a combination of structured data found in conventional relational databases and unstructured data pouring in from widely varied sources.

Big Data is growing fast. By 2015 the digital universe, as forecast by IDC, will hit 8 zettabytes (ZB); 1 ZB = 10²¹ bytes, one sextillion bytes. Adding to the sheer volume is the remarkable velocity at which data is created. Every minute 600 new blog posts are published and 34,000 Twitter tweets are sent. If some of that data is about your organization, brand, products, customers, competitors, or employees, wouldn’t you want to know?

Big data involves both structured and unstructured data.  Traditional systems contain predominantly structured data. Unstructured data comes from general files; from smart phones and mobile devices; from social media like Twitter, Facebook, and others; from RFID tags and other sensors and meters; and even from video cameras. All can be valuable to organizations in particular contexts.

Large organizations, of course, can benefit from Big Data, but midsize and small businesses can too.  A small chain of pizza shops needs to know the consumer buzz about their pizza as much as Domino’s.

IBM describes a 4-step process for tapping the value of Big Data: align, anticipate, act, and learn. The goal is to make the right decision at the point of maximum impact. That might be when the customer is on the phone with a sales agent or when the CFO is about to negotiate the details of an acquisition.

Align addresses the need to identify your data sources and plan how you are going to collect and organize the data. It will involve your structured databases as well as the wide range of enterprise content from unstructured sources. Anticipate addresses data analytics and business intelligence with the goal of predicting and shaping outcomes; it focuses on identifying and analyzing trends, making hypotheses, and testing predictions. Act is the part where you put the data into action, whether that is making the best decision or taking advantage of a new pattern you have uncovered. But it doesn’t stop there. Another payoff from Big Data comes from the ability to learn, refining your analytics and identifying new patterns based on subsequent data.

Big Data needs to be accompanied by appropriate tools and technology. Earlier this month, IBM introduced three task-specific Smarter Analytics Signature Solutions. The first addresses fraud, waste, and abuse, using sophisticated analytics to recommend the most effective remedy for each case. For example, it might recommend a letter requesting payment in one case but suggest a full criminal investigation in another.

The second Signature Solution focuses on next best action. It looks at a variety of data and uses real-time analytics to predict customer behavior and preferences, then recommends the next best action to take with a customer, such as reducing churn or up-selling.

The third Signature Solution, dubbed CFO Performance Insight, works on a collection of complex and cross-referenced internal and external data sets using predictive analytics to deliver increased visibility and control of financial performance along with predictive insights and root-cause analyses. These are delivered via an executive-style dashboard.

IBM isn’t the only vendor to jump on the Big Data bandwagon. EMC has put a stake in this market. Oracle, which has been stalking IBM for years, also latched onto Big Data through Exalytics, its in-memory analytics product similar to IBM’s Netezza. Of course, smaller players like Cloudera, which early on staked out Hadoop, the key open source component of Big Data, also offer related products and services.

Big Data analytics will continue as an important issue for some years to come. This blog will return to it time and again.

Predictive Analysis on the Mainframe

Real time, high volume predictive analysis has become a hot topic with the growing interest in Big Data. The consulting firm McKinsey addresses the growth of Big Data here. McKinsey’s gurus note that advancing technologies and their swift adoption are upending traditional business models. BottomlineIT also took up Big Data in October 2011.

With that in mind, IBM has been positioning the zEnterprise, its latest mainframe, for a key role in data analysis. To that end, it acquired SPSS and Cognos and made sure they ran on the mainframe. The growing interest in Big Data and real-time data analytics, fueled by reports like the McKinsey study above, only affirmed IBM’s belief that, as far as data analytics goes, the zEnterprise is poised to take the spotlight. This is not completely new; BottomlineIT’s sister blog, DancingDinosaur, addressed it back in October 2009.

For the last several decades, people would have laughed if a CIO suggested a mainframe for data analysis beyond standard canned system reporting. For ad-hoc querying, multi-dimensional analysis, and data visualization you needed distributed systems running a variety of specialized GUI tools. Still, the resulting queries could take days to run.

In a recent analyst briefing, Alan Meyer, IBM’s senior manager for Data Warehousing on the System z, built the case for a different style of data analysis on the zEnterprise. He drew a picture of companies needing to make better informed decisions at the point of engagement while applications and business users increasingly are demanding the latest data faster than ever. At the same time there is no letup in pressure to lower cost, reduce complexity, and improve efficiency.

So what’s stopping companies from doing near real-time analytics and the big data thing? The culprits, according to Meyer, are duplicate data infrastructures, the complexity of integrating multiple IT environments, inconsistent security, and insufficient processing power, especially when having to handle large volumes of data fast. The old approach clearly is too slow and costly.

The zEnterprise, it turns out, is an ideal vehicle for today’s demanding analytics. It is architected for on-demand processing through pre-installed capacity that is paid for only when activated, and it allows the addition of processors, disk, and memory without taking the system offline. Virtualized top to bottom, the zEnterprise delivers the desired isolation, while prioritization controls let you identify the most critical queries and workloads. Its industry-leading processors ensure that the most complex queries run fast, and low latency enables near real-time analysis. Finally, multiple deployment options mean you can start with a low-end z114 and grow to a fully configured z196 combined with a zBX loaded with blades.

Last October the company unveiled the IBM DB2 Analytics Accelerator (IDAA), a revamped version of the Smart Analytics Optimizer available only for the zEnterprise, along with a host of other analytics tools under its smarter computing banner. But the IDAA is IBM’s analytics crown jewel. The IDAA incorporates Netezza, an analytics engine that speeds complex analytics through in-memory processing combined with a highly intelligent query optimizer. When run in conjunction with DB2, also residing on the zEnterprise, the results can be astonishing, with queries that normally require a few hours completed in just a few seconds, 1,000 times faster according to some early users.

Netezza, when deployed as an appliance, streamlines database performance through hardware acceleration and optimization for deep analytics, multifaceted reporting, and complex queries. When embedded in the zEnterprise, it delivers the same kind of performance for mixed workloads—operational transaction systems, data warehouse, operational data stores, and consolidated data marts—but with the z’s extremely high availability, security, and recoverability. As a natural extension of the zEnterprise, where the data already resides in DB2 and the OLTP systems, IDAA is able to deliver pervasive analytics across the organization while further speeding performance and ease of deployment and administration.

Of course, IT has other options: EMC now offers its Greenplum data analytics appliance and Oracle just released its Big Data Appliance. Neither requires a mainframe. But when the latest general-purpose mainframe starts at $75,000 and can do much more than any appliance, maybe it’s time to consider one. The value of the predictive, near real-time business analytics alone could justify it.
