Internet of Things Delivers Business Value Now

The Internet of Things (IoT) and the economic ecosystem surrounding it are expected to become an $8.9 trillion market by 2020, according to published IDC reports. And IDC may be conservative: Cisco estimates that IoT will drive $14.4 trillion in economic value (increased revenues and reduced costs) by then.

Business, however, doesn’t have to wait nearly that long. The IoT reportedly enabled private-sector businesses worldwide to generate at least $613 billion in profits in 2013, according to a study Cisco released in June 2013.

With numbers like that, expect every enterprise IT systems vendor to jump on the bandwagon: not only Cisco but IBM, Oracle, HP, and more. BottomlineIT last covered IoT six months ago here. Enterprise IT should begin following IoT closely; it promises to put IT at the center of a substantial future revenue stream.

The Cisco study reinforced exactly this point: Corporations could nearly double profits through greater adoption of business practices, customer approaches, and technologies that leverage IoT. Furthermore, it estimated that an additional $544 billion could be realized if companies simply adjusted their strategies to better leverage it. That’s where the CIO and IT come in.

The IoT isn’t exactly new. It has been around for years, mainly as private networks of sensors or machines wired to feed very specific data to backend systems for capture and analysis. More recently these systems went under the name of machine-to-machine (M2M) systems. Today’s IoT consists of a combination of sensors, technology, and networking coming together to allow systems, buildings, infrastructure, and other resources to capture and swap information.
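
To make the M2M pattern above concrete, here is a minimal sketch in Python of sensors feeding a backend for capture and analysis. The sensor IDs, field names, and the BackendCollector class are illustrative inventions, not any particular product's API.

```python
import random
import statistics
from datetime import datetime, timezone

def read_sensor(sensor_id: str) -> dict:
    """Simulate one reading from a networked temperature sensor (hypothetical device)."""
    return {
        "sensor_id": sensor_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature_c": round(random.uniform(18.0, 27.0), 2),
    }

class BackendCollector:
    """Stand-in for the backend system an M2M network feeds for capture and analysis."""

    def __init__(self):
        self.readings = []

    def ingest(self, reading: dict) -> None:
        self.readings.append(reading)

    def summarize(self) -> dict:
        temps = [r["temperature_c"] for r in self.readings]
        return {"count": len(temps), "avg_temp_c": round(statistics.mean(temps), 2)}

if __name__ == "__main__":
    backend = BackendCollector()
    for sensor in ("hvac-01", "hvac-02", "dock-03"):  # three hypothetical building sensors
        for _ in range(5):
            backend.ingest(read_sensor(sensor))
    print(backend.summarize())
```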

Bill Bien, Partner, Waterstone Management Group LLC, Chicago, suggests that technology executives think of IoT as “the next generation of the Internet, whereby objects interact, potentially independently, with each other and with their environment. It is the combination of distributed information processing, pervasive wireless networking, and automatic identification deployed inexpensively and widely.”  This will produce, in turn, “profound change that enables the extraction of value and analytics from distributed systems across enterprise and industrial value chains,” he adds.

Given the above, Bien continues, it is imperative for technology executives to understand the impact of IoT from two perspectives:


  1. Monetization opportunities offered by IoT in target markets (external focus)
  2. Optimization by improving the efficiency of their own supply chains and distribution channels (internal focus)

To put the conceptual IoT into practice, Bien’s organization analyzed 40 IoT use cases according to the type of distribution, supply chain, and customer information they generate, classifying current and future deployments into four groups: control hubs, value chain transformation, monitoring & assessment, and decision support. To further frame the value creation opportunities IoT offers, he recommends a use case-driven approach to understanding and prioritizing the likely business impact of those use cases. See the following chart, Categorization of Internet of Things Use Cases.

[Chart: Categorization of Internet of Things Use Cases]

As a CIO, you can tap the IoT, for starters, to monitor and assess your business’s infrastructure and whatever other objects populate the enterprise. From there, expand your thinking to the organization’s supply chain and any other value networks your organization maintains. Your initial emphasis will be on the left column of the use case chart above. Eventually, you will want to add the right column, where the bigger potential payoffs reside.

Finally, as your organization embeds intelligence into the products it sells, consider creating an IoT around your own products to understand how they are used in order to create better designs, understand your customers more deeply, and improve operational efficiencies. Then, beyond just collecting deployment and usage data from your product IoT, you can leverage that usage and performance data to offer value-added services that both differentiate your products and potentially generate added revenue (see the sketch below). The ultimate payoff will vary by industry and by the size of your distribution and supply chain networks, but by all accounts it will be significant.
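
A simple illustration of that last point: the sketch below flags products whose usage hours or error counts suggest offering a proactive service visit. The UsageRecord fields and the thresholds are assumptions made for the example, not anything prescribed here.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """One product's telemetry rollup, as a hypothetical product IoT might report it."""
    product_id: str
    hours_in_service: float
    error_count: int

def flag_for_proactive_service(records, hours_threshold=5000, error_threshold=25):
    """Return product IDs whose usage or error pattern suggests a value-added service offer."""
    return [
        r.product_id
        for r in records
        if r.hours_in_service >= hours_threshold or r.error_count >= error_threshold
    ]

if __name__ == "__main__":
    fleet = [
        UsageRecord("pump-1001", 5400.0, 3),   # heavily used: candidate for a maintenance offer
        UsageRecord("pump-1002", 1200.0, 40),  # error-prone: candidate for a support offer
        UsageRecord("pump-1003", 800.0, 2),    # healthy: no offer
    ]
    print(flag_for_proactive_service(fleet))
```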


The Data Center’s Hybrid Cloud Future

Nearly half of large enterprises will have hybrid cloud deployments by the end of 2017. Hybrid cloud computing is at the same place today that private cloud was three years ago: actual deployments are low, but aspirations are high, according to a Gartner news note published in October 2013.

Almost every organization today uses some form of cloud computing, usually public clouds. In its State of the Cloud survey, RightScale found that 94% of organizations are running applications or experimenting with infrastructure-as-a-service, 87% are using public cloud, and 74% of enterprises have a hybrid cloud strategy, with more than half of those already using both public and private cloud. RightScale’s estimates may be a bit more generous than Gartner’s, but both reach the same conclusion: hybrid cloud is the approach of choice.

Executive management, however, prefers private clouds for the control and security they promise. The actual control and security may be no better than what the organization could achieve in a properly implemented public cloud, but executives don’t always see it that way. So private clouds are management’s darling for now.

Private clouds, however, fail to deliver the key benefits of cloud computing—cost efficiency and business agility. The organization still has to invest in all the IT resources, capacity, and capabilities of the private cloud. Unlike the public cloud, these are not shared resources. The organization may repurpose some of its existing IT investment for the private cloud, but it invariably will have to acquire additional IT resources and capacity as before. And the demand for resources may increase as business units come to like IT-as-a-Service (ITaaS), the rationale for private clouds in the first place.

As for business agility with private clouds—forget it. If new capabilities are required to meet a new business requirement, the organization will have to build or acquire that capability as it did before. The backlogs for developing new capabilities do not magically go away with ITaaS and private clouds. If business agility requires the business to pivot on a moment’s notice to meet new challenges and opportunities, there is only one way the private cloud can do it: develop the capability in-house, the old-fashioned way.

Hybrid clouds provide the answer. Gartner, Inc. defines a hybrid cloud as a cloud computing service composed of some combination of private, public, and community cloud services from different service providers. In the hybrid cloud scenario, the company can rely on its private cloud and happily cruise along until it needs a capability or resource the private cloud can’t deliver. Then the company reaches out through the hybrid cloud to the public cloud for the required capability. Rather than build it, the organization basically rents the capability, paying only for what it uses when it uses it. This is ideal when the organization needs to temporarily augment resources, capacity, or capabilities to meet an unanticipated need.
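
A minimal sketch of that placement logic, assuming a simple capacity threshold: workloads run on the private cloud while it has room, and only the overflow is rented from the public cloud on a pay-per-use basis. The vCPU figures and the per-vCPU-hour rate are invented for illustration, not a real provider's pricing.

```python
def place_workload(requested_vcpus: int, private_free_vcpus: int) -> str:
    """Run on the private cloud if it has room; otherwise burst to the public cloud."""
    return "private" if requested_vcpus <= private_free_vcpus else "public (burst)"

def burst_cost(overflow_vcpu_hours: float, rate_per_vcpu_hour: float = 0.05) -> float:
    """Pay-per-use cost of the overflow only (illustrative rate)."""
    return round(overflow_vcpu_hours * rate_per_vcpu_hour, 2)

if __name__ == "__main__":
    private_free = 128
    for demand in (40, 120, 400):
        print(f"{demand} vCPUs -> {place_workload(demand, private_free)}")
    # A 400-vCPU spike held for 72 hours overflows the 128 free vCPUs in house:
    print("burst cost: $", burst_cost((400 - 128) * 72))
```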

Hybrid clouds, unfortunately, don’t just pop up overnight. First you need to lay the groundwork for your hybrid cloud. That entails identifying the specific cloud resources and services in advance, making the necessary financial arrangements with the appropriate public cloud vendors, and establishing and testing the connections. Also, check with your auditors, who will want assurances about security, governance, and similar details.

While you are at it, make sure your networking and security teams are on board. Ports will need to be opened; the firewall gods will need to be appeased. You also will need to think about how these new capabilities and services will integrate with the capabilities and services you already have. This isn’t necessarily a major undertaking as IT projects go but will take some time—days or, more likely, a few weeks—to get the approvals, assemble all the pieces, and get them configured and tested and ready to deploy.
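
Part of that groundwork can be scripted. The sketch below, using only Python's standard library, checks whether outbound connections to the public cloud endpoints your hybrid setup depends on actually get through the firewall; the hostnames are placeholders to replace with your provider's.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder endpoints; substitute the storage and API hosts of your public cloud vendor.
    checks = [("storage.example-cloud.com", 443), ("api.example-cloud.com", 443)]
    for host, port in checks:
        status = "open" if port_reachable(host, port) else "blocked or unreachable"
        print(f"{host}:{port} -> {status}")
```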

As RightScale notes, although the use of cloud is a given, enterprises often have different strategies involving varying combinations of public, private, and hybrid cloud infrastructure. For most, however, the hybrid cloud provides the best of all cloud worlds, especially in terms of cost and agility. You can run ITaaS from your private cloud and pass through your hybrid cloud whenever you need public cloud resources you don’t have in house. Just make sure you set it up in advance so it is ready to go when you need it.


Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers compared with the public cloud, and at a somewhat higher VM count compared with x86 machines. View the IBM z TCO presentation here.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, and other qualities long associated with the z, and IBM reports it can scale to 6,000 VMs. It is not clear how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, an x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found a 34-73% advantage for the z Enterprise Cloud System. The z cost considerably more in terms of hardware, but it more than made up for that in software, labor, and power. Overall, the TCO examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37M, x86 came in at $18.3M, and the Cloud on z cost $9.4M. With 48 workloads, the z again came in with the lowest TCO at $1M, compared to $1.6M for x86 systems and $3.9M for the public cloud.
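
The roll-up itself is simple arithmetic: sum each platform's hardware, software, labor, and power/space costs over the term. The per-category figures below are invented purely to illustrate the pattern IBM describes (z hardware costs more, software and labor cost far less); only the rough totals echo the 48-workload results cited above.

```python
def three_year_tco(costs: dict) -> float:
    """Sum the cost categories used in the comparison (all figures in $ millions)."""
    return sum(costs.values())

if __name__ == "__main__":
    # Hypothetical category breakdowns, not IBM's actual inputs.
    platforms = {
        "public cloud": {"hardware": 0.4, "software": 1.2, "labor": 1.5, "power_space": 0.8},
        "x86 cloud": {"hardware": 0.5, "software": 0.6, "labor": 0.4, "power_space": 0.1},
        "cloud on z": {"hardware": 0.7, "software": 0.2, "labor": 0.1, "power_space": 0.0},
    }
    for name, costs in platforms.items():
        print(f"{name}: ${three_year_tco(costs):.1f}M over 3 years")
```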

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices, or a different mix of high-, mid-, and low-I/O workloads, your results will differ, but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there, hanging around the bloggers’ lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog


New Enterprise Systems Maturity Model

Does your shop use maturity models to measure where you stand and where you should be going compared with industry trends and directions? Savvy IT managers would often use such a model to pinpoint where their organization stood on a particular issue as part of their pitch for an increased budget to hire more people or acquire newer, faster, greater IT capabilities.

Today maturity models are still around but they are more specialized now. There are, for instance, maturity models for security and IT management. Don’t be surprised to see maturity models coming out for cloud or mobile computing if they are not already here.

Earlier this year, Compuware introduced a new maturity model for the enterprise data center.  You can access it here. Compuware describes the new maturity model as one that helps organizations assess and improve the processes for managing application performance and costs as distributed and mainframe systems converge.

Why now? The data center and even the mainframe have been changing fast with the advent of cloud computing, mobile, and big data/analytics. Did your enterprise data center team ever think they would be processing transactions from mobile phones or running analytic applications against unstructured social media data? Did they ever imagine they would be handling compound workloads across mainframes and multiple distributed systems running both Windows and Linux?  Welcome to the new enterprise data center normal.

Maybe the first difference you’ll notice in the new maturity model is the new types of people populating the enterprise data center. Now you need to accommodate distributed and open systems along with the traditional mainframe environment. That requires bringing together completely different teams and integrating them. Throw in mobile, big data, analytics, and social and you have a vastly different reality than you had even a year ago. And with that comes the need to bridge the gap that has long existed between the enterprise (mainframe) and distributed data center teams. This is a cultural divide that will have to be navigated, and the new enterprise IT maturity model can help.

The new data center normal, however, hasn’t changed data center economics, except maybe to exacerbate the situation. The data center has always been under pressure to rein in costs and use resources, both CPUs and MIPS, efficiently. Those pressures are still there, only more so, because the business is relying on the data center more than ever before as IT becomes increasingly central to the organization’s mission.

Similarly, the demand for high levels of quality of service (QoS) not only continues unabated but is expanding. The demand for enterprise-class QoS now extends to compound workloads that cross mainframe and distributed environments, leaving the data center scrambling to meet new end user experience (EUE) expectations even as it pieces together distributed system QoS work-arounds. The new enterprise IT maturity model will help blend these two worlds and address the more expansive role IT is playing today.

To do this the model combines distributed open systems environments with the mainframe and recognizes different workloads, approaches, processes, and tooling. It defines five levels of maturity: 1) ad hoc, 2) technology-centric, 3) internal services-centric, 4) external services-centric, and 5) business-revenue centric.

Organizations at the ad hoc level, for example, primarily use the enterprise systems to run core systems and may still employ a green screen approach to application development. At the technology-centric level, there’s an emphasis on infrastructure monitoring to support increasing volumes, higher capacity, complex workload and transaction processing along with greater MIPS usage. As organizations progress from internal services-focused to external services-focused, mainframe and distributed systems converge and EUE and external SLAs assume a greater priority.

Finally, at the fifth or business centric level, the emphasis shifts to business transaction monitoring where business needs and EUE are addressed through interoperability of the distributed systems and mainframes with mobile and cloud systems. Here technologies provide real-time transaction visibility across the whole delivery chain, and IT is viewed as a revenue generator. That’s the new enterprise data center normal.
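
For teams that want to track where they sit, the five levels can be captured in something as simple as the sketch below. The enum names mirror the levels listed above; the next_step helper is only an illustrative convenience, not part of the model itself.

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five levels of the enterprise systems maturity model, in ascending order."""
    AD_HOC = 1
    TECHNOLOGY_CENTRIC = 2
    INTERNAL_SERVICES_CENTRIC = 3
    EXTERNAL_SERVICES_CENTRIC = 4
    BUSINESS_REVENUE_CENTRIC = 5

def next_step(current: MaturityLevel) -> MaturityLevel:
    """Return the next level to target, or the current one if already at the top."""
    return MaturityLevel(min(current + 1, MaturityLevel.BUSINESS_REVENUE_CENTRIC))

if __name__ == "__main__":
    here = MaturityLevel.TECHNOLOGY_CENTRIC
    print(f"current: {here.name}, next target: {next_step(here).name}")
```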

In short, the new enterprise maturity model requires that enterprise and distributed computing come together and that all staff work closely together, and that proprietary systems and open systems interoperate seamlessly. And there is no time for delay. Already, DevOps, machine-to-machine computing (the Internet of Things), and other IT strategies, descendants of agile computing, are gaining traction while smart mobile technologies drive the next wave of enterprise computing.


The Internet of Things Gains Traction

The Internet of Things (IoT) appears finally to be gaining real traction, with both Gartner and IDC putting out reports on it. The opportunity, however, can best be understood in terms of vertical applications because the value of IoT is based on individual use cases across all verticals. “Successful sales and marketing efforts by vendors will be based on understanding the most lucrative verticals that offer current growth and future potential and then creating solutions for specific use cases that address industry-specific business processes,” said Scott Tiazkun, senior research analyst, IDC’s Global Technology and Industry Research Organization. Similarly, enterprise IT needs to understand which vertical use cases will benefit first and most.

Tiazkun was referring to IDC’s latest Worldwide Internet of Things Spending by Vertical Market 2014-2017 Forecast. To tap that market, IDC advises consultants to focus on the individual vertical opportunities that arise from IoT already in play. Here is where an IT exec with vertical business savvy can win. As IDC noted, recognizing that the vertical opportunity exists is the first step to understanding the impact and, therefore, the IoT market opportunity it presents for enterprises, IT vendors, and consultants.

The idea of IoT has been kicking around for years. BottomlineIT wrote about it early in 2011 here. It refers to the idea of embedding intelligence into things in the form of computer processors and making them IP addressable. Linking them together over a network gives you IoT. The idea encompasses almost anything from the supply chain to consumer interests. Smart appliances, devices, and things of all sorts can participate in IoT. RFID, all manner of sensors and monitors, big data, and real-time analytics play into IoT.

In terms of dollars, IoT is huge. Specifically, IDC has found:

  • Technology and services revenue from the components, processes, and IT support for IoT is expected to expand from $4.8 trillion in 2012 to $7.3 trillion by 2017, an 8.8% compound annual growth rate (CAGR), with the greatest opportunity initially in the consumer, discrete manufacturing, and government vertical industries (a quick arithmetic check follows this list).
  • The IoT/machine-to-machine (M2M) market is growing quickly, but the development of this market will not be consistent across all vertical markets. Industries that already understand IoT will see the most immediate growth, such as industrial production/automotive, transportation, and energy/utilities. However, all verticals eventually will reflect great opportunity.
  • IoT is a derivative market containing many elements, including horizontal IT components as well as vertical and industry-specific IT elements. It is these vertical components where IT consultants and vendors will want to distinguish themselves to address industry-specific IoT needs.
  • IoT also opens IT consultants and vendors to the consumer market by providing business-to-business-to-consumer (B2B2C) services to connect and run homes and automobiles – all places where electronic devices increasingly will have networking capability.
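
A quick arithmetic check of that headline forecast: compounding the 2012 base at the stated 8.8% annual rate for the five years to 2017 does land at roughly $7.3 trillion.

```python
base_2012_trillions = 4.8
cagr = 0.088
years = 2017 - 2012

projected_2017 = base_2012_trillions * (1 + cagr) ** years
print(f"${projected_2017:.2f} trillion")  # about $7.32 trillion, matching the ~$7.3T forecast
```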

Already, leading vendors are positioning themselves for the IoT market. To Oracle, IoT brings tremendous promise to integrate every smart thing in this world. Cisco, too, jumped on the IoT bandwagon early, dubbing it the Internet of Everything.

IBM gets almost cosmic about IoT, which it describes as the emergence of a kind of global data field. The planet itself—natural systems, human systems, physical objects—has always generated an enormous amount of data, but until recent decades we weren’t able to hear, see, or capture it. Now we can, because all of these things have been instrumented with microchips, UPC codes, and other technologies. And they’re all interconnected, so now we can actually access the data. Of course, this dovetails with IBM’s Smarter Planet marketing theme.

Enterprise IT needs to pay close attention to IoT too. First, it will change the dynamics of your network, affecting everything from network architecture to bandwidth to security. Second, once IT starts connecting the various pieces together, it opens interesting new possibilities for using IT to advance business objectives and even generate revenue. It can help you radically reshape the supply chain, the various sales channels, partner channels, and more. It presents another opportunity for IT to contribute to the business in substantive business terms.

IDC may have laid out the best roadmap to IoT for enterprise IT. According to IDC, the first step will be to understand the components of the IoT/M2M IT ecosphere. Because this is a derivative market, there are many opportunities for vendors and consultants to offer pieces, product suites, and services that cover the needed IoT technology set. Just make sure this isn’t only about products. Make sure services, strategies, integration, and business execution are foremost. That’s how you’ll make it all pay off.

The promise of IoT seems open ended. Says Tiazkun: “The IoT solutions space will expand exponentially and will offer every business endless IoT-focused solutions. The initial strategy of enterprise IT should be to avoid choosing IoT-based solutions that will solve only immediate concerns and lack staying power.” OK, you’ve been alerted.

Follow BottomlineIT on Twitter: @mainframeblog


Fueled by SMAC, Tech M&A Activity to Heat Up

Corporate professional services firm BDO USA  polled approximately 100 executives of U.S. tech outfits for its 2014 Technology Outlook Survey and found them firm in the belief that mergers and acquisitions in tech would either stay at the same rate (40%) or increase over last year (43%). And this isn’t a recent phenomenon.

M&A has been widely adopted across a range of technology segments as the vehicle not only to drive growth but, more importantly, to remain at the leading edge of a rapidly changing business and technology environment spurred by cloud and mobile computing. And fueling this M&A wave is SMAC (Social, Mobile, Analytics, Cloud).

SMAC appears to be triggering a scramble among large, established blue chip companies like IBM, EMC, HP, Oracle, and more to acquire almost any promising upstart out there. Their fear: becoming irrelevant, especially among the young, most highly sought demographics.  SMAC has become the code word (code acronym, anyway) for the future.

EMC, for example, has evolved from a leading storage infrastructure player to a broad-based technology giant, driven by 70 acquisitions over the past 10 years. Since this past August, IBM has been involved in a variety of acquisitions amounting to billions of dollars. These acquisitions touch on everything from mobile networks for big data analytics and mobile device management to cloud services integration.

Google, however, probably should be considered the poster child for technology M&A. According to published reports, Google has been acquiring, on average, more than one company per week since 2010. The giant search and services company’s biggest acquisition to date has been the purchase of Motorola Mobility, a mobile device (hardware) manufacturer, for $12.5 billion. The company also purchased the Israeli startup Waze in June 2013 for almost $1 billion. Waze is a GPS-based application for mobile phones and has given Google a strong position in the mobile phone navigation business, even besting Apple’s iPhone for navigation.

Top management has embraced SMAC-driven M&A as the fastest, easiest, and cheapest way to achieve strategic advantage through new capabilities and the talent that developed those capabilities. Sure, these companies could recruit and build those capabilities on their own, but it could take years to bring a given feature to market that way, and by then, in today’s fast-moving competitive markets, the company would be doomed to forever playing catch-up.

Even with the billion-dollar and multi-billion-dollar price tags some of these upstarts command, strategic acquisitions like Waze, IBM’s SoftLayer, or EMC’s XtremIO have the potential to be game changers. That’s the hope, of course. But it can be risky, although the risk can be managed.

And the best way to manage SMAC merger risk is to have a flexible IT platform that can quickly absorb those acquisitions and integrate and share their information, along with, of course, a coherent strategy for leveraging the new acquisitions. What you need to avoid is ending up with a bunch of SMAC piece parts that don’t fit together.


Change-proof Your Organization

Many organizations are being whiplashed by IT infrastructure change—costly, disruptive, never-ending changes that hinder IT and the organization. You know the drivers: demand for cloud computing, mobile, social, big data, real-time analytics, and collaboration. Add soaring transaction volumes, escalating amounts of data, 24x7x365 processing, new types of data, proliferating forms of storage, incessant compliance mandates, and more. And there is no letup in sight.

IBM started to articulate this in a blog post, Infrastructure Matters. IBM was focusing on cloud and data, but the issues go even further. It is really about change-proofing, not just IT but the business itself.

All of these trends put great pressure on the organization, which forces IT to repeatedly tweak the infrastructure or otherwise revamp systems. This is costly and disruptive not just to IT but to the organization.

In short, you need to change-proof your IT infrastructure and your organization. And you have to do it economically and in a way you can efficiently sustain over time. The trick is to leverage some of the very same technology trends that are creating the change to design an IT infrastructure that can smoothly accommodate changes both known and unknown. Many of these we have discussed in BottomlineIT previously:

  • Cloud computing
  • Virtualization
  • Software defined everything
  • Open standards
  • Open APIs
  • Hybrid computing
  • Embedded intelligence

These technologies will allow you to change your infrastructure at will, altering your systems in any number of ways, often with just a few clicks or tweaks to code. In the process, you can eliminate vendor lock-in and the obsolete, rigid hardware and software that have distorted your IT budget, constrained your options, and increased your risks.

Let’s start by looking at just the first three listed above. As noted above, all of these have been discussed in BottomlineIT and you can be sure they will come up again.

You probably are using aspects of cloud computing to one extent or another. There are numerous benefits to cloud computing but for the purposes of infrastructure change-proofing only three matter:  1) the ability to access IT resources on demand, 2) the ability to change and remove those resources as needed, and 3) flexible pricing models that eliminate the upfront capital investment in favor of paying for resources as you use them.

Yes, there are drawbacks to cloud computing. Security remains a concern although increasingly it is becoming just another manageable risk. Service delivery reliability remains a concern although this too is a manageable risk as organizations learn to work with multiple service providers and arrange for multiple links and access points to those providers.

Virtualization remains the foundational technology behind the cloud. Virtualization makes it possible to deploy multiple images of systems and applications quickly and easily as needed, often in response to widely varying levels of service demand.
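
In practice that often reduces to a simple calculation: given current demand and the capacity a single image can handle, how many identical images should be running? The request rates and per-image capacity below are made-up numbers for illustration.

```python
import math

def images_needed(requests_per_second: float, capacity_per_image: float = 50.0) -> int:
    """Number of identical VM/application images to run for the current demand level."""
    return max(1, math.ceil(requests_per_second / capacity_per_image))

if __name__ == "__main__":
    for load in (30, 180, 900):
        print(f"{load} req/s -> {images_needed(load)} image(s)")
```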

Software defined everything also makes extensive use of virtualization. It inserts a virtualization layer between the applications and the underlying infrastructure hardware.  Through this layer the organization gains programmatic control of the software defined components. Most frequently we hear about software defined networks that you can control, manage, and reconfigure through software running on a console regardless of which networking equipment is in place.  Software defined storage gives you similar control over storage, again generally independent of the underlying storage array or device.
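
A toy sketch of that programmatic control: desired network state is declared as data and applied through a controller object, independent of whatever switches sit underneath. The VlanSpec fields and the SoftwareDefinedNetwork class are hypothetical, not a real SDN controller's API.

```python
from dataclasses import dataclass

@dataclass
class VlanSpec:
    """Desired state for one network segment, declared in software."""
    vlan_id: int
    name: str
    subnet: str

class SoftwareDefinedNetwork:
    """Toy controller: records desired state; a real controller would push vendor-specific calls."""

    def __init__(self):
        self.vlans = {}

    def apply(self, spec: VlanSpec) -> None:
        self.vlans[spec.vlan_id] = spec
        print(f"applied VLAN {spec.vlan_id} ({spec.name}) -> {spec.subnet}")

if __name__ == "__main__":
    controller = SoftwareDefinedNetwork()
    controller.apply(VlanSpec(110, "iot-sensors", "10.1.10.0/24"))
    controller.apply(VlanSpec(120, "analytics", "10.1.20.0/24"))
```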

All these technologies exist today at different stages of maturity. Start planning how to use them to take control of IT infrastructure change. The world keeps changing and the IT infrastructures of many enterprises are groaning under the pressure. Change-proofing your IT infrastructure is your best chance of keeping up.

