Posts Tagged mobile

New Enterprise Systems Maturity Model

Does your shop use maturity models to measure where you stand and where you should be going compared with industry trends and directions? Savvy IT managers would often use such a model to pinpoint where their organization stood on a particular issue as part of their pitch for an increased budget to hire more people or acquire newer, faster, greater IT capabilities.

Maturity models are still around today, but they are more specialized. There are, for instance, maturity models for security and for IT management. Don’t be surprised to see maturity models for cloud or mobile computing if they are not already here.

Earlier this year, Compuware introduced a new maturity model for the enterprise data center. You can access it here. Compuware describes the new maturity model as one that helps organizations assess and improve their processes for managing application performance and costs as distributed and mainframe systems converge.

Why now? The data center and even the mainframe have been changing fast with the advent of cloud computing, mobile, and big data/analytics. Did your enterprise data center team ever think they would be processing transactions from mobile phones or running analytic applications against unstructured social media data? Did they ever imagine they would be handling compound workloads across mainframes and multiple distributed systems running both Windows and Linux?  Welcome to the new enterprise data center normal.

Maybe the first difference you’ll notice in the new maturity model is the new types of people populating the enterprise data center. Now you need to accommodate distributed and open systems along with the traditional mainframe environment, which requires that you bring together completely different teams and integrate them. Throw in mobile, big data, analytics, and social, and you have a vastly different reality than you had even a year ago. And with that comes the need to bridge the gap that has long existed between the enterprise (mainframe) and distributed data center teams. This is a cultural divide that will have to be navigated, and the new enterprise IT maturity model can help.

The new data center normal, however, hasn’t changed data center economics, except maybe to exacerbate the situation. The data center has always been under pressure to rein in costs and use resources, both CPUs and MIPS, efficiently. Those pressures are still there, only more so, because the business is relying on the data center more than ever as IT becomes increasingly central to the organization’s mission.

Similarly, the demand for high levels of quality of service (QoS) not only continues unabated but is expanding. The demand for enterprise-class QoS now extends to compound workloads that cross mainframe and distributed environments, leaving the data center scrambling to meet new end user experience (EUE) expectations even as it pieces together distributed system QoS work-arounds. The new enterprise IT maturity model will help blend these two worlds and address the more expansive role IT is playing today.

To do this, the model combines distributed open systems environments with the mainframe and recognizes different workloads, approaches, processes, and tooling. It defines five levels of maturity: 1) ad hoc, 2) technology-centric, 3) internal services-centric, 4) external services-centric, and 5) business-revenue centric.

Organizations at the ad hoc level, for example, primarily use the enterprise systems to run core systems and may still employ a green-screen approach to application development. At the technology-centric level, the emphasis is on infrastructure monitoring to support increasing volumes, higher capacity, and complex workload and transaction processing, along with greater MIPS usage. As organizations progress from the internal services-centric to the external services-centric level, mainframe and distributed systems converge and EUE and external SLAs assume a greater priority.

Finally, at the fifth, business-revenue-centric level, the emphasis shifts to business transaction monitoring, where business needs and EUE are addressed through interoperability of the distributed systems and mainframes with mobile and cloud systems. Here technologies provide real-time transaction visibility across the whole delivery chain, and IT is viewed as a revenue generator. That’s the new enterprise data center normal.
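
For readers who want a quick self-assessment, the five levels map naturally onto a simple ordered scale. Here is a minimal illustrative sketch in Python; this is this blogger’s own illustration, not Compuware’s tooling, and all names are hypothetical:

from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five levels of the new enterprise systems maturity model."""
    AD_HOC = 1               # core systems, green-screen development
    TECHNOLOGY_CENTRIC = 2   # infrastructure monitoring, growing MIPS usage
    INTERNAL_SERVICES = 3    # internal SLAs; platforms begin to converge
    EXTERNAL_SERVICES = 4    # external SLAs and EUE take priority
    BUSINESS_REVENUE = 5     # business transaction monitoring; IT as revenue generator

def next_step(current: MaturityLevel) -> str:
    """Name the next maturity level to aim for, if any."""
    if current == MaturityLevel.BUSINESS_REVENUE:
        return "already at the top level"
    return MaturityLevel(current + 1).name

level = MaturityLevel.TECHNOLOGY_CENTRIC
print(f"Level {int(level)} ({level.name}); next: {next_step(level)}")

The point of modeling it as an ordered scale is that progress is cumulative: you don’t get to business-revenue centric without first converging your internal and external service delivery.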

In short, the new enterprise maturity model requires that enterprise and distributed computing come together, that all staff work closely together, and that proprietary systems and open systems interoperate seamlessly. And there is no time for delay. Already, DevOps, machine-to-machine computing (the Internet of Things), and other IT strategies, descendants of agile computing, are gaining traction while smart mobile technologies drive the next wave of enterprise computing.


Fueled by SMAC, Tech M&A Activity to Heat Up

Corporate professional services firm BDO USA polled approximately 100 executives of U.S. tech outfits for its 2014 Technology Outlook Survey and found them firm in the belief that mergers and acquisitions in tech would either stay at the same rate (40%) or increase over last year (43%). And this isn’t a recent phenomenon.

M&A has been widely adopted across a range of technology segments not only as the vehicle to drive growth but, more importantly, as the way to remain at the leading edge in a rapidly changing business and technology environment spurred by cloud and mobile computing. And fueling this M&A wave is SMAC (Social, Mobile, Analytics, Cloud).

SMAC appears to be triggering a scramble among large, established blue-chip companies like IBM, EMC, HP, Oracle, and others to acquire almost any promising upstart out there. Their fear: becoming irrelevant, especially among the young, most highly sought demographics. SMAC has become the code word (code acronym, anyway) for the future.

EMC, for example, has evolved from a leading storage infrastructure player to a broad-based technology giant driven by 70 acquisitions over the past 10 years. Since this past August IBM has been involved in a variety of acquisitions amounting to billions of dollars. These acquisitions touch on everything from mobile networks for big data analytics and mobile device management to cloud services integration.

Google, however, probably should be considered the poster child for technology M&A. According to published reports, Google has been acquiring, on average, more than one company per week since 2010. The giant search engine and services company’s biggest acquisition to date has been the purchase of Motorola Mobility, a mobile device (hardware) manufacturer, for $12.5 billion. The company also purchased an Israeli startup, Waze, in June 2013 for almost $1 billion. Waze, a GPS-based application for mobile phones, has brought Google a strong position in the mobile navigation business, even besting Apple’s iPhone for navigation.

Top management has embraced SMAC-driven M&A as the fastest, easiest, and cheapest way to achieve strategic advantage through new capabilities and the talent that developed those capabilities. Sure, these companies could recruit and build those capabilities on their own, but it could take years to bring a given feature to market that way, and by then, in today’s fast-moving competitive markets, the company would be doomed to forever playing catch-up.

Even with the billion-dollar and multi-billion-dollar price tags some of these upstarts are commanding, strategic acquisitions like Waze, IBM’s SoftLayer, or EMC’s XtremIO have the potential to be game changers. That’s the hope, of course. But it can be risky, although risk can be managed.

And the best way to manage SMAC merger risk is to have a flexible IT platform that can quickly absorb those acquisitions and integrate and share the information and, of course, a coherent strategy for leveraging the new acquisitions. What you need to avoid is ending up with a bunch of SMAC piece parts that don’t fit together.


Fresh Toys Help IT Open New Business Opportunities in 2014

Much of what to expect in IT for 2014 you have already glimpsed right here in BottomlineIT, although some of it will be startlingly new. In many cases the new stuff will address opportunities that businesses are just starting to consider. Some of these, like 3D-printing or the e-wallet, have the potential to radically change the way business operates.

Let’s start with what you already know: 2014 will be cloud everything. The cloud is being steadily absorbed into business DNA as it evolves into the predominant way companies go to market, relate to their customers and partners, find employees, and deliver increasing aspects of their products as online services.

Also expect the continued hyping of big data, especially the unstructured data found everywhere, and analytics, which is necessary to make sense of the data. In 2014 analytics will be augmented by real-time analytics and predictive analytics, both of which can indeed deliver measurable business value.

In 2014 everything will be virtualized, and in the process it will become defined by software. That means it will be programmable, allowing you to change its capabilities almost at will. Virtualized, software-defined capabilities will be in the products you acquire and the appliances you buy. Your next car will be software-defined and Internet (cloud) connected. Your video-enabled car will be able to park in a tighter space than you could park it yourself.

Mobile, in the form of smartphones and tablets, will be the devices of choice for more and more people worldwide. Your mobile device will increasingly handle your communications, shopping, purchasing, socializing, entertainment, and work tasks even as it takes over more of the functions of your wallet. Eventually the e-wallet will contain your identification, memberships, subscriptions, and credit and debit cards as security gets bolstered.

On to the completely new: business drones are coming, mainly in the form of smart, software-defined, programmable devices that can do errands. Basically they take robotics to a new level. Amazon hopes to use them to deliver items to your doorstep within hours of your purchase. What might your business do with a capability like this?

3D-printing is BottomlineIT’s favorite. Where the Internet disintermediated much of the traditional supply chain and distribution channel, 3D-printing can disintermediate manufacturers by producing the physical product at your desk. Now software-defined, mass-customizable products can be cost-effectively manufactured at scale for a market of just one. With 3D-printing you can deliver a customized version of your widget to a customer as readily as you send a fax. Can you make some money with that capability?

Finally, smart, wearable, cloud-connected computers are arriving in the form of wrist watches (remember the old Dick Tracy comics?) and eyewear. Google Glass will become increasingly commonplace. Exactly what the business value of Google Glass will be remains unclear. Right now you buy it for the extreme cool factor.

So expect new IT goodies around the digital Xmas tree to start arriving this year, and in quantity by the end of 2014. Some may be a bust; others may be late in coming. As CIO, your job is to figure out which of these can help you meet your organization’s business goals. Best wishes for 2014.


Five Reasons Businesses Use the Cloud that IT Can Live With

By 2016, cloud will matter more to business leaders than to IT, according to the IBM Center for Applied Insights. In fact, cloud’s strategic importance to business leaders is poised to double, from 34% to 72%. That’s more than among their IT counterparts, of whom only 58% acknowledge its strategic importance.

This shouldn’t be surprising. Once business leaders got comfortable with the security of the cloud it was just a matter of figuring out how to use it to lower costs or, better yet, generate more revenue faster. IT, on the other hand, recognized the cloud early on as a new form of IT outsourcing and saw it as a direct threat, which understandably dampened their enthusiasm.

IBM’s research—involving more than 800 cloud decision makers and users—painted a more business-friendly picture that showed the cloud able to deliver more than just efficiency, especially IT efficiency. Pacesetting organizations, according to IBM, are using cloud to gain competitive advantage through strategic business reinvention, better decision making, and deeper collaboration. And now the business results to prove it are starting to roll in. You can access the study here.

IT, however, needn’t worry about being displaced by the cloud. Business managers still lack the technical perspective to evaluate and operationally manage cloud providers. In addition, there will always be certain functions that are best kept on premises. These range from conformance with compliance mandates to issues with cloud latency to the need to maintain multiple sources of IT proficiency and capability to ensure business continuance. Finally, there is the need to assemble, maintain, and manage an entire ecosystem of cloud providers (IaaS, PaaS, SaaS, and others) and services like content distribution, network acceleration, and more. So rest assured: if you know your stuff, do it well, and don’t get greedy, the cloud is no threat.

From the study came five business reasons to use the cloud:

1) Better insight and visibility—this is the analytics story; 54% use analytics to derive insights from big data, 59% use it to share data, and 59% intend to use cloud to access and manage big data in the future

2) Easy collaboration—cloud facilitates and expedites cross-functional collaboration, which drives innovation and boosts productivity

3) Support for a variety of business needs—forging a tighter link between business outcomes and technology in areas like messaging, storage, and office productivity suites; to these you can add compute and business agility

4) Rapid development of new products and services—52% use the cloud to innovate products and services fast and 24% use it to offer additional products and services; anything you can digitize, anything with an information component, can be marketed, sold, and delivered via the cloud

5) Proven results—25% reported a reduction in IT costs due to the cloud, 53% saw an increase in efficiency, and 49% saw improvement in employee mobility.

This last point about mobility is particularly important. With the advent of the cloud, geography is no longer a constraining business factor. You can hire people anywhere and have them work anywhere. You can service customers anywhere. You can source almost any goods and services from anywhere. And IT can locate data centers anywhere too.

Yes, there are things for which direct, physical interaction is preferred. Despite the advances in telemedicine, most people still prefer an actual visit to the doctor; that is, unless a doctor simply is not accessible. Or take the great strides being made in online learning; in a generation or two the traditional ivy-covered college campus may be superfluous except, maybe, to host pep rallies and football games. But even if the ivy halls aren’t needed, the demand for the IT capabilities that make learning possible and enable colleges to function will only increase.

As BottomlineIT has noted many times, the cloud is just one component of your organization’s overall IT and business strategy. Use it where it makes sense and when it makes sense, but be prepared to alter your use of the cloud as changing conditions dictate. Change is one of the things at which the cloud is best.


Sorting Out the Data Analytics IT Options

An article from McKinsey & Company, a leading management consulting and research firm, declares: “By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep [data] analytical skills as well as [a shortage of] 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.” Might this be your shop?

Many companies today are scrambling to assemble an IT infrastructure to support data analytics. But before they can even begin, they have to figure out what kind of analytics the organization will want to deploy. Big data is just one of many possibilities, and the infrastructure that works for some types of data analytics won’t work for others.

Just off the top of the head this blogger can list a dozen types of data analytics in play: OLAP, business intelligence (BI), business analytics, predictive analytics, real-time analytics, big data analytics, social analytics, web analytics, clickstream analytics, mobile analytics, brand/reputation analysis, and competitive intelligence. You probably have a few of these already.

As advanced analytics picks up momentum, data center managers will be left trying to cobble together an appropriate IT infrastructure for whatever flavors of analytics the organization intends to pursue. Unless you have a very generous budget, you can’t do it all.

For example, big data is unbelievably hot right now, so maybe it makes sense to build an infrastructure to support big data analytics. But predictive analytics, the up-and-coming superstar of business analytics, is an equally hot capability due to its ability to counter fraud or boost online conversion immediately, while the criminal or customer is still online.

BI, along with OLAP, has been the analytics workhorse for many organizations for a decade or more, and companies already have a working infrastructure for it: a data warehouse with relational databases and common query, reporting, and cubing tools. For the most part, that infrastructure is already in place and working.

On the other hand, if top management now wants big data analytics, real-time analytics, or predictive analytics, you may need a different information architecture and design, different tools, and possibly even different underlying technologies. Big data, for example, relies on Hadoop, a batch processing framework that does not make use of SQL. (Vendors are making a valiant effort to graft a SQL-like interface onto Hadoop, with varying degrees of success.)
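
To make the batch-oriented, non-SQL nature of Hadoop concrete, here is a minimal sketch of a Hadoop Streaming job in Python: two small scripts that count transactions per customer by reading and writing plain text on stdin/stdout. The record layout and file names are hypothetical; the point is the batch mapper/reducer style, not any particular vendor’s SQL-on-Hadoop layer.

#!/usr/bin/env python
# mapper.py -- Hadoop Streaming mapper.
# Assumes hypothetical tab-delimited records with the customer ID
# in the first field; emits "customer_id<TAB>1" per transaction.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if fields and fields[0]:
        print(fields[0] + "\t1")

#!/usr/bin/env python
# reducer.py -- Hadoop Streaming reducer.
# Hadoop sorts mapper output by key before it arrives here, so we
# can accumulate counts until the key changes.
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, _, value = line.rstrip("\n").partition("\t")
    if key == current_key:
        count += int(value)
    else:
        if current_key is not None:
            print(f"{current_key}\t{count}")
        current_key, count = key, int(value)

if current_key is not None:
    print(f"{current_key}\t{count}")

A job like this gets submitted through the hadoop-streaming jar, with -mapper and -reducer pointing at the two scripts. It then runs as a batch pass over the entire data set, which is exactly why it suits huge data volumes and not interactive queries.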

Real-time analytics is just that—real-time—basically the opposite of Hadoop. It works best using in-memory data and logic processing to speed the results of analytic queries in seconds or even microseconds. Data will be stored on flash storage or in large amounts of cache memory as close to the processing as it can get.
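
By contrast, here is a minimal sketch of the in-memory pattern (again this blogger’s own illustration; all names are hypothetical): aggregates are updated in memory as events arrive, so a query is answered in microseconds from cache rather than by rescanning data in batch.

import time
from collections import defaultdict

# Running aggregates held in memory; in production these would sit in
# cache or on flash, as close to the processing as possible.
totals = defaultdict(float)

def ingest(customer_id, amount):
    # Update the aggregate the moment the transaction arrives.
    totals[customer_id] += amount

def query(customer_id):
    # Answer straight from memory: no disk scan, no batch job.
    return totals[customer_id]

for cid, amt in [("c1", 19.99), ("c2", 5.00), ("c1", 42.50)]:  # hypothetical stream
    ingest(cid, amt)

start = time.perf_counter()
total = query("c1")
elapsed_us = (time.perf_counter() - start) * 1e6
print(f"c1 total={total:.2f}, answered in {elapsed_us:.1f} microseconds")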

A data information architecture that is optimized for big data’s unstructured batch data cannot also be used for real-time analytics. And the traditional BI data warehouse infrastructure probably isn’t optimized for either of them. The solution calls for extending your existing data management infrastructure to encompass the latest analytics that management wants, or else designing and building yet another IT data infrastructure. Over the past year, however, the cloud has emerged as another place where organizations can run analytics, provided the providers can overcome the latencies inherent in the cloud.


IT Chaos Means Opportunity for the CIO

Hurricanes, hybrid superstorms, earthquake-tsunami combinations, extreme heat, and heavy snow in April are just a few signs of chaos. For IT professionals specifically, chaos today comes from the proliferation of smartphones and BYOD or the deluge of data under the banner of big data. A sudden shift to the deployment of massive numbers of ARM processors or extreme virtualization might trigger platform chaos. A shortage of sufficient energy can lead to yet another form of chaos. Think of it this way: chaos has become the new normal.

Big consulting firms have latched onto the idea of chaos. Deloitte looks to enterprise data management to create order out of chaos. At Capgemini, the need for organizations to increasingly deal with unstructured processes that ordinary Business Process Management (BPM) solutions were not designed to cope with can be enough to lead to chaos. Their solution: developing case management around a BPM solution, preferably in conjunction with an Enterprise Content Management system, solves many of the problems.

Eric Berridge, co-founder of Bluewolf Group, a leading consulting firm specializing in Salesforce.com implementations, put it best when he wrote in a recent blog that CIOs must learn to harness chaos for a very simple reason: business is becoming more chaotic. Globalization and technology, which have turned commerce on its head over the past 20 years, promise an even more dizzying rate of change in the next decade.

Berridge’s solution draws on the superhero metaphor. The CIO has to become Captain Chaos, the one able to overcome a seemingly insurmountable level of disarray to deliver the right value at the right time. And you do that by following a few straightforward tips:

First, don’t build stuff you don’t absolutely have to build. You want your organization to travel as light as possible. If you build systems, you are stuck with them. Instead, you want to be able to change systems as fast as the business changes in response to whatever chaos is swirling at the moment. That means you need to aim for an agile IT infrastructure, probably one that can tap a variety of cloud services and turn them on and off as needed.

Then, recognize the consumerization of IT and the chaos it has sparked. This is not something to be resisted but embraced and facilitated in ways that give you and your organization the measure of control you need. Figure out how to take advantage of the consumerization of IT through responsive policies, elastic infrastructure, and flexible security capabilities.

Next, encourage the organization’s R&D and product development groups to also adopt agile methods and approaches to innovation, especially through social media and other forms of collaboration. Even encourage them to go a step further by reaching out to customers to participate. Your role as CIO at this point is to facilitate interaction among the parties who can create successful innovation.

Finally, layer on enough just-in-time governance to enable the organization to manage the collaboration and interactivity. The goal is to rein in chaos and put it to work. To do that you need to help set priorities, define objectives, execute plans, and enforce flexible and agile policies—all the things that any successful business needs to do but do so in the context of a chaotic world that is changing in ways you and top management can’t predict.

As CIO this puts big demands on you too. To start, you have to keep your finger on the pulse of what is happening with the world at large, in business and with technology. That means you need to figuratively identify and place sensors and monitors that can tip you off as things change. You also can’t master every technology. Instead you need to identify an ever-changing stable of technology masters you can call on as needed and familiarize yourself with the vast amount of resources available in the cloud.

In the end, these last two points—a stable of technology masters you can call upon and deep familiarity with cloud resources—will enable you to deliver the most value to your organization despite the chaos of the moment. At that point you truly become Captain Chaos, the one your organization counts on to deal with ever changing chaos.


Mainframe Workload Economics

IBM never claims that every workload is suitable for the zEnterprise. The company prefers to talk about platform issues in terms of fit-for-purpose or tuned-to-the-task. With the advent of hybrid computing, the low-cost z114, and now the expected low-cost version of the zEC12 later this year, however, you could make a case that any workload benefiting from the reliability, security, and efficiency of the zEnterprise mainframe is fair game.

John Shedletsky, VP, IBM Competitive Project Office, did not try to make that case. To the contrary, earlier this week he presented the business case for five workloads that are economically and technically optimal on the zEnterprise. They are: transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform. None of these should be a surprise; with the possible exception of analytics and platform consolidation, they represent traditional mainframe workloads. BottomlineIT covered Shedletsky’s mainframe cost/workload analysis last year here.

This comes at a time when IBM has started making a lot of noise about new and different workloads on the zEnterprise. Doug Balog, head of IBM System z mainframe group, for example, was quoted widely in the press earlier this month talking about bringing mobile computing workloads to the z. Says Balog in Midsize Insider: “I see there’s a trend in the market we haven’t directly connected to z yet, and that’s this mobile-social platform.”

Actually, this isn’t even all that new either. BottomlineIT’s sister blog, DancingDinosaur, was writing about organizations using SOA to connect CICS apps running on the z to users with mobile devices a few years ago here.

What Shedletsky really demonstrated this week was the cost-efficiency of the zEC12. In one example he compared a single workload, app production/dev/test, running on a 16x, 32-way HP Superdome and an 8x, 48-way Superdome versus a 41-way zEC12. The zEC12 delivered the best price/performance by far: $111 million (5-year TCA) for the zEC12 vs. $176 million (5-year TCA) for the two Superdomes.

For Linux on z workloads, he compared three Oracle database workloads (Oracle Enterprise Edition, Oracle RAC, four server nodes per cluster) supporting 18K transactions/sec. Running on 12 HP DL580 servers (192 cores), the HP system priced out at $13.2 million (3-year TCA); a zEC12 running three Oracle RAC clusters (four nodes per cluster, each as a Linux guest) on 27 IFLs priced out at $5.7 million (3-year TCA). The zEC12 came in at less than half the cost.

With analytics such a hot topic these days, Shedletsky also presented a comparison of the zEnterprise Analytics System 9700 (zEC12, DB2 v10, z/OS, 1 general processor, 1 zIIP) with an IDAA against a current Teradata machine. The result: the Teradata cost $330K per query per hour of throughput compared to $10K for the zEC12. Workload time for the Teradata was 1,591 seconds, for 9.05 queries per hour, compared to 60.98 seconds and 236 queries per hour on the zEC12. The Teradata total cost was $2.9 million compared to $2.3 million for the zEC12.
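
Those per-query figures follow from the totals and throughput numbers just cited; here is a quick back-of-the-envelope check using only the figures above (rounding aside):

# Cost per query-per-hour of throughput, from the stated totals.
teradata_cost, teradata_qph = 2.9e6, 9.05   # $2.9M total, 9.05 queries/hr
zec12_cost, zec12_qph = 2.3e6, 236          # $2.3M total, 236 queries/hr

print(f"Teradata: ${teradata_cost / teradata_qph / 1e3:.0f}K per query/hr")
print(f"zEC12:    ${zec12_cost / zec12_qph / 1e3:.0f}K per query/hr")
# Prints roughly $320K vs. $10K; the published $330K figure presumably
# reflects rounding in the underlying inputs.

# The Linux-on-z comparison similarly checks out:
print(f"zEC12 at {5.7 / 13.2:.0%} of the HP cost")  # ~43%, less than half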

None of these are what you would consider new workloads, and Shedletsky has yet to apply his cost analysis to mobile or social business workloads. However, the results shouldn’t be much different. Mobile applications, particularly mobile banking and other mobile transaction-oriented applications, will play right into the zEC12 strengths, especially when they are accessing CICS on the back end.

While transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform remain the sweet spot for the zEC12, Balog can continue to make his case for mobile and social business on the z. Maybe in the next set of Shedletsky comparative analyses we’ll see some of those workloads come up.

For social business the use cases aren’t quite clear yet. One use case that is emerging, however, is social business big data analytics. Now you can apply the zEC12 to the analytics processing part at least and the efficiencies should be similar.
