Posts Tagged Intel

Where Have All the Enterprise IT Hardware Vendors Gone?

Remember that song asking where all the flowers had gone? In a few years you might be asking the same of many of today’s enterprise hardware vendors. The answer is important as you plan your data center 3-5 years out. Where will you get your servers, and at what cost? Will you even need servers in your data center? And what will they look like: massive collections of ARM processors, perhaps?

As reported in The Register (Amazon cloud threatens the entire IT ecosystem): Amazon’s cloud poses a major threat to most of the traditional IT ecosystem, a team of 25 Morgan Stanley analysts writes in a recently released report, Amazon Web Services: Making Waves in the IT Pond. The Morgan Stanley researchers cite Brocade, NetApp, QLogic, EMC, and VMware as facing the greatest challenges from the growth of AWS. The threat takes the form of AWS’s exceedingly low cost per virtual machine instance.

Beyond the price threat, the vendors are scrambling to respond to the challenges of cloud, mobile, and big data/analytics. Even Intel, the leading chip maker, just introduced the 4th generation Intel® Core™ processor family to address these challenges. The new chips promise experiences optimized for end users’ specific needs, double the battery life, and breakthrough graphics, all targeted at new low-cost devices such as tablets and all-in-one systems.

The Wall Street Journal online covered related ground from a different perspective when it wrote: PC makers unveiled a range of unconventional devices on the eve of Asia’s biggest computer trade show as they seek to revive (the) flagging industry and stay relevant amid stiff competition. Driven by the cloud and the explosion of mobile devices in a variety of forms, the enterprise IT industry doesn’t seem to know what the next device should even be.

Readers once chastised this blogger for suggesting that their next PC might be a mobile phone. Then came smartphones, quickly followed by tablets. Today PC sales are dropping fast, according to IDC.

The next rev of your data center may be based on ARM processors (tiny, stingy with power, cheap, cool, and remarkably fast), essentially mobile phone chips. They could be ganged together in large quantities to deliver mainframe-like power, scalability, and reliability at a fraction of the cost.

IBM has shifted its focus and is targeting cloud computing, mobile, and big data/analytics, even directing its acquisitions toward these areas, as witnessed by yesterday’s SoftLayer acquisition. HP, Oracle, and most of the other vendors are pursuing variations of the same strategy. Oracle, for example, acquired Tekelec, a smart device signaling company.

But as the Morgan Stanley analysts noted, it really is Amazon using its cloud scale to savage the traditional enterprise IT vendors’ hardware strategies, and it is no secret why:

  • No upfront investment
  • Pay only for what you use (with a caveat or two)
  • Price transparency
  • Faster time to market
  • Near-infinite scalability and global reach

And the more AWS grows, the more its prices drop due to the efficiency of cloud scaling.  It is not clear how the enterprise IT vendors will respond.

What will your management say when they get a whiff of AWS pricing? An extra-large, high-memory SQL Server database instance lists for $0.74 per hour (check the fine print). What does your Oracle database cost per hour running on your on-premises enterprise server? That’s what the traditional enterprise IT vendors are facing.
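To put rough numbers on that comparison, consider the back-of-the-envelope sketch below, in Python. The $0.74 hourly rate is the AWS list price cited above; every on-premises figure is an invented placeholder, so substitute your own costs.

    # Back-of-the-envelope cloud vs. on-premises database cost comparison.
    # The $0.74/hour AWS rate comes from the post; all on-premises figures
    # below are assumed placeholders -- substitute your own numbers.
    HOURS_PER_YEAR = 24 * 365

    aws_hourly = 0.74                      # AWS list price per instance-hour
    aws_annual = aws_hourly * HOURS_PER_YEAR

    server_and_license = 150_000           # assumed server + database license cost
    amortization_years = 3                 # assumed hardware refresh cycle
    annual_overhead = 20_000               # assumed power, cooling, admin
    onprem_annual = server_and_license / amortization_years + annual_overhead
    onprem_hourly = onprem_annual / HOURS_PER_YEAR

    print(f"AWS:     ${aws_hourly:.2f}/hr, ${aws_annual:,.0f}/yr")
    print(f"On-prem: ${onprem_hourly:.2f}/hr, ${onprem_annual:,.0f}/yr")

Even with generous on-premises assumptions, the exercise shows why the hourly framing is so uncomfortable for traditional vendors: the cloud number is visible on a price sheet, while the on-premises number takes a spreadsheet to reconstruct.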


Data Center Density Wars

Moore’s Law—which states that the number of transistors on a chip will double approximately every two years—remains the driving impetus of the IT world and, increasingly, of the business world too. It certainly drives Intel, the industry leader, and its competitors, and it has led to decades of data center price/performance and density gains.

The ability to double the number of transistors on each chip boosts productivity and performance while cutting the cost per transistor and, in turn, driving down the cost of the products that use them. Where once chips were about clock speed and numbers of transistors, today’s latest chips incorporate many more capabilities, from power reduction to graphics processing to cryptographic security and more. A tablet or smartphone like the one you’re probably carrying in your pocket will have faster processing, more storage, greater display resolution, and far lower energy consumption than the standard desktop PC of a decade or more ago, for about the same price.

For data center managers Moore’s Law translates into greater density at lower cost. They can shrink the required floor space and cut energy consumption while increasing the processing performance of the data center.  They can literally do more with less from a processing standpoint. (Unfortunately, for data center managers Moore’s Law does not apply to people.)
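The compounding is easy to underestimate, so here is a minimal sketch of the arithmetic, assuming the two-year doubling period stated above and an arbitrary starting transistor count:

    # Moore's Law compounding: transistor counts double roughly every two years.
    # The starting count is an arbitrary illustration, not a real chip.
    transistors = 1_000_000
    doubling_period_years = 2

    for year in range(0, 11, 2):
        count = transistors * 2 ** (year // doubling_period_years)
        print(f"Year {year:2d}: {count:>11,} transistors")

Ten years of doubling yields a 32x gain, which is why a decade-old desktop compares so poorly with the phone in your pocket.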

Intel appears intent on leading the industry to new processor heights. Chip bloggers report that the company revealed an ambitious roadmap during its annual investor meeting. Specifically, Intel plans to shrink its chip fabrication process down to a mere 5 nanometers (nm) sometime after 2015, with a 10nm process set to release in 2015 and work having already begun on the 7nm and 5nm processes. Typical processors today are in the 32nm range, and Intel’s current top processor, Ivy Bridge, is 22nm. The next Intel chip, planned for 2013, will drop down to 14nm. Intel’s roadmap is here.

There are other chip makers, such as AMD, but the only other one attracting this kind of attention today is ARM, whose licensees had shipped over 15 billion ARM processors as of Jan. 2011. Now ARM is aiming its chip designs beyond the mobile device market. According to analyst firm IHS iSuppli, ARM integrated circuits are estimated to be in 23% of all laptops by 2015. ARMv7, its latest architecture, mandates a hardware floating point unit, making it a full-function design.

The next step for ARM, ARMv8, announced in 2011 but not yet available, represents the first fundamental change to the ARM design. It adds a 64-bit architecture and a 64-bit instruction set. ARMv8 allows 32-bit applications to be executed in a 64-bit OS and a 32-bit OS to run under the control of a 64-bit hypervisor. Products using ARMv8, however, remain some way off.

Although chip trends are central to data center success, data centers don’t buy chips. They buy products from system vendors who package the chips together with other hardware, software, and services to deliver the devices they actually deploy, notes Mark Teter, CTO at Advanced Systems Group, a system integrator. Unless data center managers make a deeper dive than usual into a product’s specifications, they won’t know how multiple threads are being handled or how core pipes are organized, nor should they. All they need to care about are the capabilities, price/performance, density/footprint, and energy consumption. Let the chip vendors worry about the underlying details.

Still, what is shaping up on the chip front is the collision of two major chip camps, one conventional (Intel) and one mobile (ARM). As ARM extends into laptops, Intel has declared plans to focus on mobile chips. The shrinking of mobile chips down to 22nm and even 14nm is big. Watch this battle, like two galaxies colliding, shape up over the next few years.  Data centers can enjoy a spectacle that will only accelerate the benefits of Moore’s Law.


Open Source KVM Takes on the Hypervisor Leaders

The hypervisor—software that allocates and manages virtualized system resources—usually is the first thing that comes to mind when virtualization comes up. And when IT considers server virtualization, the first option typically is VMware ESX, followed by Microsoft’s Hyper-V.

But that shouldn’t be the whole story. Even in the Windows/Intel world there are other hypervisors, such as Citrix Xen.  And IBM has had hypervisor technology for its mainframes for decades and for its Power systems since the late 1990s. A mainframe (System z) running IBM’s System z hypervisor, z/VM, can handle over 1000 virtual machines while delivering top performance and reliability.

So, it was significant when IBM announced in early May that it and Red Hat, an open source technology leader, are working together to build enterprise products around the Kernel-based Virtual Machine (KVM) open source hypervisor. Jean Staten Healy, IBM’s Director of Worldwide Cross-IBM Linux, told IT industry analysts that the two companies are committed to driving adoption of the open source virtualization technology through joint development projects and enablement of the KVM ecosystem.

Differentiating this approach from those taken by the current x86 virtualization leaders VMware and Microsoft is open source technology. An open source approach to virtualization, Healy noted, lowers costs, enables greater interoperability, and increases options through multiple sources.

The KVM open source hypervisor allows a business to create multiple virtual versions of Linux and Windows environments on the same server. Larger enterprises can take KVM-based products and combine them with comprehensive management capabilities to create highly scalable and reliable, fully cloud-capable systems that enable the consolidation and sharing of massive numbers of virtualized applications and servers.
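For a sense of how thin the layer between an administrator and KVM can be, here is a minimal sketch using libvirt’s Python bindings, the common open source management API for KVM. It simply lists the guests on a host; the qemu:///system URI and the presence of the libvirt-python package are assumptions for illustration, not anything from the IBM/Red Hat announcement.

    # Minimal sketch: list KVM guests on a host via libvirt's Python bindings.
    # Assumes libvirt-python is installed and a local KVM/QEMU hypervisor is
    # running; qemu:///system is the conventional local connection URI.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")

    for dom in conn.listAllDomains():
        # info() returns: state, max memory (KiB), memory (KiB), vCPUs, CPU time
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {mem // 1024} MB, state={state}")

    conn.close()

Much of the open source tooling around KVM builds on this same interface, which is part of KVM’s appeal: one open API from a ten-line script up to full datacenter management suites.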

Red Hat Enterprise Virtualization, for example, was designed for large-scale data center virtualization by pairing Red Hat’s centralized virtualization management system and advanced features with the KVM hypervisor. BottomlineIT looked at the Red Hat open source approach a few weeks ago, here.

The open source approach to virtualization also is starting to gain traction. To that end Red Hat, IBM, BMC, HP, Intel, and others joined to form the Open Virtualization Alliance. Its goal is to facilitate the adoption of open virtualization technologies, especially KVM. It intends to do this by promoting examples of customer successes, encouraging interoperability, and accelerating the expansion of the ecosystem of third-party solutions around KVM. A growing and robust ecosystem around KVM is essential if the open source hypervisor is to effectively rival VMware and Microsoft.

Healy introduced what might be considered the Alliance’s first KVM enterprise-scale success story, IBM’s own Research Compute Cloud (RC2), which is the first large-scale cloud deployed within IBM. In addition to being a proving ground for KVM, RC2 also handles actual IBM internal chargeback based on charges-per-VM hour across IBM. That’s real business work.

RC2 runs on over 200 iDataPlex nodes, an IBM x86 product, using KVM (90% memory utilization per node). It runs 2,000 concurrent instances, is used by thousands of IBM employees worldwide, and provides 100TB of block storage attached to KVM instances via a storage cloud.

KVM was chosen not only to demonstrate the open source hypervisor but because it was particularly well suited to the enterprise challenge. It provided a predictable and familiar environment that required no additional skills, auditable security compliance, and an open source licensing model that kept costs down and would prove cost-effective for large-scale cloud use, which won’t be long in coming. The RC2 team, it seems, already is preparing live migration plans to support federated clouds. Stay tuned.
