Archive for October, 2012

Cybercrime Costs More Than You Think

As CIO you probably don’t break out the cost of cybercrime. Of course you tally security costs as part of the IT budget, but unless you have been hit by a large and readily apparent cyber attack, the specific cost probably is not on your radar screen.

Cybercrime is a form of criminal activity using computers over the Internet—that’s where the cyber comes in. It includes anything from downloading illegal music files to stealing millions of dollars from online bank accounts. Cybercrime also includes non-monetary attacks, such as creating and distributing viruses, deploying malware on other computers, posting confidential business information on the Internet, or mounting distributed denial of service (DDoS) attacks. Maybe the most apparent form of cybercrime is identity theft—apparent mainly because of the numerous state laws and government regulations addressing privacy and identity theft. But any organization that has been hit with a computer virus has experienced cybercrime.

This week HP published new research indicating that the cost and frequency of cybercrime have both risen for the third straight year. According to this third annual study of U.S. companies, conducted by the Ponemon Institute, the occurrence of cyberattacks has more than doubled over a three-year period, while the financial impact has increased by nearly 40 percent.

A few weeks ago, IBM released its latest quarterly X-Force security report. It found a sharp increase in browser-related exploits, renewed concerns around social media password security, and a continuing disparity between mobile device usage and corporate bring-your-own-device (BYOD) programs.

The HP/Ponemon report found the average annualized cost of cybercrime incurred by a benchmark sample of U.S. organizations was $8.9 million. This represents a 6% increase over the average cost reported in 2011, and a 38% increase over 2010. The 2012 study also revealed a 42% increase in the number of cyberattacks, with organizations experiencing an average of 102 successful attacks per week, compared to 72 attacks per week in 2011 and 50 per week in 2010. The only positive news here is that the cost of the attacks is not increasing as fast as the number of attacks, but that is probably small consolation.
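
To put those percentages in perspective, here is a quick back-of-the-envelope check. The implied prior-year figures below are my own arithmetic derived from the numbers quoted above, not data taken from the report itself:

```python
# Back-of-the-envelope math derived from the percentages quoted above;
# these implied figures are my own arithmetic, not taken from the report.
avg_cost_2012 = 8.9e6                    # average annualized cost per organization
cost_2011 = avg_cost_2012 / 1.06         # 6% increase  -> implied ~$8.4M in 2011
cost_2010 = avg_cost_2012 / 1.38         # 38% increase -> implied ~$6.4M in 2010

attacks = {2010: 50, 2011: 72, 2012: 102}    # successful attacks per week
growth = attacks[2012] / attacks[2011] - 1   # ~42%, matching the report

# Cost per successful attack is actually falling: attack volume is growing
# faster than cost, which is the "small consolation" noted above.
per_attack_2011 = cost_2011 / (attacks[2011] * 52)
per_attack_2012 = avg_cost_2012 / (attacks[2012] * 52)

print(f"implied costs: 2011 ${cost_2011/1e6:.1f}M, 2010 ${cost_2010/1e6:.1f}M")
print(f"attack growth 2011->2012: {growth:.0%}")
print(f"cost per attack: ${per_attack_2011:,.0f} -> ${per_attack_2012:,.0f}")
```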

The most costly cybercrimes, HP noted, continue to be those caused by malicious code, denial of service, stolen or hijacked devices, and malevolent insiders. Combined, these account for more than 78% of annual cybercrime costs per organization. Maybe even more disturbing is that many of the losses resulted from careless behavior by employees (e.g., leaving a laptop on a taxi seat) or from poor employee relations, which motivate some of the malevolent attacks.

Cyber attacks can be costly if not resolved quickly, HP concluded. The average time to resolve a cyber attack is 24 days, but it can take up to 50 days according to this year’s study. The average cost incurred during this 24-day period was $591,780, up 42% over the previous year.
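
Spread over that resolution window, the figure works out as follows. Again, this is simple arithmetic on the quoted averages, not data from the study itself:

```python
# Spreading the average resolution cost over the 24-day window; simple
# arithmetic on the quoted averages, not figures from the study itself.
window_cost, days = 591_780, 24
per_day = window_cost / days        # ~$24,658 per day of an unresolved attack
implied_2011 = window_cost / 1.42   # up 42% -> implied ~$416,700 last year

print(f"~${per_day:,.0f}/day; implied 2011 window cost ~${implied_2011:,.0f}")
```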

IBM’s X-Force also reported some disturbing new trends. For example, attackers continue to target specific individuals by directing them to a trusted URL or site that has been injected with malicious code. Then, through browser vulnerabilities, the attackers are able to install malware on the target system. Sadly, X-Force notes, the websites of many well-established and trustworthy organizations are still susceptible to these types of threats. Similarly, the growth of SQL injection, a technique used by attackers to access a database through a website, is keeping pace with the increased usage of cross-site scripting and directory traversal commands.
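
For readers who haven’t seen SQL injection up close, the sketch below (a generic illustration, not an example from the X-Force report) shows the classic mistake and the standard fix: splicing user input directly into a query lets an attacker rewrite the query, while a parameterized query treats the input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"   # a classic injection payload

# VULNERABLE: attacker input is spliced directly into the SQL text, so the
# OR '1'='1' clause matches every row in the table.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print("injected query returned:", rows)        # leaks alice's row

# SAFE: a parameterized query treats the whole payload as a literal string
# value, so nothing matches and nothing leaks.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)   # []
```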

As computing penetrates every aspect of business, the security threats are only going to increase. Traditional IT security—access controls, user authentication, firewalls, perimeter defense, and anti-virus tools—simply is not sufficient for the variety of threats companies are experiencing today, from socially engineered attacks to advanced persistent threats (APTs). For that reason organizations need an ongoing security strategy that encompasses everything—GRC (governance, risk, and compliance), data, applications, networks, systems, storage, mobile, cloud, social networks, and whatever else may come next. Then drive it all home through policy, repeated training, and insistence on accountability.


Data Center Density Wars

Moore’s Law—which states that the number of transistors on a chip will double approximately every two years—remains the driving impetus of the IT world and, increasingly, of the business world too. It certainly drives Intel, the industry leader, and its competitors, and it has led to decades of data center price/performance and density gains.
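
Stated as a formula, the doubling cadence compounds fast. A simple projection makes the point; the one-billion starting count below is an illustrative round number, not a specific chip:

```python
# Moore's Law as a formula: transistor counts double roughly every two years.
# The one-billion starting count is an illustrative round number, not a real chip.
def transistors(years, start=1e9, doubling_period=2.0):
    """Projected transistor count after the given number of years."""
    return start * 2 ** (years / doubling_period)

for years in (2, 4, 10):
    print(f"after {years:2d} years: {transistors(years):.1e} transistors")
# Ten years of doubling is a 32x gain -- the scale behind the
# tablet-versus-decade-old-desktop comparison below.
```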

The ability to double the number of transistors on each chip boosts productivity and performance while cutting the cost per transistor and, in turn, driving down the cost of the products that use them. Where chips were once about clock speed and transistor counts, today’s latest chips incorporate many more capabilities, from power reduction to graphics processing to cryptographic security and more. A tablet or smartphone like the one you’re probably carrying in your pocket has faster processing, more storage, and greater display resolution than the standard desktop PC of a decade or more ago, consumes far less energy, and costs about the same.

For data center managers Moore’s Law translates into greater density at lower cost. They can shrink the required floor space and cut energy consumption while increasing the processing performance of the data center.  They can literally do more with less from a processing standpoint. (Unfortunately, for data center managers Moore’s Law does not apply to people.)

Intel appears intent on leading the industry to new processor heights. Chip bloggers report that the company revealed an ambitious roadmap during its annual investor meeting. Specifically, Intel plans to shrink its chip fabrication process down to a mere 5 nanometers (nm) sometime after 2015, with a 10nm process set to arrive in 2015 and work already begun on the 7nm and 5nm processes. Typical processors today are in the 32nm range, and Intel’s current top processor, Ivy Bridge, is 22nm. The next Intel chip, planned for 2013, will drop down to 14nm.
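
Why those shrinking nanometer numbers matter: to a first approximation, transistor density goes with the inverse square of the feature size. The calculation below is an idealized scaling model (real processes fall short of perfect scaling), but it shows the order of magnitude at stake:

```python
# Idealized scaling: transistor density goes roughly with the inverse square
# of the feature size. Real processes fall short of this perfect model, but
# it shows why each step on the roadmap matters.
nodes_nm = [32, 22, 14, 10, 7, 5]   # the process nodes mentioned above

base = nodes_nm[0]
for node in nodes_nm:
    gain = (base / node) ** 2       # density relative to a 32nm process
    print(f"{node:2d}nm: ~{gain:4.1f}x the density of {base}nm")
# 14nm is already ~5x denser than 32nm; a 5nm process would be ~41x denser.
```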

There are other chip makers, such as AMD, but the only other one attracting this kind of attention today is ARM, which had shipped over 15 billion processors as of January 2011. Now ARM is aiming its chip designs beyond the mobile device market. According to analyst firm IHS iSuppli, ARM integrated circuits will be in an estimated 23% of all laptops by 2015. ARMv7, its latest architecture, mandates a hardware floating point unit, making it a full-function design.

The next step for ARM, ARMv8, announced in 2011 but not yet available, represents the first fundamental change to the ARM design. It adds a 64-bit architecture and a 64-bit instruction set. ARMv8 allows 32-bit applications to run under a 64-bit OS, and a 32-bit OS to run under the control of a 64-bit hypervisor. Products using ARMv8, however, remain some way off.

Although chip trends are central to data center success, data centers don’t buy chips. They buy products from system vendors who package the chips together with other hardware, software, and services to deliver the devices they actually deploy, notes Mark Teter, CTO at Advanced Systems Group, a system integrator. Unless data center managers dive deeper than usual into a product’s specifications, they won’t know how multiple threads are handled or how core pipelines are organized, nor should they. All they need to care about are the capabilities, price/performance, density/footprint, and energy consumption. Let the chip vendors worry about the underlying details.

Still, what is shaping up on the chip front is the collision of two major chip camps, one conventional (Intel) and one mobile (ARM). As ARM extends into laptops, Intel has declared plans to focus on mobile chips. The shrinking of mobile chips down to 22nm and even 14nm is big. Watch this battle, like two galaxies colliding, shape up over the next few years.  Data centers can enjoy a spectacle that will only accelerate the benefits of Moore’s Law.


IBM Bolsters Its Midrange Power Servers

IBM’s Systems and Technology Group (STG) introduced a slew of new products and enhancements this week, both hardware and software, for its midrange server lineup, the Power Systems products. The Power announcements covered new capabilities as well as new machines. And all the announcements in one way or another addressed IBM’s current big themes: Cloud, Analytics, and Security. The net-net of all these announcements: more bang for the buck in terms of performance, flexibility, and efficiency.

Of the new Power announcements, the newest processor, Power7+, certainly was the star. Other capabilities, such as elastic capacity on demand and dynamic Power system pools, may prove more important in the long run. Yet another addition, the EXP30 Ultra SSD I/O Drawer, may turn out quite useful as organizations come to appreciate the possibilities of SSD and ramp up usage.

Power7+, with 2 billion transistors, promises to deliver 40% more performance than Power7, especially for Java workloads. Combined with the other announced enhancements, it looks particularly good for data analytics and even real-time analytics workloads. The new processor boasts 4.4 GHz speeds, a 10MB L3 cache per core (8 cores = 80MB), and a random number generator, along with enhanced single precision floating point performance and an enhanced GX system bus. IBM invested the additional transistors primarily in the cache. All of this will aid performance and efficiency.

The enhanced chip also brings an active memory expansion accelerator and an on-chip encryption accelerator for AIX. Previously encryption was handled in software; now it is done in hardware for better performance and efficiency. Power7+ also can handle 20 VMs per core, double what Power7 supports. This allows system administrators to make VM partitions, especially development partitions, quite small (just 5% of a core). With energy enhancements, it also delivers 5x more performance per watt. New power gating also allows the chip to be configured in a variety of ways. The upshot: more flexibility.
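
The partition arithmetic is worth spelling out. A quick check (simple math on the figures above) shows how the doubled VM limit and the 5% minimum partition size line up:

```python
# Simple math on the Power7+ virtualization figures quoted above.
vms_per_core = 20        # Power7+ limit, double Power7's 10 VMs per core
min_partition = 0.05     # smallest partition: 5% of a core

# At the 5% minimum, 20 partitions exactly fill one core (20 * 0.05 = 1.0),
# so the doubled per-core VM limit is what makes such tiny partitions usable.
print(vms_per_core * min_partition)   # 1.0

# Across the chip's 8 cores, that is up to 160 small VMs.
print(8 * vms_per_core)               # 160
```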

Two other new capabilities, elastic Capacity on Demand (CoD) and Power Systems Pools, work hand in hand. Depending on the server model, you can create what amounts to a huge pool of shared system resources, either permanently or temporarily. IBM has offered versions of capacity on demand for years, but they typically entailed elaborate setup and cumbersome activation routines to make the capacity available. Again, depending on the model, IBM is promising more flexible CoD and easier activation, referring to it as instant elasticity. If it works as described, you should be able to turn multiple Power servers into a massive shared resource. Build a private cloud on these new servers and you could end up with a rapidly expandable one. Usually it would take a hybrid cloud to get that kind of expansion, and even that is not necessarily simple to set up.
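
As a mental model, think of the pool as a shared balance of capacity credits that member servers draw from temporarily. The sketch below is a minimal illustration of that pooling idea as described above; it is NOT IBM’s actual CoD interface, and the class, server names, and credit units are invented for illustration:

```python
# A minimal sketch of the shared-pool idea described above -- an
# illustrative model only, NOT IBM's actual CoD interface or pricing.
class CapacityPool:
    """Shared pool of processor-day credits drawn on by member servers."""

    def __init__(self, credits):
        self.credits = credits

    def activate(self, server, cores, days):
        """Temporarily activate extra cores on a member server."""
        needed = cores * days
        if needed > self.credits:
            raise RuntimeError("not enough CoD credits left in the pool")
        self.credits -= needed
        print(f"{server}: +{cores} cores for {days} days "
              f"({self.credits} credits remaining)")

# Servers share one credit balance, so spare capacity flows to whichever
# machine needs it -- the "instant elasticity" pitch in a nutshell.
pool = CapacityPool(credits=100)
pool.activate("power780-a", cores=8, days=5)    # 60 credits remaining
pool.activate("power795-b", cores=4, days=10)   # 20 credits remaining
```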

There are, however, limitations to elastic CoD and Power Systems Pools. An initial quantity of CoD credits is offered only with new Power 795 and Power 780 (a Power7+ machine) servers. There also is a limit of 10 Power 795 and/or 780 servers in a pool.

Enterprises are just starting to familiarize themselves with SSD, what it can do for them, and how best to deploy it. The EXP30 Ultra SSD I/O Drawer, scheduled for general release in November, should make it easier to include SSD in an enterprise infrastructure strategy using the GX++ bus. The 1U drawer can hold up to 30 SSD drives (387GB each) in that small footprint. That’s a lot of resource in a tight space: 11.6 TB of capacity, 480,000 read IOPS, and 4.5 GB/s of aggregate bandwidth. IBM reports that it can cut batch window processing by up to 50% and reduce the number of HDDs by up to 10x. Plus, you can still attach up to 48 HDDs downstream for another 43 TB.
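
The drawer’s headline numbers check out from the drive count alone. This is simple arithmetic on the figures above, not additional IBM data:

```python
# Sanity-checking the EXP30 drawer figures quoted above.
ssd_count, ssd_gb = 30, 387
capacity_tb = ssd_count * ssd_gb / 1000   # 11.61 -> the quoted 11.6 TB
iops_per_ssd = 480_000 / ssd_count        # 16,000 read IOPS per drive

hdd_count, hdd_total_tb = 48, 43
tb_per_hdd = hdd_total_tb / hdd_count     # ~0.9 TB per downstream HDD

print(f"{capacity_tb:.2f} TB SSD, {iops_per_ssd:,.0f} IOPS/drive, "
      f"{tb_per_hdd:.1f} TB per downstream HDD")
```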

And these just touch on some of what IBM packed into the Oct. 3 announcement. BottomlineIT will look at other pieces of the Power announcement, from enhancements to PowerVM to PowerSC for security and compliance, and will also look at the enhancements made to zEC12 software. Stay tuned.
