
Archive for April 2009

Two out of five businesses – 40% – report that their major business applications require higher availability rates than they did two or three years ago. However, an overwhelming 81% are unable to quantify the cost of downtime, and only a small minority – 5% – are willing to spend whatever it takes to guarantee the highest levels of application availability, 99.99% and above. Those are the results of the latest ITIC survey, which polled C-level executives and IT managers at 300 corporations worldwide.

ITIC partnered with Stratus Technologies of Maynard, MA, a vendor that specializes in high availability and fault tolerant hardware and software solutions, to compose the Web-based survey. ITIC conducted this blind, non-vendor and non-product specific survey, which polled businesses on their application availability requirements, virtualization and the compliance rate of their service level agreements (SLAs). None of the respondents received any remuneration. The Web-based survey consisted of multiple choice and essay questions, and ITIC analysts also conducted two dozen first-person customer interviews to obtain detailed anecdotal data.

Respondents ranged from SMBs with 100 users to very large enterprises with over 100,000 end users. Industries represented included academic, advertising, aerospace, banking, communications, consumer products, defense, energy, finance, government, healthcare, insurance, IT services, legal, manufacturing, media and entertainment, telecommunications, transportation, and utilities. The respondents hailed from 15 countries; 85% were based in North America.

Survey Highlights

The survey results uncovered many “disconnects” between the levels of application reliability that corporate enterprises profess to need and the availability rates their systems and applications actually deliver. Additionally, a significant portion of the survey respondents had difficulty defining what constitutes high application availability, did not specifically track downtime, and could not quantify or qualify the cost of downtime and its impact on their network operations and business.

Among the other survey highlights:

  • A 54% majority of IT managers and executives surveyed said more than two-thirds of their companies’ applications require the highest level of availability – 99.99%, or four nines of uptime.
  • Over half – 52% of survey respondents said that virtualization technology increases application uptime and availability; only 4% said availability decreased as a result of virtualization deployments.
  • In response to the question, “which aspect of application availability is most important” to the business, 59% of those polled cited the prevention of unplanned downtime as being most crucial; 40% said disaster recovery and business continuity were most important; 38% said that minimizing planned downtime to apply patches and upgrades was their top priority; 16% said the ability to meet SLAs was most important; and 40% of the survey respondents said all of the choices were equally crucial to their business needs. (The percentages total more than 100% because respondents could select multiple answers.)
  • Some 41% said they would be satisfied with conventional 99% to 99.9% (the equivalent of two or three nines) availability for their most critical applications. Neither 99% nor 99.9% qualifies as a high-availability or continuous-availability solution.
  • An overwhelming 81% of survey respondents said the number of applications that demand high availability has increased in the past two-to-three years.
  • Of those who said they have been unable to meet service level agreements (SLAs), 72% can’t or don’t keep track of the cost and productivity losses created by downtime.
  • Budgetary constraints are a gating factor preventing many organizations from installing software solutions that would improve application availability. Overall, 70% of the survey respondents either lacked the funds to purchase value-added availability solutions (40%) or were unsure how much, or whether, their companies would spend to guarantee application availability (30%).
  • Of the 30% of businesses that quantified how much their firms would spend on availability solutions, 3% indicated they would spend $2,000 to $4,000; 8% said $4,000 to $5,000; another 3% said $5,000 to $10,000; 11% (mainly large enterprises) indicated they were willing to allocate $10,000 to $15,000 to ensure application availability; and 5% said they would spend “whatever it takes.”
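The availability tiers cited in the bullets above map directly onto allowed downtime per year, which is why the gap between "two or three nines" and "four nines" matters so much. The sketch below is illustrative only (it is not part of the survey data) and shows the standard conversion:

```python
# Illustrative only: convert an availability percentage ("nines")
# into the maximum downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def max_downtime_minutes(availability_pct):
    """Return the downtime budget (minutes/year) for a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100.0)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows at most "
          f"{max_downtime_minutes(pct):,.1f} minutes of downtime per year")
```

At 99% a business can be down for roughly 3.7 days a year; at 99.99% the budget shrinks to under an hour, which is why "four nines" demands specialized high-availability solutions.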

According to the survey findings, just under half of all businesses – 49% – lack the budget for high availability technology, and 40% of the respondents reported they don’t understand what qualifies as high availability. An overwhelming eight out of 10 IT managers – 80% – are unable to quantify the cost of downtime to their C-level executives.

To reiterate, the ITIC survey polled users on the various aspects and impact of application availability and downtime but it did not specify any products or vendors.

The survey results, supplemented by ITIC’s first-person interviews with IT managers and C-level executives, clearly show that on a visceral level businesses are aware that the need for increased application availability has grown. This is particularly true in light of the emergence of new technologies such as application and desktop virtualization, cloud computing and Service Oriented Architecture (SOA). The fast-growing remote, mobile and telecommuting end-user population, which relies on unified communications and collaboration applications and utilities, is also spurring the need for greater application availability and reliability.

High Application Availability Not a Reality for 80% of Businesses

The survey results clearly show that network uptime isn’t keeping pace with the need for application availability. At the same time, IT managers and C-level executives interviewed by ITIC did comprehend the business risks associated with downtime, even though most are unable to quantify the cost of downtime or qualify the impact to the corporation, its customers, suppliers and business partners when unplanned application and network outages occur.

“We are continually being asked to do more with less,” said an IT manager at a large enterprise in the Northeast. “We are now at a point, where the number of complex systems requiring expert knowledge has exceeded the headcount needed to maintain them … I am dreading vacation season,” he added.

Another executive at an Application Service Provider acknowledged that even though his firm’s SLA guarantees to customers are a modest 98%, it has, on occasion, been unable to meet those goals. The executive said his firm compensated one of its clients for a significant outage incident. “We had a half day outage a couple of years ago which cost us in excess of $40,000 in goodwill payouts to a handful of our clients, despite the fact that it was the first outage in five years,” he said.

Another user said a lack of funds prevented his firm from allocating capital expenditure monies to purchase solutions that would guarantee 99.99% application availability. “Our biggest concern is keeping what we have running and available. Change usually costs money, and at the moment our budgets are simply in survival mode,” he said.

Another VP of IT at a New Jersey-based business said that ignorance is not bliss. “If people knew the actual dollar value their applications and customers represent, they’d already have the necessary software availability solutions in place to safeguard applications,” he said. “Yes, it does cost money to purchase application availability solutions, but we’d rather pay now than wait for something to fail and pay more later,” the VP of IT said.

Overall, the survey results show that most users lack valid metrics and cost formulas to track and quantify what uptime means to their organization, and that, as a result, many corporations are courting disaster.

ITIC advises businesses to track downtime and its actual cost to the organization, and to take the necessary steps to qualify the impact of downtime, including lost data and potential liability risks, e.g. lost business, lost customers, potential lawsuits and damage to the company’s reputation. Once a company can quantify the amount of downtime associated with its main line-of-business applications, the impact of that downtime and the risk to the business, it can then make an accurate assessment of whether or not its current IT infrastructure adequately supports the degree of application availability the corporation needs to maintain its SLAs.
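The kind of tracking ITIC recommends can start with a simple model built from figures most organizations already know. The formula and all the numbers below are hypothetical placeholders for illustration, not survey data or an ITIC methodology:

```python
# Hypothetical sketch of a downtime-cost model; every input is a placeholder
# that a business would replace with its own tracked figures.
def downtime_cost(hours_down, revenue_per_hour, affected_users,
                  loaded_cost_per_user_hour, recovery_cost=0.0):
    """Estimate the direct cost of an outage as lost revenue plus
    lost employee productivity plus one-time recovery expenses."""
    lost_revenue = hours_down * revenue_per_hour
    lost_productivity = hours_down * affected_users * loaded_cost_per_user_hour
    return lost_revenue + lost_productivity + recovery_cost

# Example: a 4-hour outage of an application generating $5,000/hour,
# idling 200 users whose loaded cost is $40/hour, plus $2,500 in recovery.
print(f"${downtime_cost(4, 5_000, 200, 40, recovery_cost=2_500):,.0f}")
```

A model like this deliberately omits the harder-to-quantify items the survey text mentions (lost customers, lawsuits, reputational damage), but even the direct figure gives IT managers a defensible number to bring to C-level executives.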

These days just about every high technology vendor is “keen to be green.” However, few vendors can match IBM for its pioneering efforts and long-term commitment to energy efficient solutions that are both good for the planet and good for recession-racked enterprises.

This week, IBM took another giant step in its green data center efforts. It officially launched its Dynamic Infrastructure for Energy Efficiency initiative, a comprehensive, compelling set of new hardware, software and services offerings designed to help customers build, manage and maintain more energy efficient infrastructures.

IBM’s Managing Dynamic Infrastructure for Energy Efficiency initiative serves as a blueprint for vendors and corporate customers to follow and emulate in their respective efforts to reduce power consumption, utility costs and their carbon footprints in the pursuit of greater system, application and network equipment economies of scale.

Declaring that “Environmental sustainability is an imperative for 21st Century business,” Rich Lechner, IBM’s VP of Energy & Environment, outlined IBM’s ambitious plan. Lechner and Chris O’Connor, VP of Tivoli Strategy, Product Management and SWG Green said that Big Blue worked with some 3,200 customers over the past two years to construct and validate metrics on energy usage and costs. Among the key findings from these efforts:

  • IT energy expenses are expected to increase 35% between 2009 and 2013
  • An overwhelming 80% of CEOs expect climate change regulations in five years
  • Buildings account for 40% of worldwide energy consumption

The company’s new products and services are the product of years of primary research and extensive research and development (R&D), in which the company has spared no effort or expense in its quest to “go green” and assist its customers. The initiative addresses the full spectrum of Green IT issues, including conservation, pollution prevention, consolidation and regulatory compliance initiatives for physical devices and facilities, and the use of renewable energy sources.

Managing Dynamic Infrastructure for Energy Efficiency

IBM’s Managing Dynamic Infrastructure for Energy Efficiency calls for corporations to build Green Infrastructures, Sustainable Solutions and Intelligent Systems. IBM’s plan is backed by a wide array of product offerings such as the Tivoli Monitoring for Energy Management and enhancements to the existing Tivoli Business Service Manager. IBM is offering customers a free trial of the Tivoli Monitoring for Energy Management.

The Tivoli Energy Management solution is supported by IBM hardware and IBM Global Services. The latter includes chargeback and accounting services and the ability to demonstrate to customers how to optimize assets (plant and facilities) and improve energy usage.

On the hardware front, IBM is embedding new capabilities in its x86 servers which, through consolidation, can result in an astounding 95% reduction in power consumption compared to servers built three or four years ago.

IBM also has a Green Infrastructure ROI analysis tool, an interactive Web-based assessment tool that provides businesses with benchmarks on green/energy-efficiency performance. It also provides customers with specific recommendations to reduce energy consumption.

IBM also has a full set of services offerings to assist corporations in reviewing their current consumption and infrastructure and constructing customized plans for Green IT. IBM also has agreements in place with a number of technology partners – including Novell and Thunderhead – to deliver solutions that are certified to reduce environmental impact.

Going Green is Good Business

According to Lechner and O’Connor, Green IT initiatives will yield tangible benefits, though actual dollar cost savings will vary according to the business and its specific cost cutting efforts. IBM customer Care2, for instance, cut energy consumption by 70%, reducing energy usage by 340 megawatt-hours through proactive management. Another enterprise customer, Nationwide Insurance, anticipates it will save $15 million (US) over the next three years, including an 85% to 90% reduction in server utilization rates via virtualization and an 80% decrease in its environmental costs.

Not surprisingly, Lechner and O’Connor said that IBM practices what it preaches: IBM’s Austin facility achieved a 150% capacity increase while simultaneously cutting energy consumption by 25%. Those figures were good enough for the EPA to rank IBM’s Austin facility number 31 on its list of Greenest hardware vendors.

“Four years ago when we worked with clients [regarding energy efficiency] the discussion was academic,” Lechner said. “Now they want IBM to help them with Proof of Concept (POC) initiatives. The ROI for Green IT is two years or less,” he added.


IBM’s Managing Dynamic Infrastructure for Energy Efficiency is the real deal. It is the result of years of dedication and commitment. And it shows. As one of the founding developers of the Electronic Industry Code of Conduct (EICC) in 2004, IBM has always backed up its words with action. The EICC is a code of best practices adopted and implemented by some of the world’s major electronics brands and their suppliers. Its goal is to improve conditions in the electronics supply chain.

It is well known and well documented that demand for Green desktop and server hardware and services will increase significantly over the next one to five years. Governments, states, municipalities and utility firms are now offering consumers and businesses a mixture of incentives, backed by mandates, to reduce costs and power consumption and to produce hardware whose material components won’t poison the planet when it comes time to discard and/or recycle it.

Green IT initiatives are rising sharply and it’s easy to see why. The energy used to process and route server requests and transactions will exceed 100 billion kilowatt-hours (kWh) annually at a cost of $7.4 billion by the year 2011, according to the Environmental Protection Agency (EPA). That is double the energy servers used in 2006, and PCs and servers are the biggest energy hogs, consuming 60% of peak power even when idle.
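For scale, the EPA figures cited above imply an average electricity price of roughly 7.4 cents per kWh, a quick sanity check on the projection:

```python
# Back-of-the-envelope check on the EPA projection cited above:
# 100 billion kWh per year at a total cost of $7.4 billion.
annual_kwh = 100e9        # 100 billion kilowatt-hours
annual_cost_usd = 7.4e9   # $7.4 billion

price_per_kwh = annual_cost_usd / annual_kwh
print(f"Implied average electricity price: ${price_per_kwh:.3f}/kWh")
```

That rate is in line with late-2000s US commercial electricity prices, which suggests the headline dollar figure follows directly from the projected consumption rather than assuming rising rates.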

Corporations have a choice: go green voluntarily or be compelled to do so by a slew of new regulations which are now being written into law. For example, one of the mandates of the Green Building Act of 2006 requires that commercial buildings in Washington, D.C. larger than 50,000 sq. ft. must meet or exceed New Construction standards by 2012. Others are voluntary like the Energy Policy Act of 2005. It allows building owners to realize a tax savings of $1.80 per sq. ft. for new commercial buildings that reduce regulated energy use by 50%.

ITIC’s own survey data indicates that 74% of corporate data centers face limitations and constraints on space, power consumption and the rising costs associated with energy and physical plant leasing/rentals. The obvious solution is to cut energy consumption and utility costs, which in turn reduces carbon emissions and greenhouse gases.

IBM’s Managing Dynamic Infrastructure for Energy Efficiency initiative is a well-conceived and powerful set of products and services. It solidifies IBM’s reputation and position as an energy efficiency pioneer. Few vendors can match IBM in this area. IBM is well positioned to help corporations achieve their goals of cutting costs, consolidating server hardware and physical plant space and ultimately becoming carbon neutral. Corporations are urged to examine IBM’s products and services and test them for themselves.