Uptime

Cost of Hourly Downtime Soars: 81% of Enterprises Say it Exceeds $300K On Average

The only good downtime is no downtime.

ITIC’s latest survey data finds that 98% of organizations say a single hour of downtime costs over $100,000; 81% of respondents indicated that 60 minutes of downtime costs their business over $300,000. And a record one-third – 33% – of enterprises report that one hour of downtime costs their firms $1 million to over $5 million.

For the fourth straight year, ITIC’s independent survey data indicates that the cost of hourly downtime has increased. The average cost of a single hour of unplanned downtime has risen by 25% to 30% since 2008, when ITIC first began tracking these figures.

In ITIC’s 2013 – 2014 survey, just three years ago, 95% of respondents indicated that a single hour of downtime cost their company $100,000. However, just over 50% said the cost exceeded $300,000, and only one in 10 enterprises reported that hourly downtime costs their firms $1 million or more. In ITIC’s latest poll, one in three businesses – 33% of survey respondents – said that hourly downtime costs top $1 million and, in some cases, exceed $5 million.

Keep in mind that these are “average” hourly downtime costs. In certain use-case scenarios – such as the financial services industry or stock transactions – the downtime costs can conceivably exceed millions of dollars per minute. Additionally, an outage that occurs during peak usage hours may also cost the business more than the average figures cited here. …
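To put the averages in perspective, here is a minimal back-of-envelope sketch of how an hourly figure translates into per-minute exposure and how a peak-usage premium changes the picture. The hourly cost and peak multiplier below are hypothetical placeholders, not ITIC survey numbers.

```python
# Hypothetical illustration only: the figures below are placeholders, not ITIC survey data.
avg_hourly_cost = 300_000          # assumed average cost of one hour of downtime ($)
per_minute_cost = avg_hourly_cost / 60
peak_multiplier = 3                # assumed premium for an outage during peak usage hours

print(f"Average cost per minute of downtime: ${per_minute_cost:,.0f}")
print(f"Estimated cost of a 15-minute outage at peak: "
      f"${per_minute_cost * 15 * peak_multiplier:,.0f}")
```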

Cost of Hourly Downtime Soars: 81% of Enterprises Say it Exceeds $300K On Average Read More »

IBM z13s Delivers Power, Performance, Fault Tolerant Reliability and Security for Hybrid Clouds

Security. Reliability. Performance. Analytics. Services.

These are the most crucial considerations for corporate enterprises in choosing a hardware platform. The underlying server hardware functions as the foundational element for the business’ entire infrastructure and interconnected environment. Today’s Digital Age networks are characterized by increasingly demand-intensive workloads and by the need to use Big Data analytics to analyze and interpret massive volumes and varieties of data, so the business can make proactive decisions and stay competitive. Security is a top priority: it is absolutely essential to safeguard sensitive data and Intellectual Property (IP) from sophisticated, organized external hackers and to defend against threats posed by internal employees.

The latest IBM z13s enterprise server delivers embedded security, state-of-the-art analytics and unparalleled reliability, performance and throughput. It is fine-tuned for hybrid cloud environments, and it is especially useful as a secure foundational element in Internet of Things (IoT) deployments. The newly announced z13s is highly robust: it supports the most compute-intensive workloads in hybrid cloud and on-premises environments. The newest member of the z Systems family, the z13s incorporates advanced cryptography features embedded in the hardware that allow it to encrypt and decrypt data twice as fast as previous generations, with no reduction in transactional throughput, owing to an updated cryptographic coprocessor on every chip core and tamper-resistant, hardware-accelerated cryptographic coprocessor cards. …

IBM z13s Delivers Power, Performance, Fault Tolerant Reliability and Security for Hybrid Clouds Read More »

IBM, Lenovo Top ITIC 2016 Reliability Poll; Cisco Comes on Strong

IBM Power Systems Servers Most Reliable for Seventh Straight Year; Lenovo x86 Servers Deliver Highest Uptime/Availability among all Intel x86-based Systems; Cisco UCS Stays Strong; Dell Reliability Ratchets Up; Intel Xeon Processor E7 v3 chips incorporate advanced analytics and significantly boost reliability of x86-based servers

In 2016 and beyond, infrastructure reliability is more essential than ever.

The overall health of network operations, applications, management and security functions all depends on the core foundational elements – server hardware, server operating systems and virtualization – to deliver high availability, robust management and solid security. The reliability of the server, server OS and virtualization platforms is the cornerstone of the entire network infrastructure. The individual and collective reliability of these platforms has a direct, immediate and long-lasting impact on daily operations and business results. For the seventh year in a row, corporate enterprise users said IBM server hardware delivered the highest levels of reliability/uptime among 14 server hardware platforms and 11 different server virtualization platforms. A 61% majority of IBM Power Systems servers and Lenovo System x servers achieved “five nines,” or 99.999%, availability – the equivalent of 5.25 minutes of unplanned downtime per server, per annum – compared to 46% of Hewlett-Packard servers and 40% of Oracle server hardware. …
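For reference, each additional “nine” maps directly onto a yearly downtime budget. The short Python sketch below shows the arithmetic behind the five-nines figure; it simply illustrates the conversion and is not part of the survey data.

```python
# Convert an availability percentage into the allowable unplanned downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Return the unplanned downtime budget (minutes per year) for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct:>7}% availability -> {downtime_minutes_per_year(pct):8.2f} minutes/year")

# 99.999% ("five nines") works out to roughly five and a quarter minutes of downtime
# per year; 99.99% ("four nines") works out to roughly 52.6 minutes, about a minute a week.
```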

IBM, Lenovo Top ITIC 2016 Reliability Poll; Cisco Comes on Strong Read More »

IBM z/OS, IBM AIX, Debian and Ubuntu Score Highest Security Ratings

Eight out of 10 — 82% — of the over 600 respondents to ITIC’s 2014-2015 Global Server Hardware and Server OS Reliability survey say security issues negatively impact overall server, operating system and network reliability. Of that figure, a 53% majority of those polled say that security vulnerabilities and hacks have a “moderate,” “significant” or “crucial” impact on network availability and uptime (see Exhibit 1).

Overall, the latest ITIC survey results showed that organizations are still more reactive than proactive regarding security threats. Some 15% of the over 600 global corporate respondents are extremely lax: 7% said that security issues have no impact on their environment, while another 8% indicated that they don’t keep track of whether or not security issues negatively affect the uptime and availability of their networks. In contrast, 24% of survey participants – one in four – said security has a “significant” or “crucial” negative impact on network reliability and performance.

Still, despite the well-documented, high-profile hacks into companies like Target, eBay, Google and other big-name vendors this year, the survey found that seven out of 10 firms – 70% – are generally confident in the security of their hardware, software and applications – until they get hacked. …

IBM z/OS, IBM AIX, Debian and Ubuntu Score Highest Security Ratings Read More »

IBM Platform Resource Scheduler Automates, Accelerates Cloud Deployments

One of the most daunting and off-putting challenges for any enterprise IT department is how to efficiently plan and effectively manage cloud deployments or upgrades while still maintaining the reliability and availability of the existing infrastructure during the rollout.

IBM solves this issue with its newly released Platform Resource Scheduler, which is part of the company’s Platform Computing portfolio and an offering within the IBM Software Defined Environment (SDE) vision for next-generation cloud automation. The Platform Resource Scheduler is a prescriptive set of services designed to ensure that enterprise IT departments get a trouble-free transition to a private, public or hybrid cloud environment by automating the most common placement and policy procedures for their virtual machines (VMs). It also helps guarantee quality of service while greatly reducing the most typical human errors that occur when IT administrators manually perform tasks like load balancing and memory balancing. The Platform Resource Scheduler is sold with IBM’s SmartCloud Orchestrator and PowerVC and is available as an add-on with IBM SmartCloud OpenStack Entry products. It also features full compatibility with Nova APIs and fits into all IBM OpenStack environments. It is built on open APIs, tools and technologies to maximize client value, skills availability and easy reuse across hybrid cloud environments. It supports heterogeneous (both IBM and non-IBM) infrastructures and runs on Linux, UNIX and Windows as well as IBM’s z/OS operating system. …
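For context on the kind of placement request such a scheduler automates, the sketch below uses the standard OpenStack Nova Python client – not Platform Resource Scheduler itself – to ask the Nova scheduler to place an instance in a particular availability zone. The credentials, endpoint, image and flavor names are placeholders.

```python
# Illustrative only: placeholder credentials and names; shows a stock Nova API
# placement request, not IBM Platform Resource Scheduler itself.
from novaclient import client

nova = client.Client("2", "demo_user", "demo_password", "demo_project",
                     "http://controller:5000/v2.0")

image = nova.images.find(name="ubuntu-12.04")   # placeholder image name
flavor = nova.flavors.find(name="m1.small")     # placeholder flavor name

# Ask the Nova scheduler to place the instance in a specific availability zone;
# a resource scheduler automates and refines this type of placement policy.
server = nova.servers.create(name="app-node-01",
                             image=image,
                             flavor=flavor,
                             availability_zone="zone-a")
print(server.id)
```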

IBM Platform Resource Scheduler Automates, Accelerates Cloud Deployments Read More »

Does Infrastructure Really Matter When it Comes to IT Security?

Yes, infrastructure absolutely does matter and has a profound and immediate impact on enterprise security.

Server hardware (and the server operating systems and applications that run on it) forms the bedrock upon which the performance, reliability and functionality of the entire infrastructure rests. Just as you wouldn’t want to build a house on quicksand, you don’t want your infrastructure to be shaky or suspect: a weak foundation will undermine security and network operations, negatively impact revenue, raise the risk of litigation and potentially cause your firm to lose business.

And that’s just the tip of the iceberg. These days, many if not most corporate enterprises have extranets to facilitate commerce and communications amongst their customers, business partners and suppliers. Any weak link in infrastructure security has the potential to become a gaping hole, allowing a security breach to extend beyond the confines of the corporate network and extranet. Security breaches can infect and invade other networks with astounding rapidity.

Increasingly, aging and inadequate infrastructure adversely impacts enterprise security. …

Does Infrastructure Really Matter When it Comes to IT Security? Read More »

Two-Thirds of Corporations Now Require 99.99% Database Uptime, Reliability

A 64% majority of organizations now require that their databases deliver a minimum of four “nines” of uptime (99.99% or better) for their most mission-critical applications. That is the equivalent of 52 minutes of unplanned downtime per database, per annum, or just over one minute of downtime per week as a result of an unplanned outage.

Those are the results of ITIC’s 2013 – 2014 Database Reliability and Deployment Trends Survey, an independent Web-based survey that polled 600 organizations worldwide during May/June 2013. The nearly two-thirds of respondents who indicated they need 99.99% or greater availability represent a 10 percentage point increase over the 54% who said they required a minimum of four nines of reliability in ITIC’s 2011-2012 Database Reliability survey.

This trend will almost certainly continue unabated, owing in large part to an increase in mainstream user deployments of databases running Big Data analytics, Business Intelligence (BI), Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) applications. These applications are data intensive and closely align with organizations’ main lines of business and recurring revenue streams. Hence, any downtime on a physical, virtual or cloud-based database will likely cause immediate disruptions that quickly impact the corporation’s bottom line. …

Two-Thirds of Corporations Now Require 99.99% Database Uptime, Reliability Read More »

Microsoft: Bullish or Bottoming Out? Part 2

According to some press and industry accounts, you’d think that Microsoft was all but dead. Microsoft’s tactical and strategic technology and business missteps are well publicized and dissected ad infinitum. Less well documented are Microsoft’s strengths from both a consumer and enterprise perspective, and there are plenty of those.

Microsoft Strengths

One of the most notable company wins in the past five years is the Xbox 360 and Kinect.

Xbox 360 and Kinect: Simply put, this is an unqualified success. The latest statistics released earlier this month by the NPD Group show that Microsoft has a 47% market share and sold 257,000 Xbox 360 units in the U.S. in June, besting its rivals, the Sony PlayStation 3 and Nintendo Wii, for the 18th consecutive month. But Microsoft, and indeed all the games hardware vendors, find their sales shrinking due to the sharp increase in the number of users playing games on their smartphones. In Microsoft’s 2012 third fiscal quarter ending in March, Xbox 360 sales dropped 33% to $584 million. The consumer space is notoriously fickle, and games users are always looking for the next big thing. Microsoft’s ace in the hole is the Kinect motion controller, which still has a lot of appeal. The company is banking on that, as well as a slew of new applications and functions like the Kinect PlayFit Dashboard, which lets users track the number of calories they burn when they play Kinect games. …

Microsoft: Bullish or Bottoming Out? Part 2 Read More »

Application Availability, Reliability and Downtime: Ignorance is NOT Bliss

Two out of five businesses – 40% – report that their major business applications require higher availability rates than they did two or three years ago. However, an overwhelming 81% are unable to quantify the cost of downtime, and only a small 5% minority of businesses are willing to spend whatever it takes to guarantee the highest levels of application availability (99.99% and above). Those are the results of the latest ITIC survey, which polled C-level executives and IT managers at 300 corporations worldwide.

ITIC partnered with Stratus Technologies in Maynard, Mass., a vendor that specializes in high-availability and fault-tolerant hardware and software solutions, to compose the Web-based survey. ITIC conducted this blind, non-vendor- and non-product-specific survey, which polled businesses on their application availability requirements, virtualization and the compliance rate of their service level agreements (SLAs). None of the respondents received any remuneration. The Web-based survey consisted of multiple choice and essay questions. ITIC analysts also conducted two dozen first-person customer interviews to obtain detailed anecdotal data.

Respondents ranged from SMBs with 100 users to very large enterprises with over 100,000 end users. Industries represented: academic, advertising, aerospace, banking, communications, consumer products, defense, energy, finance, government, healthcare, insurance, IT services, legal, manufacturing, media and entertainment, telecommunications, transportation, and utilities. The respondents hailed from 15 countries; 85% were based in North America.

Survey Highlights

The survey results uncovered many “disconnects” between the levels of application reliability that corporate enterprises profess to need and the availability rates their systems and applications actually deliver. Additionally, a significant portion of the survey respondents had difficulty defining what constitutes high application availability, did not specifically track downtime, and could not quantify or qualify the cost of downtime and its impact on their network operations and business.

Among the other survey highlights:

  • A 54% majority of IT managers and executives surveyed said more than two-thirds of their companies’ applications require the highest level of availability – 99.99%, or four nines of uptime.
  • Over half – 52% – of survey respondents said that virtualization technology increases application uptime and availability; only 4% said availability decreased as a result of virtualization deployments.
  • In response to the question of which aspect of application availability is most important to the business, 59% of those polled cited the prevention of unplanned downtime as most crucial; 40% said disaster recovery and business continuity were most important; 38% said that minimizing planned downtime to apply patches and upgrades was their top priority; 16% said the ability to meet SLAs was most important; and 40% of the survey respondents said all of the choices were equally crucial to their business needs.
  • Some 41% said they would be satisfied with conventional 99% to 99.9% (the equivalent of two or three nines) availability for their most critical applications. Neither 99% nor 99.9% qualifies as a high-availability or continuous-availability solution.
  • An overwhelming 81% of survey respondents said the number of applications that demand high availability has increased in the past two-to-three years.
  • Of those who said they have been unable to meet service level agreements (SLAs), 72% can’t or don’t keep track of the cost and productivity losses created by downtime.
  • Budgetary constraints are a gating factor prohibiting many organizations from installing software solutions that would improve application availability. Overall, 70% of the survey respondents said they lacked the funds to purchase value-added availability solutions (40%) or were unsure how much, or whether, their companies would spend to guarantee application availability (30%).
  • Of the 30% of businesses that quantified how much their firms would spend on availability solutions, 3% indicated they would spend $2,000 to $4,000; 8% said $4,000 to $5,000; another 3% said $5,000 to $10,000; 11%, mainly large enterprises, indicated they were willing to allocate $10,000 to $15,000 to ensure application availability; and 5% said they would spend “whatever it takes.”

According to the survey findings, just under half of all businesses – 49% – lack the budget for high-availability technology, and 40% of the respondents reported they don’t understand what qualifies as high availability. An overwhelming eight out of 10 IT managers – 80% – are unable to quantify the cost of downtime to their C-level executives.

To reiterate, the ITIC survey polled users on the various aspects and impact of application availability and downtime but it did not specify any products or vendors.

The survey results, supplemented by ITIC’s first-person interviews with IT managers and C-level executives, clearly show that, on a visceral level, businesses are very aware that the need for increased application availability has grown. This is particularly true in light of the emergence of new technologies like application and desktop virtualization, cloud computing and Service Oriented Architecture (SOA). The fast-growing population of remote, mobile and telecommuting end users, who rely on unified communications and collaboration applications and utilities, is also spurring the need for greater application availability and reliability.

High Application Availability Not a Reality for 80% of Businesses

The survey results clearly show that network uptime isn’t keeping pace with the need for application availability. At the same time, IT managers and C-level executives interviewed by ITIC did comprehend the business risks associated with downtime, even though most are unable to quantify the cost of downtime or qualify the impact to the corporation, its customers, suppliers and business partners when unplanned application and network outages occur.

“We are continually being asked to do more with less,” said an IT manager at a large enterprise in the Northeast. “We are now at a point, where the number of complex systems requiring expert knowledge has exceeded the headcount needed to maintain them … I am dreading vacation season,” he added.

Another executive at an application service provider acknowledged that even though his firm’s SLA guarantees to customers are a modest 98%, it has, on occasion, been unable to meet those goals. The executive said his firm compensated one of its clients for a significant outage incident. “We had a half day outage a couple of years ago which cost us in excess of $40,000 in goodwill payouts to a handful of our clients, despite the fact that it was the first outage in five years,” he said.

Another user said a lack of funds prevented his firm from allocating capital expenditure monies to purchase solutions that would guarantee 99.99% application availability. “Our biggest concern is keeping what we have running and available. Change usually costs money, and at the moment our budgets are simply in survival mode,” he said.

Another VP of IT at a New Jersey-based business said that ignorance is not bliss. “If people knew the actual dollar value their applications and customers represent, they’d already have the necessary software availability solutions in place to safeguard applications,” he said. “Yes, it does cost money to purchase application availability solutions, but we’d rather pay now than wait for something to fail and pay more later,” the VP of IT said.

Overall, the survey results show that users’ failure to put valid metrics and cost formulas in place to track and quantify what uptime means to their organizations leaves them woefully unprepared, and many corporations are courting disaster.

ITIC advises businesses to track downtime and its actual cost to the organization, and to take the necessary steps to qualify the impact of downtime, including lost data and potential liability risks such as lost business, lost customers, potential lawsuits and damage to the company’s reputation. Once a company can quantify the amount of downtime associated with its main line-of-business applications, the impact of that downtime and the risk to the business, it can then make an accurate assessment of whether or not its current IT infrastructure adequately supports the degree of application availability the corporation needs to maintain its SLAs.
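As a starting point for that kind of tracking, here is a minimal sketch of a direct downtime-cost estimate. The cost components and every sample figure are illustrative assumptions, not ITIC survey data; real models should add categories such as SLA penalties, legal exposure and reputational damage.

```python
# Illustrative downtime-cost model: every figure below is a placeholder assumption.
def downtime_cost(minutes_down: int,
                  revenue_per_hour: float,
                  employees_affected: int,
                  loaded_hourly_wage: float,
                  recovery_and_other_costs: float = 0.0) -> float:
    """Estimate the direct cost of an outage lasting `minutes_down` minutes."""
    hours = minutes_down / 60
    lost_revenue = revenue_per_hour * hours
    lost_productivity = employees_affected * loaded_hourly_wage * hours
    return lost_revenue + lost_productivity + recovery_and_other_costs

# Example: a 90-minute outage at a firm with $200K/hour in online revenue,
# 500 affected employees at a $75/hour loaded wage, plus $25K in recovery costs.
print(f"${downtime_cost(90, 200_000, 500, 75, 25_000):,.0f}")
```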

Application Availability, Reliability and Downtime: Ignorance is NOT Bliss Read More »
