Search Results for: downtime

Security, Data Breaches Top Cause of Downtime in 2022

A 76% majority of corporations cite Security and Data Breaches as the top cause of server, operating system, application and network downtime, according to ITIC’s latest 2022 Global Server Hardware Security survey which polled 1,300 businesses worldwide.

Security is a technology and business issue that impacts all enterprises. Some 76% of respondents cited security and data breaches as the greatest threat to server, application, data center, network edge and cloud ecosystem stability and reliability (See Exhibit 1). This is more than a three-fold increase from the 22% of ITIC corporate survey respondents who said security negatively impacted server and network uptime and reliability in 2012. The hacks are more targeted, pervasive and pernicious. They are also more expensive and designed to inflict maximum damage and losses on their enterprise and consumer victims.

 

 

Security has a major impact on businesses of all sizes and across all vertical markets. In 2022 nine-in-10 companies estimate that server hardware and server OS security have a significant impact on overall network reliability and daily transactions (See Exhibit 2).

Mean Time to Detection is a Critical Barometer

 

Security hacks and data breaches are a fact of doing business in the digital age.  It’s BIG business for hackers and cyber criminals. At some point, every organization and its critical main line of business servers, server operating systems and applications will be the victims of an attempted or successful data breach of some type.

Data Breaches and Downtime Costs Soar

In 2021 the average cost of a successful data breach increased to $4.24 million (USD); this is a 10% increase from $3.86 million in 2020, according to the 2021 Cost of a Data Breach Study, jointly conducted by IBM and the Ponemon Institute. The $4.24 million average cost of a single data breach is the highest figure in the 17 years since IBM and Ponemon began conducting the survey. It represents an increase of 10% in the last 12 months and 20% over the last two years.

The FBI’s 2021 Internet Crime Report, released in March 2022, found that Internet cyber crimes cost Americans $6.9 billion last year. This is more than triple the $2 billion in losses reported in 2020. The FBI received 847,376 complaints of suspected internet crime, a seven percent (7%) increase compared to 2020.

The FBI 2021 Internet Crime Report said the top three cyber crimes reported by victims in 2021 were: “phishing scams, non-payment/non-delivery scams, and personal data breach. Victims lost the most money to business email compromise scams, investment fraud, and romance and confidence schemes.”

ITIC’s 2022 Global Server Hardware Security survey findings underscore the expensive nature of cyber crime. ITIC’s latest research shows the Hourly Cost of Downtime now exceeds $300,000 for 91% of SME and large enterprises. Overall, 44% of mid-sized and large enterprise survey respondents reported that a single hour of downtime can potentially cost their businesses over $1 million.

Organizations must rely on strong embedded server and infrastructure security that recognizes the danger, sends alerts and alarms, and is able to isolate the threats. Strong corporate preparedness and a well-trained staff of security professionals and IT administrators are of paramount importance.

The more quickly the company’s servers and software can detect a security issue and respond to it, the greater the chances of isolating and thwarting the attack before it can infiltrate the network ecosystem, interrupt data transactions and daily operations and access sensitive data and IP.

Robust security consists of two things: solid security products AND strong security policies and procedures, administered and monitored by proactive, well-trained security professionals.

 


44% of enterprises say hourly downtime costs top $1 million—with COVID-19, security hacks and remote working as driving factors

https://techchannel.com/Enterprise/02/2019/cost-enterprise-downtime?microsite=HA-DR-For-Your-Business


Forty Percent of Enterprises Say Hourly Downtime Costs Top $1 Million

Four in 10 enterprise organizations – 40% – indicate that a single hour of downtime can now cost their firms from $1 million to over $5 million – exclusive of any legal fees, fines or penalties.

Those are the results of ITIC’s 11th annual Hourly Cost of Downtime Survey. ITIC polled 1,000 businesses from March through June 2020. All categories of businesses were represented in the survey respondent pool: 27% were small/midsized (SMB) firms with up to 200 users; 28% came from the small/midsized (SME) enterprise sector with 201 to 1,000 users and 45% were large enterprises with over 1,000 users. The data indicates that over 98% of large enterprises with more than 1,000 employees say that on average, a single hour of downtime per year costs their company over $100,000. These statistics represent the “average” hourly cost of downtime. In a worst case scenario – such as a catastrophic outage that occurs during peak usage times or an event that disrupts a crucial business transaction – the monetary losses to the organization can reach and even exceed millions of dollars per minute.

Once again, as in ITIC’s 2019 Hourly Cost of Downtime poll, only a tiny two percent minority of respondents – mainly very small businesses with fewer than 50 employees – reported that downtime costs their companies less than $100,000 in a single 60-minute period. Downtime costs are also expensive for SMBs with 200 to 500 employees. Nearly half – 47% – of SMB survey respondents estimate that a single hour of downtime can easily cost their firms $100,000 or more in lost revenue, end user productivity and remedial action by IT administrators. To reiterate, these figures are exclusive of fines, penalties and any monetary awards resulting from litigation or civil or criminal non-compliance proceedings.

It’s easy to underestimate the cost of downtime, but it adds up quickly. For example: a company that calculates the hourly cost of downtime for a mission critical server or application at $100,000 is losing $1,667 for every minute that single server is down. The overwhelming majority of firms will have multiple servers impacted in an outage – particularly if those servers are located in the cloud or a virtualized environment. That $1,667-per-minute figure for a single server quickly grows to $16,670 per minute when downtime affects 10 servers and main line of business applications/data assets! And once again, these are just the costs of the actual downtime; they do not factor in any lost, damaged, stolen, destroyed or altered data.

Small businesses are equally at risk, even if their potential downtime losses are a fraction of those of large enterprises. For example, an SMB that estimates one hour of downtime “only” costs the firm $10,000 could still incur a cost of $167 for a single minute of per server downtime. Similarly, an SMB that assumes one hour of downtime costs the business $25,000 could still potentially lose an estimated $417 per server/per minute. Very small SMBs – companies with 1 to 100 employees – generally would not rack up hourly downtime costs in the hundreds of thousands or millions. Small companies, however, typically lack the deep pockets, larger budgets and reserve funds of their enterprise counterparts to absorb the financial losses associated with downtime.
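The per-minute arithmetic in the two examples above reduces to a single division; a minimal Python sketch (the dollar figures are the illustrative ones from the text, not survey findings):

```python
def downtime_cost_per_minute(hourly_cost: float, servers: int = 1) -> float:
    """Approximate cost of one minute of downtime, assuming losses scale
    linearly with time and with the number of affected servers."""
    return hourly_cost / 60 * servers

# Mission critical server costed at $100,000 per hour of downtime:
print(round(downtime_cost_per_minute(100_000)))      # ~$1,667 per minute
print(round(downtime_cost_per_minute(100_000, 10)))  # ~$16,667/minute across 10 servers

# SMB estimates of $10,000 and $25,000 per hour of downtime:
print(round(downtime_cost_per_minute(10_000)))       # ~$167 per minute
print(round(downtime_cost_per_minute(25_000)))       # ~$417 per minute
```

(The article’s $16,670 figure comes from rounding the single-server number to $1,667 before multiplying by 10.)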

Hourly downtime costs of $25,000, $50,000 or $75,000 (exclusive of litigation or civil and even criminal penalties) may be severe enough to put an SMB out of business – or severely damage its reputation and cause it to lose business.

ITIC’s latest Hourly Cost of Downtime survey revealed that for large enterprises, the price tag associated with a 60-minute outage is much steeper: it routinely tops the $5 million (USD) mark for the top 10 verticals. These include: Banking/Finance; Food; Energy; Government; Healthcare; Manufacturing; Media & Communications; Retail; Transportation and Utilities.

These highly regulated vertical industries must also factor in the potential losses related to litigation. Businesses may also be liable for civil penalties stemming from their failure to meet Service Level Agreements (SLAs) or Compliance Regulations. Moreover, for select organizations, whose businesses are based on compute-intensive data transactions, like stock exchanges or utilities, losses may be calculated in millions of dollars per minute.

ITIC’s 11th annual Hourly Cost of Downtime Survey, conducted in conjunction with the ITIC 2020 Global Server Hardware, Server OS Reliability Survey, found that an 87% majority of organizations now require a minimum of 99.99% availability. This is up from 81% two-and-a-half years ago. The so-called 99.99% or “four nines” of reliability equals 52 minutes of unplanned per server/per annum downtime for mission critical systems and applications, or 4.33 minutes of unplanned monthly outage time for servers, applications and networks.
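The “four nines” conversion above is straight arithmetic on the availability percentage. A short Python sketch; note that exact math yields 52.56 minutes per year and 4.38 per month, which the survey rounds to 52 and 4.33:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def unplanned_minutes_per_year(availability_pct: float) -> float:
    """Annual unplanned downtime, in minutes, implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

four_nines = unplanned_minutes_per_year(99.99)
print(round(four_nines, 2))       # 52.56 minutes per year
print(round(four_nines / 12, 2))  # 4.38 minutes per month
```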

Overall, hourly downtime costs will continue to soar. And this means that companies of all sizes across all vertical markets will have little or no tolerance for downtime.

 


Hourly Downtime Costs Rise: 86% of Firms Say One Hour of Downtime Costs $300,000+; 34% of Companies Say One Hour of Downtime Tops $1 Million

Hourly downtime costs continue to increase for all businesses irrespective of size or vertical market. This trend has been evident over the last five to seven years. ITIC’s latest 2019 Global Server Hardware, Server OS Reliability Survey, which polled over 1,000 businesses worldwide from November 2018 through January 2019, found that a single hour of downtime now costs 98% of firms at least $100,000. And 86% of businesses say that the cost for one hour of downtime is $300,000 or higher; this is up from 76% in 2014 and 81% of respondents in 2018 who said that their company’s hourly downtime losses topped $300,000. Additionally, ITIC’s latest 2019 study indicates that one-in-three organizations – 34% – say the cost of a single hour of downtime can reach $1 million to over $5 million. These statistics are exclusive of any litigation, fines or civil or criminal penalties that may subsequently arise due to lawsuits or regulatory non-compliance issues.

Given organizations’ near-total reliance on systems, networks and applications to conduct business 24 x 7, it’s safe to say that the cost of downtime will continue to increase for the foreseeable future.

Although large enterprises with over one thousand employees may experience the largest actual monetary losses, downtime can be equally devastating to small and mid-sized businesses that typically lack the financial resources of larger firms. A single hour of downtime that occurs during peak usage hours or even a five, 10, 20 or 30 minute outage that disrupts productivity during a critical business transaction, can deal corporations a significant monetary blow, damage their reputation and result in litigation. For SMBs that lack the financial resources of their larger enterprise counterparts, extended downtime could potentially put them out of business.

At the same time, ITIC survey data shows that an 85% majority of corporations now require a minimum of “four nines” of uptime – 99.99% – for mission critical hardware, operating systems and main line of business (LOB) applications. This is the equivalent of 52 minutes per server/per annum, or 4.33 minutes per server/per month, of unplanned downtime. This is an increase of four (4) percentage points from ITIC’s 2017 – 2018 Reliability survey.

The message is clear: in today’s Digital Age of “always on” interconnected networks, businesses demand near-flawless and uninterrupted connectivity to conduct business operations. When the connection is lost, business ceases. This is unacceptable and expensive to all parties.

High reliability, availability and strong security are all imperative in order to conduct business.


Hourly Downtime Tops $300K for 81% of Firms; 33% of Enterprises Say Downtime Costs >$1M

The cost of downtime continues to increase, as do the business risks. An 81% majority of organizations now require a minimum of 99.99% availability. This is the equivalent of 52 minutes of unplanned downtime per year for mission critical systems and applications, or just 4.33 minutes of unplanned outage time per month for servers, applications and networks.

Over 98% of large enterprises with more than 1,000 employees say that on average, a single hour of downtime per year costs their company over $100,000, while 81% of organizations report that the cost exceeds $300,000. Even more significantly: one in three enterprises – 33% – indicate that hourly downtime costs their firms $1 million or more (See Exhibit 1). It’s important to note that these statistics represent the “average” hourly cost of downtime. In a worst case scenario, if any device or application becomes unavailable for any reason, the monetary losses to the organization can reach millions per minute. Devices, applications and networks can become unavailable for myriad reasons. These include: natural and man-made catastrophes; faulty hardware; bugs in the application; security flaws or hacks; and human error. Business-related issues, such as a Regulatory Compliance related inspection or litigation, can also force the organization to shutter its operations. Whatever the reason, when the network and its systems are unavailable, productivity grinds to a halt and business ceases.

Highly regulated vertical industries like Banking and Finance, Food, Government, Healthcare, Hospitality, Hotels, Manufacturing, Media and Communications, Retail, Transportation and Utilities must also factor in the potential losses related to litigation as well as civil penalties stemming from organizations’ failure to meet Service Level Agreements (SLAs) or Compliance Regulations. Moreover, for a select three percent of organizations, whose businesses are based on high level data transactions, like banks and stock exchanges, online retail sales or even utility firms, losses may be calculated in millions of dollars per minute. …


Cost of Hourly Downtime Soars: 81% of Enterprises Say it Exceeds $300K On Average

The only good downtime is no downtime.

ITIC’s latest survey data finds that 98% of organizations say a single hour of downtime costs over $100,000; 81% of respondents indicated that 60 minutes of downtime costs their business over $300,000. And a record one-third or 33% of enterprises report that one hour of downtime costs their firms $1 million to over $5 million.

For the fourth straight year, ITIC’s independent survey data indicates that the cost of hourly downtime has increased. The average cost of a single hour of unplanned downtime has risen by 25% to 30% since 2008, when ITIC first began tracking these figures.

In ITIC’s 2013 – 2014 survey, just three years ago, 95% of respondents indicated that a single hour of downtime cost their company $100,000. However, just over 50% said the cost exceeded $300,000, and only one in 10 enterprises reported that hourly downtime costs their firms $1 million or more. In ITIC’s latest poll, one in three businesses – 33% of survey respondents – said that hourly downtime costs top $1 million and can even reach $5 million.

Keep in mind that these are “average” hourly downtime costs. In certain use case scenarios, such as the financial services industry or stock transactions, downtime costs can conceivably exceed millions per minute. Additionally, an outage that occurs during peak usage hours may also cost the business more than the average figures cited here. …


One Hour of Downtime Costs > $100K For 95% of Enterprises

Over 95% of large enterprises with more than 1,000 employees say that on average, a single hour of downtime per year costs their company over $100,000; over 50% say the cost exceeds $300,000 and one in 10 indicate hourly downtime costs their firms $1 million or more annually.

Moreover, for a select three percent of organizations, whose businesses are based on high level data transactions, like banks and stock exchanges, online retail sales or even utility firms, losses may be calculated in millions of dollars per minute.

Those are the results of ITIC’s 2013-2014 Technology Trends and Deployment Survey, an independent Web-based survey which polled over 600 organizations in May/June 2013. All categories of businesses were represented in the survey respondent pool: 37% were small/midsized (SMB) firms with up to 200 users; 28% came from the small/midsized (SME) enterprise sector with 201 to 1,000 users and 35% were large enterprises with over 1,000 users. …


Deal You Can’t Refuse: Stratus’ Zero Downtime or $50K back to customers

The most incredible deal of this holiday season — and one that customers will be hard pressed to refuse — is Stratus Technologies’ pledge of Zero downtime for customers or $50,000 cash back.
Here’s how it works: organizations that purchase any standard configuration of Stratus Technologies’ most current ftServer 6300 enterprise-class x86 fault tolerant server, equipped with Microsoft Windows Server 2008 and the required service contract, are eligible for $50,000 cash or product credit if a failure of the server hardware, Stratus system software or operating system causes unplanned downtime in a production environment within the guarantee period. The guarantee period lasts up to six months following server deployment. Stratus executives vow that there are no hidden clauses or trap doors in the guarantee.
Stratus Technologies, headquartered in Maynard, Mass., has built its reputation on delivering rock-solid reliability of 99.999% uptime. That’s the equivalent of just over five minutes of per server downtime in a year. This is an admirable achievement by any standard.
Powered by 2.93 GHz Intel Xeon X5570 quad-core processors, the ftServer 6300 is optimized for large data center multi-tasking applications with high transaction rates, such as credit card authorization processing and high speed ATM networks, and as a powerful engine for database applications and virtualization environments. A typical ftServer 6300 configuration can actually cost less than the value of the payout. The offer is open to customers worldwide, and the program ends Feb. 26, 2010.
Specifically, customers can choose from a custom version of the ftServer 6300 or one of two pre-configured bundled configurations. The ftServer 6300 Power Bundles #1 and #2 are robust, high-end configurations that consist of Microsoft Windows Server operating system, disk drives and supporting peripherals, with a significant package discount compared to individually priced system components. Other server models in the ftServer line are not included in this program.
Stratus Technologies’ decision to quite literally put its money where its mouth is represents a bold move, and one that the overwhelming majority of vendors would never consider. In fact, ITIC can’t recall any high tech hardware vendor in recent memory offering these same terms. However, Roy Sanford, Stratus chief marketing officer, said the deal underscores Stratus Technologies’ confidence in its ability to deliver the highest levels of uptime – 99.999% or greater. “The Zero Downtime program is a show of confidence that our products consistently perform at the highest levels of availability. Our guarantee is right out there for all to see, customers and competitors alike.”
Corporate enterprises that are risk averse, those that demand the highest levels of uptime, or those that are in a betting mood are well advised to check out the Terms and Conditions of Stratus Technologies’ offer. You’ve literally got nothing to lose. Stratus Technologies: http://www.stratus.com


Application Availability, Reliability and Downtime: Ignorance is NOT Bliss

Two out of five businesses – 40% – report that their major business applications require higher availability rates than they did two or three years ago. However, an overwhelming 81% are unable to quantify the cost of downtime, and only a small 5% minority of businesses are willing to spend whatever it takes to guarantee the highest levels of application availability (99.99% and above). Those are the results of the latest ITIC survey, which polled C-level executives and IT managers at 300 corporations worldwide.

ITIC partnered with Stratus Technologies of Maynard, Mass., a vendor that specializes in high availability and fault tolerant hardware and software solutions, to compose the Web-based survey. ITIC conducted this blind, non-vendor and non-product specific survey, which polled businesses on their application availability requirements, virtualization deployments and the compliance rate of their service level agreements (SLAs). None of the respondents received any remuneration. The Web-based survey consisted of multiple choice and essay questions. ITIC analysts also conducted two dozen first person customer interviews to obtain detailed anecdotal data.

Respondents ranged from SMBs with 100 users to very large enterprises with over 100,000 end users. Industries represented included: academic, advertising, aerospace, banking, communications, consumer products, defense, energy, finance, government, healthcare, insurance, IT services, legal, manufacturing, media and entertainment, telecommunications, transportation, and utilities. The respondents hailed from 15 countries; 85% were based in North America.

Survey Highlights

The survey results uncovered many “disconnects” between the levels of application reliability that corporate enterprises profess to need and the availability rates their systems and applications actually deliver. Additionally, a significant portion of the survey respondents had difficulty defining what constitutes high application availability; do not specifically track downtime and could not quantify or qualify the cost of downtime and its impact on their network operations and business.

Among the other survey highlights:

  • A 54% majority of IT managers and executives surveyed said more than two-thirds of their companies’ applications require the highest level of availability – 99.99% — or four nines of uptime.
  • Over half – 52% of survey respondents said that virtualization technology increases application uptime and availability; only 4% said availability decreased as a result of virtualization deployments.
  • In response to the question, “which aspect of application availability is most important” to the business, 59% of those polled cited the prevention of unplanned downtime as being most crucial; 40% said disaster recovery and business continuity were most important; 38% said that minimizing planned downtime to apply patches and upgrades was their top priority; 16% said the ability to meet SLAs was most important and 40% of the survey respondents said all of the choices were equally crucial to their business needs.
  • Some 41% said they would be satisfied with conventional 99% to 99.9% (the equivalent of two or three nines) availability for their most critical applications. Neither 99% nor 99.9% availability qualifies as a high-availability or continuous-availability solution.
  • An overwhelming 81% of survey respondents said the number of applications that demand high availability has increased in the past two-to-three years.
  • Of those who said they have been unable to meet service level agreements (SLAs), 72% can’t or don’t keep track of the cost and productivity losses created by downtime.
  • Budgetary constraints are a gating factor prohibiting many organizations from installing software solutions that would improve application availability. Overall, 70% of the survey respondents said they lacked the funds to purchase value-added availability solutions (40%); or were unsure how much or if their companies would spend to guarantee application availability (30%).
  • Of the 30% of businesses that quantified how much their firms would spend on availability solutions, 3% indicated they would spend $2,000 to $4,000; 8% said $4,000 to $5,000; another 3% said $5,000 to $10,000; 11% — mainly large enterprises indicated they were willing to allocate $10,000 to $15,000 to ensure application availability and 5% said they would spend “whatever it takes.”

According to the survey findings, just under half of all businesses – 49% – lack the budget for high availability technology and 40% of the respondents reported they don’t understand what qualifies as high availability. An overwhelming eight out of 10 IT managers – 80% — are unable to quantify the cost of downtime to their C-level executives.

To reiterate, the ITIC survey polled users on the various aspects and impact of application availability and downtime but it did not specify any products or vendors.

The survey results, supplemented by ITIC first person interviews with IT managers and C-level executives, clearly show that on a visceral level, businesses are very aware that the need for increased application availability has grown. This is particularly true in light of the emergence of new technologies like application and desktop virtualization, cloud computing and Service Oriented Architecture (SOA). The fast growing population of remote, mobile and telecommuting end users, who depend on unified communications and collaboration applications and utilities, is also spurring the need for greater application availability and reliability.

High Application Availability Not a Reality for 80% of Businesses

The survey results clearly show that network uptime isn’t keeping pace with the need for application availability. At the same time, IT managers and C-level executives interviewed by ITIC did comprehend the business risks associated with downtime, even though most are unable to quantify the cost of downtime or qualify the impact to the corporation, its customers, suppliers and business partners when unplanned application and network outages occur.

“We are continually being asked to do more with less,” said an IT manager at a large enterprise in the Northeast. “We are now at a point, where the number of complex systems requiring expert knowledge has exceeded the headcount needed to maintain them … I am dreading vacation season,” he added.

Another executive at an Application Service provider acknowledged that even though his firm’s SLA guarantees to customers are a modest 98%, it has on occasion, been unable to meet those goals. The executive said his firm compensated one of its clients for a significant outage incident. “We had a half day outage a couple of years ago which cost us in excess of $40,000 in goodwill payouts to a handful of our clients, despite the fact that it was the first outage in five years,” he said.

Another user said a lack of funds prevented his firm from allocating capital expenditure monies to purchase solutions that would guarantee 99.99% application availability. “Our biggest concern is keeping what we have running and available. Change usually costs money, and at the moment our budgets are simply in survival mode,” he said.

Another VP of IT at a New Jersey-based business said that ignorance is not bliss. “If people knew the actual dollar value their applications and customers represent, they’d already have the necessary software availability solutions in place to safeguard applications,” he said. “Yes, it does cost money to purchase application availability solutions, but we’d rather pay now than wait for something to fail and pay more later,” the VP of IT said.

Overall, the survey results show that most organizations lack valid metrics and cost formulas to track and quantify what uptime means to their business. Their current efforts are woefully inadequate, and many corporations are courting disaster.

ITIC advises businesses to track downtime, the actual cost of downtime to the organization and to take the necessary steps to qualify the impact of downtime including lost data, potential liability risks e.g. lost business, lost customers, potential lawsuits and damage to the company’s reputation. Once a company can quantify the amount of downtime associated with its main line of business applications, the impact of downtime and the risk to the business, it can then make an accurate assessment of whether or not its current IT infrastructure adequately supports the degree of application availability the corporation needs to maintain its SLAs.
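As a starting point for the kind of tracking ITIC recommends, the hourly cost of an outage is often estimated as lost revenue plus idle end-user productivity plus IT remediation labor. The sketch below is a hypothetical illustrative model, not an ITIC formula; every input name and value is an assumption:

```python
def hourly_downtime_cost(revenue_per_hour: float,
                         affected_users: int,
                         avg_loaded_wage_per_hour: float,
                         it_responders: int,
                         it_wage_per_hour: float,
                         revenue_impact: float = 1.0) -> float:
    """Rough hourly downtime cost estimate: lost revenue (scaled by how much
    of the business the outage touches) + idle end-user productivity +
    IT remediation labor. All inputs are estimates supplied by the business."""
    lost_revenue = revenue_per_hour * revenue_impact
    lost_productivity = affected_users * avg_loaded_wage_per_hour
    remediation = it_responders * it_wage_per_hour
    return lost_revenue + lost_productivity + remediation

# Hypothetical mid-sized firm: $80,000/hour of revenue fully interrupted,
# 500 idle users at a $45/hour loaded wage, 6 IT staff at $90/hour responding.
print(hourly_downtime_cost(80_000, 500, 45, 6, 90))  # 103040.0
```

Even this simple model shows why six-figure hourly estimates are plausible before any fines, legal fees or reputational damage are counted.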


ITIC 2023 Reliability Survey IBM Z Results

The IBM z16 mainframe lives up to its reputation for delivering “zero downtime.”

 

The latest z16 server, introduced in April 2022, delivers nine nines—99.9999999%—of uptime and reliability. This is just over 30 milliseconds – 31.56 milliseconds to be precise – of per server annual downtime, according to the results of the ITIC 2023 Global Server Hardware, Server OS Reliability Survey.

ITIC’s 2023 Global Server Hardware, Server OS Reliability independent web-based survey polled nearly 1,900 corporations worldwide across over 30 vertical market segments on the reliability, performance and security of the leading mainstream on-premises and cloud-based servers from January through July 2023. To maintain objectivity, ITIC accepted no vendor sponsorship.

ITIC’s 2023 Global Server Hardware, Server OS Reliability survey also found that an 88% majority of users of the newest IBM Power10 servers (shipping since September 2021) say their organizations achieved eight nines—99.999999%—of uptime. This is 315 milliseconds of unplanned, per server, per annum outage time due to underlying system flaws or component failures. As a result, Power10 corporate enterprises spend just $7.18 per server/per year performing remediation for unplanned server outages caused by inherent flaws in the server hardware or component parts.

The z16 server-specific uptime statistics were obtained by breaking out the results of the more than 200 respondent organizations that have deployed the z16 since it began shipping in April/May 2022. A 96% majority of these z16 enterprises say their businesses achieved nine nines—99.9999999%—of server uptime. This is the equivalent of a near-imperceptible 31.56 milliseconds of per server annual downtime due to any inherent flaws in the server hardware and its various components (See Table 1).

An IBM spokesperson says that currently the IBM Z mainframe achieves an average of “eight nines” or 99.999999% reliability overall and that statistic includes the various versions (the z13, z14, z15 and z16) of its mainframe enterprise system. IBM has not yet reviewed ITIC’s independent survey data on the z16 results.

To put these statistics into perspective: The latest z16 corporate enterprises and their IT managers spend mere pennies per server/per year performing remediation activities due to unplanned per server outages that occurred due to inherent system failures.

This is the 15th consecutive year that the IBM Z and IBM Power Systems have dominated with the best across-the-board uptime reliability ratings among 18 mainstream distributions.

Additionally, the z16 customers say their firms experienced a 20% to 30% improvement in overall reliability, performance, response times and critical security metrics versus older iterations of the zSystems platforms.

Previous versions of the IBM Z mainframe—the z13, z14 and z15—always delivered best-in-class reliability. ITIC’s 2023 Global Reliability study found that the aggregate average results from all z13, z14 and z15 customers ranged between seven and eight nines of uptime depending on the version, age, server configurations and specific use cases.

There is an order of magnitude that distinguishes each of the “nines” of uptime and reliability. For example, four nines of uptime—which is the current acceptable level of uptime for many mainstream businesses—equals 52.56 minutes of unplanned annual per server downtime. In contrast, five nines of uptime is the equivalent of just 5.26 minutes of unplanned annual per server downtime.

Meanwhile, the fault-tolerant levels of reliability – seven and eight nines, or 99.99999% and 99.999999% – represent 3.15 seconds and 315 milliseconds, respectively, of unplanned per server annual outages due to server or component failures.
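The whole ladder of “nines” quoted in this section follows from the 31,536,000 seconds in a year; a brief Python sketch (exact arithmetic gives about 31.5 ms at nine nines, versus the survey’s quoted 31.56 ms):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds in a non-leap year

def annual_downtime_seconds(nines: int) -> float:
    """Unplanned per-server annual downtime implied by N nines of availability
    (e.g. 4 nines = 99.99%, leaving a 10**-4 fraction of the year down)."""
    return SECONDS_PER_YEAR * 10 ** -nines

for n in (4, 5, 7, 8, 9):
    print(n, "nines:", annual_downtime_seconds(n), "seconds")
# four nines  -> ~3153.6 s  (52.56 minutes)
# five nines  -> ~315.36 s  (5.26 minutes)
# seven nines -> ~3.15 s
# eight nines -> ~0.315 s   (315 milliseconds)
# nine nines  -> ~0.0315 s  (about 31.5 milliseconds)
```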

 

The z16: A Quantum Leap Forward in Reliability, Performance and Cloud Functionality

 

The IBM Z mainframes have always delivered best-in-class reliability, performance and security. The z16, however, takes a quantum leap forward by providing advanced capabilities like on-chip AI inferencing and quantum-safe computing.

The IBM z16 and Power10 servers also delivered the strongest server security, experiencing the fewest number of successful data breaches, the least amount of downtime due to security-related incidents and the fastest mean time to detection (MTTD). ITIC’s latest 2023 Global Server Hardware Security Survey found that 97% of IBM z16 enterprises were able to detect, isolate and shut down attempted data breaches either immediately or within the first 10 minutes. Additionally, 92% of IBM Power10 customers detected and repelled attempted hacks immediately or within the first 10 minutes. An organization’s ability to quickly identify and thwart security breaches minimizes downtime, saves money and mitigates risk.

ITIC’s 2023 survey data found that 84% of respondent enterprises cited security issues as the top cause of unplanned downtime, and 67% of respondents cited human error as a cause of unplanned server and application outages. Human error encompasses everything from accidentally disconnecting a server, to misconfiguration issues and incompatibilities among disparate hardware, application and server OS software, to failure to properly right-size the server to adequately accommodate mission critical workloads.

Overall, the IBM z16 offers near perfect reliability and the most robust mainstream security available today.

