
Over 95% of large enterprises (those with more than 1,000 employees) say that, on average, a single hour of downtime costs their company over $100,000; more than 50% say the cost exceeds $300,000; and one in ten indicate that hourly downtime costs their firms $1 million or more annually.

Moreover, for a select three percent of organizations whose businesses are based on high-volume data transactions, such as banks and stock exchanges, online retailers and even utility firms, losses may be calculated in millions of dollars per minute.

Those are the results of ITIC’s 2013-2014 Technology Trends and Deployment Survey, an independent Web-based survey that polled over 600 organizations in May/June 2013. All categories of business were represented in the respondent pool: 37% were small-to-midsized businesses (SMBs) with up to 200 users; 28% came from the small-to-midsized enterprise (SME) sector with 201 to 1,000 users; and 35% were large enterprises with over 1,000 users.


A 64% majority of organizations now require that their databases deliver a minimum of four nines (99.99%) of uptime or better for their most mission-critical applications. That is the equivalent of 52 minutes of unplanned downtime per database per annum, or just over one minute of unplanned downtime per week.
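The arithmetic behind these uptime figures can be sketched as follows. This is a generic illustration of the "nines" calculation, not code from the survey; the function name is my own:

```python
def downtime_budget_minutes(nines: int, period_minutes: float = 365 * 24 * 60) -> float:
    """Maximum unplanned downtime per period for an uptime target of N nines."""
    uptime_fraction = 1 - 10 ** (-nines)   # e.g. 4 nines -> 0.9999
    return period_minutes * (1 - uptime_fraction)

# Four nines (99.99%) allow about 52.6 minutes of downtime per year,
# roughly one minute per week; five nines (99.999%) allow about 5.3 minutes.
print(round(downtime_budget_minutes(4), 1))  # 52.6
print(round(downtime_budget_minutes(5), 1))  # 5.3
```

Each additional nine shrinks the annual downtime budget by a factor of ten, which is why the jump from four to five nines is so demanding on hardware and operations.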

Those are the results of ITIC’s 2013-2014 Database Reliability and Deployment Trends Survey, an independent Web-based survey that polled 600 organizations worldwide during May/June 2013. The nearly two-thirds of respondents who indicated they need 99.99% or greater availability represent a 10 percentage point increase over the 54% who said they required a minimum of four nines reliability in ITIC’s 2011-2012 Database Reliability survey.

This trend will almost certainly continue unabated, owing in large part to an increase in mainstream deployments of databases running Big Data analytics, Business Intelligence (BI), Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) applications. These applications are data intensive and closely aligned with organizations’ main lines of business and recurring revenue streams. Hence, any downtime on a physical, virtual or cloud-based database will likely cause immediate disruptions that quickly impact the corporation’s bottom line.


Big Blue Hardware is Rock Solid

IBM hardware retains its status as best in class in terms of reliability, stability, performance and customer satisfaction. IBM’s System z mainframes recorded the least downtime of any hardware platform. In the server hardware category, systems with relatively small market shares, including Stratus Technologies’ ftServer 6300 and 4500 series and Fujitsu’s Primequest and Primergy servers, continue to score very high on reliability.

Stratus Technologies of Maynard, MA offers Intel Xeon-based systems that deliver mainframe-like fault tolerance with 99.999% reliability. The Fujitsu Primergy and Fujitsu SPARC systems similarly deliver a high level of reliability and fault tolerance, with 48% of respondents reporting 99.999% uptime, or just over five minutes of downtime per server per annum due to unplanned outages.

The length and severity of Tier 1, Tier 2 and Tier 3 unplanned outages and the patching actions related to each correspond to specific line item capital expenditure (CAPEX) and operational expenditure (OPEX) costs for the business. Reliability, measured by downtime, can positively or negatively impact TCO and accelerate or delay ROI.
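Translating an uptime figure into a CAPEX/OPEX line item is a simple multiplication: annual unplanned outage hours times the hourly cost of downtime. The sketch below uses the survey's low-end figure of $100,000 per hour; the function name and the framing are illustrative, not from the survey:

```python
def annual_downtime_cost(outage_hours_per_year: float, hourly_cost_usd: float) -> float:
    """Estimated annual cost exposure from unplanned downtime."""
    return outage_hours_per_year * hourly_cost_usd

# Even the four-nines budget of 52.56 minutes (~0.88 hours) per year
# corresponds to roughly $87,600 of exposure at $100,000 per hour.
print(round(annual_downtime_cost(52.56 / 60, 100_000)))  # 87600
```

At the $1 million-per-hour figure reported by one in ten large enterprises, the same 52.56 minutes would represent nearly $900,000 of annual exposure, which is why reliability shows up directly in TCO and ROI calculations.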
