
IBM, Lenovo, HPE and Huawei Servers Maintain Top Reliability Rankings; Cisco Makes Big Gains
IBM, Lenovo hardware up to 24x more reliable; 28x more economical vs. least reliable White box servers

ITIC’s latest 2019 Global Server Hardware, Server OS Reliability Mid-Year Update survey results indicate that mission critical servers from IBM, Lenovo, Hewlett-Packard Enterprise (HPE) and Huawei all maintained their top positions, achieving “four to six nines” of uptime.
These findings come at a time when businesses’ demand for high reliability and continuous, uninterrupted data access is at an all-time high.
ITIC’s latest survey data finds that the most reliable mainstream server platforms – the IBM Power Systems, Lenovo ThinkSystem, Hewlett-Packard Enterprise (HPE) and Huawei KunLun deliver up to 24x more uptime and availability than the least dependable unbranded “White box” servers. Additionally, the superior uptime of the above top ranked mission critical hardware makes them up to 28x more economical and cost effective than the least stable White box servers.
High end mission critical server distributions from IBM, Lenovo, HPE and Huawei each recorded just under or approximately two (2) minutes of per server, per annum unplanned downtime due to inherent flaws in the underlying hardware or component parts (See Exhibit 1). By contrast, the least consistent hardware – unbranded White box servers – averaged 49 minutes of unplanned per server, per annum downtime due to problems or failures with the server or its components (e.g. hard drive, memory, cooling systems etc.).
Server hardware reliability directly impacts ongoing daily business transactions and productivity. There are immediate monetary costs associated with server outages of even a few minutes. The disparity in annual downtime costs between the top performing and the least reliable server hardware is eye-opening.

A single hour of downtime calculated at $100,000 equates to $1,667 per server/per minute.

Corporations that deploy the most highly reliable servers – the IBM Power Systems, Lenovo ThinkSystem, HPE Superdome and Huawei KunLun (in that order), which averaged just under or about two (2) minutes of unplanned per server downtime – could expect to lose approximately $3,000 per server annually, based on a very conservative hourly downtime cost of $100,000. By contrast, businesses that deploy the least reliable unbranded White box servers, which recorded 49 minutes of unplanned per server annual downtime due to inherent hardware instability, could potentially lose $81,683 per server annually at the same $100,000 hourly downtime cost. The superior economics of the most reliable versus the least reliable servers become even more apparent for businesses that estimate or calculate hourly downtime losses of $300,000, $500,000, $1,000,000 or higher.
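The comparison above is simple arithmetic. For readers who want to reproduce or adapt it, a minimal illustrative sketch in Python follows; the $100,000 hourly baseline and the per-server downtime minutes are taken from the figures cited in this article and are assumptions for the calculation, not new survey data.

```python
# Illustrative sketch: per-server annual downtime cost at a conservative $100,000/hour.
# All figures below are the ones cited in this article (assumptions, not new survey data).

HOURLY_DOWNTIME_COST = 100_000                 # USD per hour, conservative baseline
COST_PER_MINUTE = HOURLY_DOWNTIME_COST / 60    # ~= $1,667 per server/per minute

annual_downtime_minutes = {
    "IBM Power Systems": 1.75,
    "Lenovo ThinkSystem": 1.88,
    "HPE Superdome": 2.0,
    "Huawei KunLun": 2.0,
    "Unbranded White box": 49.0,
}

for server, minutes in annual_downtime_minutes.items():
    annual_cost = minutes * COST_PER_MINUTE
    print(f"{server}: {minutes} min/yr -> ${annual_cost:,.0f} per server per year")

# The top ranked platforms land at roughly $2,900 - $3,300 per server per year,
# i.e. "approximately $3,000"; the White box figure works out to about $81,700
# (the article's $81,683 uses the rounded $1,667-per-minute rate).
```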

Servers are the bedrock upon which the entire network infrastructure and extended network ecosystem rests. When servers fail, data access is denied. Business stops. Productivity ceases. Revenue suffers.

Some 86% of organizations now require a minimum 99.99% reliability for their firms’ server hardware, operating systems and main line-of-business applications to ensure productivity and deliver uninterrupted data access. High reliability and availability also safeguards the corporation’s daily operations, business processes and revenue stream.

IBM Z, IBM POWER, Lenovo ThinkSystem, HPE Integrity and Huawei KunLun Servers Maintain Highest Uptime Rankings

The latest ITIC 2019 Reliability Mid-Year Update survey polled over 800 corporations from July through early September. The study compared the reliability and availability of over one dozen of the most widely deployed mainstream server platforms and one dozen operating system (OS) distributions. ITIC’s latest study updated a select subset of the survey questions from its annual 2019 Global Server Hardware, Server OS Reliability poll. The poll also tracked the impact of pivotal issues like security, human error, software flaws and aging server hardware on corporate server reliability. To obtain the most accurate and unbiased results, ITIC accepted no vendor sponsorship.

Organizations conduct business 24 x 7 irrespective of time or location, 365 days a year. Corporations continue to expand their operations into the cloud and connect people, applications and devices via the Internet of Things (IoT). Applications like Analytics, AI and Business Intelligence (BI) are complex and compute intensive. They place greater demands on the server hardware. The corporate workforce is increasingly mobile. Users access data from myriad devices. Companies require fast, efficient processing and throughput. It must be secure by design, secure in use, secure in transmission and secure at rest.

To reiterate, all of the high end mission critical servers maintained their top ranked positions from ITIC’s earlier 2019 Global Server Hardware Server OS Reliability Survey published in the first calendar quarter of this year.
The IBM Z mainframe system is in a class of its own, delivering true fault tolerance – “six nines,” or 99.9999% uptime – to 89% of enterprise users. It delivered imperceptible instances of inherent server failure – 0.74 seconds per server – due to any inherent flaws in the server hardware.

Among the mainstream server distributions, IBM’s Power Systems topped the poll, registering a record low of 1.75 minutes of per server downtime, followed very closely by the Lenovo ThinkSystem servers with 1.88 minutes of per server downtime due to any flaws in the server hardware. Hewlett Packard Enterprise’s (HPE) Superdome X and Huawei’s KunLun and FusionServer x86 platforms each recorded two (2) minutes of per server downtime due to any underlying problems with the server hardware.
Each of these distributions delivered a solid “five nines,” 99.999% of inherent hardware reliability. These leading edge server platforms experienced minimal amounts of unplanned downtime due to flaws in the server hardware or any of its component parts.
ITIC’s 2019 Reliability Mid-Year Update Survey did deliver a few surprises. Cisco Systems’ Unified Computing System (UCS) servers – which are frequently deployed at the network edge – showed a marked improvement in reliability. The Cisco UCS servers reduced per server/per annum downtime by nearly 50% from the 4.1 minutes in ITIC’s prior first quarter reliability survey to 2.3 minutes in the latest poll.
ITIC’s Mid-Year Update survey for the first time also included uptime statistics for Inspur Systems, headquartered in Jinan, China, which ranks among the top five server vendors worldwide in terms of shipments. Inspur’s server offerings scored in the middle range of hardware platforms with 9.1 minutes of unplanned downtime.

Metrics of three, four and five nines of uptime – 99.9%, 99.99% and 99.999% – equate to 8.76 hours, 52.56 minutes and 5.26 minutes of per server/per annum downtime, respectively.
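Those per-annum figures follow directly from the availability percentages. A minimal illustrative calculation, assuming a non-leap year of 525,600 minutes, is shown below:

```python
# Illustrative sketch: convert availability ("nines") into annual and monthly downtime.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes in a non-leap year (assumption)

for pct in (99.9, 99.99, 99.999, 99.9999):
    downtime_min_per_year = (1 - pct / 100) * MINUTES_PER_YEAR
    downtime_min_per_month = downtime_min_per_year / 12
    print(f"{pct}% uptime -> {downtime_min_per_year:.2f} min/year "
          f"({downtime_min_per_month:.2f} min/month)")

# 99.9%    ->  525.60 min/year (8.76 hours)
# 99.99%   ->   52.56 min/year (~4.38 min/month; the 4.33 min/month figure cited
#               elsewhere in these articles divides the rounded 52-minute annual figure by 12)
# 99.999%  ->    5.26 min/year
# 99.9999% ->    0.53 min/year (~32 seconds), the "six nines" cited for the IBM Z
```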


Hourly Downtime Costs Rise: 86% of Firms Say One Hour of Downtime Costs $300,000+; 34% of Companies Say One Hour of Downtime Tops $1 Million

Hourly downtime costs continue to increase for all businesses irrespective of size or vertical market. This trend has been evident over the last five to seven years. ITIC’s latest 2019 Global Server Hardware, Server OS Reliability Survey, which polled over 1,000 businesses worldwide from November 2018 through January 2019, found that a single hour of downtime now costs 98% of firms at least $100,000. And 86% of businesses say that the cost for one hour of downtime is $300,000 or higher; this is up from 76% in 2014 and 81% of respondents in 2018 who said that their company’s hourly downtime losses topped $300,000. Additionally, ITIC’s latest 2019 study indicates that one-in-three organizations – 34% – say the cost of a single hour of downtime can reach $1 million to over $5 million. These statistics are exclusive of any litigation, fines or civil or criminal penalties that may subsequently arise due to lawsuits or regulatory non-compliance issues.

Given organizations’ near-total reliance on systems, networks and applications to conduct business 24 x 7, it’s safe to say that the cost of downtime will continue to increase for the foreseeable future.

Although large enterprises with over one thousand employees may experience the largest actual monetary losses, downtime can be equally devastating to small and mid-sized businesses that typically lack the financial resources of larger firms. A single hour of downtime that occurs during peak usage hours or even a five, 10, 20 or 30 minute outage that disrupts productivity during a critical business transaction, can deal corporations a significant monetary blow, damage their reputation and result in litigation. For SMBs that lack the financial resources of their larger enterprise counterparts, extended downtime could potentially put them out of business.

At the same time, ITIC survey data shows that an 85% majority of corporations now require a minimum of “four nines” – 99.99% – of uptime for mission critical hardware, operating systems and main line of business (LOB) applications. This is the equivalent of 52 minutes per server/per annum, or 4.33 minutes per server/per month, of unplanned downtime. This is an increase of four (4) percentage points from ITIC’s 2017 – 2018 Reliability survey.

The message is clear: in today’s Digital Age of “always on” interconnected networks, businesses demand near-flawless and uninterrupted connectivity to conduct business operations. When the connection is lost, business ceases. This is unacceptable and expensive to all parties.

High reliability, availability and strong security are all imperative in order to conduct business.


ITIC 2020 Editorial Calendar

March/April 2020: ITIC 2020 Global Server Hardware and Server OS Reliability Survey

Description: Reliability and uptime are absolutely essential. Over 80% of corporations now require a minimum of 99.99% availability or greater; and an increasing number of enterprises now demand five nines – 99.999% – or higher reliability. But which platforms actually deliver? This survey polls businesses on the reliability, uptime and management issues involving the inherent reliability of 14 different server hardware platforms and server operating systems. The survey polls corporations on the frequency, duration and reasons associated with Tier 1, Tier 2 and Tier 3 outages that occur on their core server OS and server hardware platforms. The results of this independent, non-vendor sponsored survey will provide businesses with the information they need to determine the TCO and ROI of their individual environments. The survey will also enable the server OS and server hardware vendors to see how their products rate among global users ranging from SMBs with as few as 25 people to the largest global enterprises with 100,000+ end users.

The 2020 ITIC Global Reliability Survey has also been updated and expanded to include questions on:

  • Component level failure data comparisons between IBM Power Servers and Intel-based x86 servers such as Dell, HP, Huawei, Lenovo and Cisco.
  • Percentage of component level failure data comparisons by vendor according to age (e.g. new to three months; three to six months; six months to 1 year; 1 to 2 years; 2 to 3 years; 3 to 4 years; 4 to 5 years; over five years).
  • Which component parts fail and frequency of failure
  • A percentage breakout of server parts failures for parts such as hard disk drives (HDD), processors, memory, power components, fans, or other
  • Where available, how the component failed. For example: memory multi-bit errors, HDD read failures, processor L1/L2 cache errors, etc.

 

April/May: 2020 Hourly Cost of Downtime

Description: Downtime impacts every aspect of the business. It can disrupt operations and end user productivity, result in data losses and raise the risk of litigation. Downtime can also result in lost business and irreparably damage a company’s reputation. The cost of downtime continues to increase as do the business risks. ITIC’s 2019 Hourly Cost of Downtime survey found an 85% majority of organizations now require a minimum of 99.99% availability. This is the equivalent of 52 minutes of unplanned annual downtime for mission critical systems and applications, or just 4.33 minutes of unplanned monthly outage for servers, applications and networks. This survey will once again poll corporations on how much one hour of downtime costs their business – exclusive of litigation, civil or criminal penalties. ITIC will also interview customers and vendors across 10 key vertical markets including: Banking/Finance; Education; Government; Healthcare; Manufacturing; Retail; Transportation and Utilities. The Report will focus on the toll that downtime takes on the business, its IT departments, its employees, its business partners, suppliers and its external customers. This report will also examine the remediation efforts involved in resuming full operations as well as the lingering after-effects to the corporation’s reputation as the result of an unplanned outage.

 

May/June 2020: ITIC Sexual Harassment, Gender Bias and Pay Equity Survey

Description: ITIC’s “Sexual Harassment, Gender Bias and Pay Equity Gap” independent Web survey polled 1,500 women professionals worldwide across 47 different industries, with a special emphasis on STEM disciplines. The survey focuses on three key areas of workplace discrimination: Sexual Harassment, Gender Bias and Unequal Pay.

July/August: 2020 IoT Deployment and Usage Trends Survey and Report

 

Description: The Internet of Things (IoT) has been one of the hottest emerging technologies of the last several years. This ITIC Report will present the findings of an ITIC survey that polls corporations on the business and technical challenges as well as the costs associated with IoT deployments. This IoT Report will also examine the ever-present security risks associated with interconnected environments and ecosystems. ITIC’s IoT 2020 Deployment and Usage Trends Survey will also query global businesses on a variety of crucial issues related to their current and planned Internet of Things (IoT) usage and deployments, such as how they are using IoT (e.g. on-premises versus Network Edge/Perimeter deployments), the chief benefits, and the biggest challenges and impediments to IoT upgrades. Vendors profiled for this report will include: AT&T, Bosch, Cisco, Dell, Fujitsu, General Electric (GE), Google, Hitachi, Huawei, IBM, Intel, Microsoft, Particle, PTC, Qualcomm, Samsung, SAP, Siemens and Verizon.

 August: ITIC 2020-2021 Security Trends

Description: Security, security, security! Security impacts every aspect of computing and networking operations in the Digital Age. And it’s never been more crucial as businesses, schools, government workers and consumers work from home amidst the ongoing novel coronavirus (COVID-19) pandemic and damaging security hacks impacting the lives of millions of consumers and corporations. This Report will utilize the latest ITIC independent survey data to provide an overview of the latest trends in computer security, including the latest and most dangerous hacks and what corporations can do to defend their data assets. Among the topics covered:

 

  • Security threats in the age of COVID-19
  • The most prevalent type of security hacks
  • The percentage of corporations that experienced a security hack
  • The duration of the security hack
  • The severity of the security hack
  • The cost of the security hack
  • Monetary losses experienced due to security breaches
  • Lost, damaged, destroyed or stolen data due to a security breach
  • The percentage of time that corporations spend securing their networks and data assets
  • Specific security policies and procedures companies are implementing
  • The issues that pose the biggest threats/risks to corporate security

 

August/September: ITIC 2020 Global Server Hardware Server OS Reliability Survey Mid-Year Update

Description: This Report is the Mid-year update of ITIC’s Annual Global Server Hardware, Server OS Reliability Survey. Each year ITIC conducts a second survey of selected questions from its Annual Reliability poll. ITIC also conducts new interviews with C-level executives and Network administrators to get detailed insights on the reliability of their server hardware and operating system software as well as the technical service and support they receive from their respective vendors.  ITIC will also incorporate updated PowerPoint slides and statistics to accompany the report.

 

October/November: AI, Machine Learning and Data Analytics Market Outlook

Description: This Report will examine the pivotal role that AI, Machine Learning and IoT-enabled predictive and prescriptive Analytics play in helping businesses sort through the data deluge to make informed decisions and derive real business value from their applications. AI and Machine Learning take Data Analytics to new levels. They can help businesses identify new product opportunities and also uncover hidden risks. Machine intelligence is already built into predictive and prescriptive analytics tools, speeding insights and enabling the analysis of vast probabilities to determine an optimal course of action or the best set of options. Over time, more sophisticated forms of AI will find their way into analytics systems, further improving the speed and accuracy of decision-making. Rather than querying a system and waiting for a response, the trend has been toward interactivity using visual interfaces. In the near future, voice interfaces will become more common, enabling humans to carry on interactive conversations with digital assistants while watching the analytical results on a screen. Analytics makes businesses more efficient; it enables them to cut costs and lower ongoing operational expenditures. It also helps them respond more quickly and agilely to changing market conditions – making them more competitive and thus driving top line revenue in both near term and long term strategic sales. Vendors Profiled: AppDynamics, BMC, Cisco, IBM, Microsoft, Oracle, SAP and SAS. The Report also discusses how non-traditional vendors in the carrier and networking segments (e.g. Dell/EMC, GE, Google, Verizon and Vodafone) have fully embraced AIOps and analytics via partnerships, acquisitions and Research and Development (R&D) initiatives, moving into this space and challenging the traditional market leaders. And it will provide an overview of the latest Mergers and Acquisitions (M&A) and their impact on the Analytics industry.

 December: ITIC 2021 Technology and Business Outlook

Description: This Report will be based on ITIC survey results that poll IT administrators and C-level executives on a variety of forward-looking business and technology issues for the 2021 timeframe. Topics covered will include: Security; IT staffing and budgets; application and network infrastructure upgrades; hardware and software purchasing trends and cloud computing.

Survey Methodology

 

ITIC conducts independent Web-based surveys that contain multiple choice and essay questions. In order to ensure the highest degree of accuracy, we employ authentication and tracking mechanisms to prohibit tampering with the survey results and to prohibit multiple votes by the same party. ITIC conducts surveys with corporate enterprises in North America and in over 25 countries worldwide across a wide range of vertical markets. Respondents range from SMBs with 25 to 100 workers to the largest multinational enterprises with over 100,000 employees. Each Report also includes two dozen first person customer interviews and where applicable, vendor and reseller interviews. The titles of the survey respondents include:

 

  • Network Administrators
  • VPs of IT
  • Chief Information Officers (CIOs)
  • Chief Technology Officers (CTOs)
  • Chief Executive Officers (CEOs)
  • Chief Information Security Officers (CISOs)
  • Chief Marketing Officers (CMOs)
  • Consultants
  • Application Developers
  • Database Administrators
  • Telecom Managers
  • Software Developers
  • System Administrators
  • IT Architects
  • Physical Plant Facilities Managers
  • Operations Managers
  • Technical Leads
  • Cloud Managers/Specialists
  • IoT Managers
  • Server Hardware/Virtualization Managers

 

 

ITIC welcomes input and suggestions from its vendor and enterprise clients with respect to surveys, survey questions and topics for its Editorial Calendar. If there are any particular topics or questions in a specific survey that you’d like to see covered, please let us know and we will do our best to address them.

About Information Technology Intelligence Corporation (ITIC)

 

ITIC, founded in 2002, is a research and consulting firm based in suburban Boston. It provides primary research on a wide variety of technology topics for vendors and enterprises. ITIC’s mission is to provide its clients with tactical, practical and actionable advice and to help clients make sense of the technology and business events that influence and impact their infrastructures and IT budgets. ITIC can provide your firm with accurate, objective research on a wide variety of technology topics within the network infrastructure: application software, server hardware, networking, virtualization, cloud computing, Internet of Things (IoT) and security (e.g. ransomware, cyber heists, phishing scams, botnets etc.). ITIC also addresses the business issues that impact the various technologies and influence corporate purchasing decisions. These include topics such as licensing and contract negotiation, GDPR, Intellectual Property (IP), patents, outsourcing, third party technical support and upgrade/migration planning.

 

To purchase or license ITIC Reports and Survey data contact: Fred Abbott

Email: fhabbott@valleyviewventures.com;

Valley View Ventures, Inc.

Phone: 978-254-1639

www.valleyviewventures.com


KnowBe4 Survey: 64% of Corporate Users Say Security Awareness Training Stops Hacks

A new security survey finds that nearly two-thirds of corporate users – 64% – assert that proactive Security Awareness Training helps their businesses to identify and thwart hacks immediately upon deployment. And an 86% majority of corporations say Security Awareness Training (SAT) decreased overall security risks and educated employees about the ever-present danger posed by cyber security scams.

Those are the findings of the KnowBe4 “2018 Security Awareness Training Deployment and Trends Survey.”  This annual, independent Web-based survey polled 1,100 organizations worldwide during August and September 2018. The independent study conducted by KnowBe4, a Tampa, Florida-based maker of security training and phishing tools, queried corporations on the leading security threats and challenges facing their firms as cyber security attacks increase and intensify.

ITIC partnered with KnowBe4 on this study, which also polled businesses on the initiatives they’re taking to more proactively combat the growing, diversified and targeted cyber threats. The survey found that 88% of respondents currently deploy SAT tools. The businesses report that the training plays a pivotal role in identifying and thwarting attacks, minimizing risk and positively changing the employee culture.

Among the other top survey findings:

  • Social Engineering was the top cause of attacks, cited by 77% of respondents, followed by Malware (44%), User Error (27%), a combination of the above (19%) and Password attacks (17%). (See Exhibit 1).
  • Some 84% of respondents said their businesses could quantify the decrease in successful Social Engineering attacks (e.g. Phishing scams, malware, Zero Day etc.) after deploying SAT to their end users and conducting just a few simulated exercises. This is based on 700 anecdotal responses obtained from the Essay comments and first person interviews.
  • On average, respondents reported that Social Engineering cyber hacks like Phishing scams and Malware declined significantly, from a success rate of 40% to 50% down to zero to five percent, after firms participated in several KnowBe4 SAT sessions.
  • Almost three-quarters – 71% of survey participants – indicate their businesses proactively conduct simulated Phishing attacks on a monthly, quarterly or weekly basis.
  • An overwhelming 96% of respondents affirmed that deploying SAT changed their firm’s computer security culture for the better, making everyone from C-level executives to knowledge workers more cognizant of cyber threats.

Introduction

In the 21st century Digital Age corporations can no longer practice security with 20/20 hindsight.

Complacency and ignorance regarding the security of the corporation’s data assets will almost certainly lead to disaster. Not a day goes by without a major new cyber hack reported.

Threats are everywhere. And no organization is immune.

Hackers are sophisticated, bold and home in on specific targets. The hacks themselves are more prolific, pervasive and pernicious.

The current computing landscape includes virtualization, private, public and hybrid cloud computing, Machine Learning and the Internet of Things (IoT). These technologies are designed to facilitate faster, more efficient communication and better economies of scale by interconnecting machines, devices, applications and people.

The downside: increasing inter-connectivity among devices, applications and people produces a “target rich environment.”  Simply put, there are many more vulnerabilities and potential entry points into the corporate network. IT and security administrators have many more things to manage and they can’t possibly have eyes on everything. Oftentimes, the company’s end users pose the biggest security threat by unknowingly clicking on bad links. But even so-called “trusted” sources like supposedly secure third party service providers, business partners or even internal company executives can unwittingly be the weak links that enable surreptitious entry into the corporate networks.

The ubiquitous nature and myriad types of threats further heighten security risks and significantly raise the danger that every organization – irrespective of size or vertical market – will be a target. The accelerated pace of new cyber security heists via Social Engineering (e.g. Phishing scams, malware, Password attacks, Zero Day, etc.) makes the IT Security administrator’s job extremely daunting.

Fortunately, there is help in the form of Security Awareness Training which immediately assists organizations in educating employees from the C-suite to the Mail room and transforming the corporate culture from one that is lax, to one that is alert and vigilant.

Data & Analysis

Computer and network security has all too often been practiced with 20/20 hindsight. That is, organizations have been lax in implementing and enforcing strong Computer Security Policies.

The KnowBe4 2018 Security Awareness Training Deployment and Trends Survey results indicate a majority of companies recognize the increasing danger posed by myriad pervasive and pernicious cyber threats. Businesses are also acutely aware that Security and IT managers and administrators cannot possibly have “eyes on everything,” as the size, scope and complexity of their respective infrastructures increases along with the number of interconnected people, devices, applications and systems.  Hence, companies are now proactively assuming responsibility for safeguarding their data.

SAT is a cost effective and expeditious mechanism for heightening user awareness — from the C-Suite to the average worker – of the multiple security threats facing organizations.

Among the other survey highlights:

  • Among businesses victimized by Social Engineering, some 70% of respondents cited Email as the root cause. This is mainly due to end users clicking without thinking and falling prey to a wide range of scams such as Phishing, malware and Zero Day hacks. Another 15% of respondents said they were “Unsure” which is extremely concerning.
  • An 88% majority of respondents currently employ Security Awareness Training Programs and six percent plan to install one within six months.
  • An 86% majority of firms with Security Awareness Training programs conduct simulated Phishing attacks, and the same percentage – 86% – randomize those simulated Phishing attacks.
  • Some 71% of respondents that deploy KnowBe4’s Security Awareness Training said their firms had not been hacked in the last 12 months vs. 29% that said their companies were successfully penetrated (even for a short while before being detected and removed).
  • Survey respondents apply Security Awareness Training programs in a comprehensive manner to ensure the best possible outcomes. Asked to “select all” the mechanisms they use in their SAT programs: 74% said they use Email; 71% employ videos, 43% of businesses said they use Human Trainers; 36% send out Newsletters and 27% engage in seminars/Webinars with third parties.

Overall,  the results of the Web-based survey coupled with over two dozen first person interviews conducted by KnowBe4 and ITIC found that Security Awareness Training yields positive outcomes and delivers near immediate Return on Investment (ROI). Approximately two-thirds of the respondents indicated that the training helped their companies to identify and thwart security hacks within the last six months. The participants said security awareness training helped to alert their firms to a potential vulnerability  and allowed them to block the threat. And it also enabled security and IT administrators and users to recognize rogue code and quickly remove it before it could cause damage. Another 20% of those polled claimed their firms had not experienced any hacks in the last six months.

All in all, in this day and age of heightened security and cyber threats, organizations are well advised to proactively safeguard their organizations by implementing Security Awareness Training for their administrators and end users to defend their data assets. For more information, go to: www.knowbe4.com.

 

 


IBM Bets Big on Cloud, Buys Red Hat for $34B

IBM will acquire open source software and cloud services company Red Hat in a $34B all-cash deal – approximately $190 per share – executives for both firms announced during a joint Monday morning Webcast.

Once the acquisition is complete sometime in the latter half of 2019, Red Hat will become a standalone business unit within IBM’s Hybrid Cloud Team, both companies said in a joint press release. This will preserve the “independence and neutrality” of Red Hat’s open source development heritage and commitment, current product portfolio and go-to-market strategy, and unique development culture. Red Hat will continue to be led by current CEO and president Jim Whitehurst and its current management team. Whitehurst will join IBM’s senior management team and report to IBM chairman, president and chief executive Virginia “Ginni” Rometty. IBM executives said during the Webcast that IBM intends to maintain Red Hat’s current Research Triangle Park, N.C. headquarters, facilities, brands and practices.

Rometty heralded the Red Hat acquisition as a “game changer” and said it’s all about “resetting the cloud landscape.” IBM’s $34B purchase of Red Hat will be the biggest acquisition in the company’s 107-year history and the price tag equals one-third of IBM’s $105.38B total market valuation.

Rometty clearly feels Red Hat is worth the investment. On Monday’s Webcast she stated that the deal will make “IBM and Red Hat the undisputed Number One leader in hybrid cloud. Our IBM cloud platform is growing like crazy,” Rometty said, adding that “Hybrid cloud is an emerging $1 trillion market.”

The acquisition has been approved by the boards of directors of both IBM and Red Hat. It is subject to Red Hat shareholder approval. It also is subject to regulatory approvals and other customary closing conditions. Meanwhile, IBM intends to suspend its share repurchase program in 2020 and 2021. At signing, IBM has ample cash, credit and bridge lines to secure the transaction financing. The company intends to close the transaction through a combination of cash and debt.

During the Webcast, Rometty made the case for growth in the hybrid cloud market segment claiming that “most companies today are only 20 percent along” their cloud journey, renting compute power to cut costs. The next 80 percent is about unlocking real business value and driving growth. “This is the next chapter of the cloud. It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales,” Rometty said.

Red Hat’s Whitehurst was equally enthusiastic about the forthcoming IBM acquisition. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation.”

Throughout the webcast, IBM Senior Vice President of Hybrid Cloud Arvind Krishna and Red Hat Executive Vice President and President of Products and Technologies Paul Cormier emphasized that it would be business as usual with both IBM and Red Hat continuing to honor existing business commitments and partnerships with other firms.

The executives said all of Red Hat’s existing partnerships with other cloud providers – including those with major providers such as Amazon Web Services, Microsoft Azure, Google Cloud and Alibaba, in addition to the IBM Cloud – will remain in place. At the same time, Red Hat will benefit from IBM’s hybrid cloud and enterprise IT scale in helping expand its open source technology portfolio to businesses globally. Red Hat will also continue its open source development projects such as Red Hat Enterprise Linux (RHEL), the OpenShift implementation of Kubernetes-based containers, and the OpenStack cloud computing platform. Similarly, Krishna said, IBM would continue its partnerships with other Linux distributions.

“IBM is committed to being an authentic multi-cloud provider, and we will prioritize the use of Red Hat technology across multiple clouds,” said Arvind Krishna, Senior Vice President, IBM Hybrid Cloud. “In doing so, IBM will support open source technology wherever it runs, allowing it to scale significantly within commercial settings around the world.”

Analysis

The synergies between IBM and Red Hat are obvious.

The appeal that Red Hat holds for IBM, and vice versa, is readily apparent.

The two firms are starting from a strong, solid foundation. They’ve been doing business for over two decades. In recent years, Red Hat has expanded its Red Hat Enterprise Linux (RHEL) operating system distribution and services to run on IBM’s POWER servers and z System mainframes. It’s an alliance that has served both firms well.

“Red Hat is not an open source company. We’re an enterprise software company with an open source development model. Our secret sauce is putting those two things together,” Red Hat’s Cormier noted on Monday’s Webcast. “IBM,” he added, “also has a long history of enterprise-grade software and open source development. So, the two companies have a lot in common.”

IBM now wants to capitalize on that commonality in a very big way. It’s no secret that Big Blue’s cloud growth has lagged behind behemoths like Amazon, Google and Microsoft. A 2018 State of the Cloud Report by Rightscale, a cloud management firm, which surveyed 1,000 users, rated IBM as a number four cloud service provider behind Amazon, Microsoft and Google. The Rightscale study also showed that IBM cloud deployment was occurring at a slower pace than the other three market leaders. The Red Hat purchase could serve to accelerate IBM’s cloud deployments and close the gap between IBM, Amazon, Microsoft and Google.

Red Hat helps IBM to grow its cloud business on all fronts – private, public and hybrid clouds – since Red Hat built its model on open source and open standards and a very active open source developer community. This stands in stark contrast to the proprietary offerings of Microsoft, Amazon, Google, Oracle and other players. Both IBM and Red Hat can leverage their core strengths in Linux, Kubernetes, cloud management and service and support. Additionally, Red Hat will have access to IBM’s strong, deep ties to the channel, which should enable it to close enterprise deals worldwide and give Red Hat’s product portfolio much greater exposure.

Another plus is IBM’s proven track record with open source. IBM has made numerous royalty-free patent contributions to the Open Invention Network to support development of the Linux platform as well as contributions to Java and the Eclipse development platform, so all of this should stand it in good stead as it moves to embrace and expand its hybrid cloud initiatives.

IBM and Red Hat By the Numbers: Betting Big on the Cloud

The biggest question from investors and analysts following the merger announcement is whether Red Hat – a company with approximately one-fourth of IBM’s valuation – is worth the $34B purchase price.

Based on IBM’s perspective of gaining a competitive cloud advantage, the answer is a resounding “Yes.”

Consider that just 18 months ago, Red Hat CEO Whitehurst revealed in a quarterly analyst call that the firm’s biggest deal, worth over $20M, came from Linux. But in the last year Red Hat’s top two dozen deals totaling $5M or more were attributable to its OpenShift offering. The OpenShift Container Platform (formerly known as OpenShift Enterprise) is Red Hat’s on-premises private platform as a service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux. IBM hopes that the combination of its own and Red Hat’s open source cloud offerings and services, sold through its worldwide channel, will enable it to expand its presence among enterprises seeking to move their datacenters to the cloud.

Ironically, in the immediate aftermath of the announcement IBM’s stock price declined by 3.54 percent to $115.40 at Tuesday’s market close, while Red Hat’s stock rose slightly to $170. Now, a week later, IBM’s stock price has rebounded to $120, but it is still trading well below its 52-week high of $171. Red Hat’s stock, meanwhile, continues to climb and gained another three dollars, closing at $173.31 after the bell on November 5.

As Exhibits 1 and 2 below illustrate, IBM’s and Red Hat’s financials each face challenges going forward – specifically in terms of jump starting quarterly revenue and income growth. IBM is also facing pressure to increase its stock price, which is now trading near its 52-week low of $114.

 

Exhibit 1. IBM by the Numbers

IBM Financials, R&D Spending and Patents 2017 – 2018
Current Stock Price (as of 11/5/2018): $120.06 (US)
Market Capitalization: $109.11 Billion
Profit Margin: 7.12%
Operating Margin: 15.14%
Return on Assets: 6.24%
Return on Equity: 28.82%
Revenue: $80.37B
Quarterly Revenue Growth: -2.10%
Net Income: $5.72B
Quarterly Earnings Growth: -1.20%
Total Cash: $14.49B
Total Debt: $46.92B
Total Global Workforce: 380,300
Research & Development Spending: $5.6B
Number of Patents: 9,043 patents awarded in 2017, nearly half in AI, cloud, blockchain, quantum & security; nearly 780,000 total patents

Source: ITIC

Exhibit 2. Red Hat by the Numbers

Red Hat Financials, R&D Spending and Patents 2017 – 2018
Current Stock Price (as of 11/5/2018): $173.31 (US)
Market Capitalization: $30.6 Billion
Profit Margin: 9.08%
Operating Margin: 15.73%
Return on Assets: 6.57%
Return on Equity: 21.30%
Revenue: $3.16B
Quarterly Revenue Growth: 13.70%
Net Income: $286.44M
Quarterly Earnings Growth: -10.50%
Total Cash: $1.77B
Total Debt: $516.53M
Total Global Workforce: 12,600
Research & Development Spending: $578.33M
Number of Patents: >2,000 since 2002; Red Hat does not enforce its patents against properly licensed open source software

Source: ITIC 

Skepticism: Will Other Suitors Emerge?

As with any merger or acquisition, there’s always the potential that a deal will get called off or that other suitors will emerge.

Several Wall Street analysts suggested that high technology rivals might decide to play the role of spoiler and top IBM’s bid of $190 per share for Red Hat. Some of the names being mentioned as possible suitors were: Cisco Systems, Inc., Google and Oracle Corp.

On Monday, Cowen analyst Gregg Moskowitz was one of those Wall Street analysts who opined that other bidders may crop up. “The substantial premium that IBM is paying for Red Hat might on the surface seem to make it highly unlikely that a superior bid could occur,” Moskowitz said. “However, we believe there is a reasonable possibility that another suitor could emerge.” Moskowitz said if a breakup fee was not overly onerous, Cisco might be a likely contender to lure Red Hat away.

Brad Reback, a Senior Equity Research Analyst at Stifel Nicolaus & Company, Inc. said in a research note that he would “not be surprised if hyperscale cloud vendors like Google, Amazon, Microsoft, or Oracle make a competing bid given Red Hat’s strategic position within on-premises datacenters (over 100K customers).”

Microsoft, however, might be a longshot since it recently announced its own open source initiative with its $7.5B acquisition of GitHub.

Michael Turits, Managing Director, Equity Research Infrastructure at Raymond James & Associates, says a bidding war may occur in the near future and that IBM’s bid for Red Hat could set off a buying frenzy for software firms.

Turits said a stronger IBM cloud portfolio poses a threat to several of its rivals, including Microsoft and Oracle.

Conclusion

IBM has made a bold move to strengthen its position in hybrid clouds and close the gap between itself and Amazon, Google and Microsoft. Purchasing Red Hat also brings IBM more closely back to its core strengths in software, open source and services. The Red Hat Linux distribution should also serve to further solidify IBM’s already strong POWER and z Systems server hardware offerings.

What is not clear is how the merged entity will treat or de-emphasize its relationships/partnerships with other cloud vendors once the Red Hat acquisition is complete. Regardless of what IBM and Red Hat say now, changes are bound to occur in those relationships.

The more immediate issue is whether or not any other firms will decide to up the ante and start a bidding war for Red Hat. That could make things very interesting. For right now though, IBM has served notice that it will put its money and its marketing muscle behind its cloud ambitions.


ITIC Poll: Human Error and Security are Top Issues Negatively Impacting Reliability

Multiple issues contribute to the high reliability ratings among the various server hardware distributions.  ITIC’s 2018 Global Server Hardware, Server OS Reliability Mid-Year Update reveals that three issues in particular stand out as positively or negatively impacting reliability. They are: Human Error, Security and increased workloads.

ITIC’s 2018 Global Server Hardware, Server OS Reliability Mid Year Update polled over 800 customers worldwide from April through mid-July 2018. In order to obtain the most objective and unbiased results, ITIC accepted no vendor sponsorship for the Web-based survey.

Human Error and Security Are Biggest Reliability Threats

ITIC’s latest 2018 Reliability Mid Year Update poll also chronicled the strain that external issues place on organizations and their IT departments to ensure that servers and operating systems deliver a high degree of reliability and availability. As Exhibit 1 illustrates, human error and security (both internal and external hacks) continue to rank as the chief culprits causing unplanned downtime among servers, operating systems and applications for the fourth straight year. After that, there is a drop off of 22 to 30 percentage points for the remaining issues ranked in the top five downtime causes. Both human error and security have had the dubious distinction of being the top two factors precipitating unplanned downtime in the past five ITIC reliability polls.

Analysis

Reliability is a two-way street in which server hardware, OS and application vendors as well as corporate users both bear responsibility for the reliability of their systems and networks.

On the vendor side, there are obvious reasons why mission critical servers from hardware makers like HPE, IBM and Lenovo consistently earn top reliability ratings. As ITIC noted in Part 1 of its reliability survey findings, the reliability gap between high end systems and inexpensive, commodity servers with basic features continues to grow. The reasons include:

  • Research and Development (R&D). Vendors like Cisco, HPE, Huawei, IBM and Lenovo have made an ongoing commitment to research and development and continually refresh/update their solutions.
  • RAS 2.0. The higher end servers incorporate the latest Reliability, Availability and Serviceability (RAS) 2.0 features/functions and are fine-tuned for manageability and security.
  • Price is not the top consideration. Businesses that purchase higher end mission critical and x86 systems like Fujitsu’s Primergy, HPE’s Integrity, Huawei’s KunLun, IBM Z and Power Systems and Lenovo System x want a best in class product offering, first and foremost. These corporations, in verticals like banking/finance, government, healthcare, manufacturing, retail and utilities, are motivated more by the vendor’s historical ability to act as a true, responsive “partner” delivering highly robust, leading edge hardware. They also want top-notch aftermarket technical service and support, quick response to problems and fast, efficient access to patches and fixes.
  • More experienced IT Managers. In general, IT managers, application developers, systems engineers and security professionals at corporations that purchase higher end servers from IBM, HPE, Lenovo and Huawei tend to have more experience. The survey found that organizations that buy mission critical servers have IT and technical staff with approximately 12 to 13 years of experience. By contrast, the average experience among IT managers and systems engineers at companies that purchase less expensive commodity based servers is about six years.

Highly experienced IT managers are more likely to spot problems before they become major issues that lead to downtime. In the event of an outage, they are also more likely to perform faster remediation, accelerating the time it takes to identify the problem and get the servers and applications up and running faster than their less experienced peers.

In an era of increasingly connected servers, systems, applications, networks and people, there are myriad factors that can potentially undercut reliability; they are:

  • Human Error and Security. To reiterate, these two factors constitute the top threats to reliability. ITIC does not anticipate this changing in the foreseeable future. Some 59% of respondents cited Human Error as their number one issue, followed by 51% that said Security problems caused downtime. And nearly two-thirds – 62% – of businesses indicated that their Security and IT administrators grapple with a near constant deluge of ever more pervasive and pernicious security threats. If the availability, reliability and accessibility of servers, operating systems and mission critical main LOB applications are compromised or denied, end user productivity and business operations suffer immediate consequences.
  • Heavier, more data intensive workloads. The latest ITIC survey data finds that workloads have increased by 14% to 39% over the past 18 months.
  • A 60% majority of respondents say increased workloads negatively impact reliability, up 15 percentage points since 2017. Of that 60%, approximately 80% of firms experiencing reliability declines run commodity servers: e.g., White box; older Dell, HPE ProLiant and Oracle hardware more than 3 ½ years old that hasn’t been retrofitted/upgraded.
  • Provisioning complex new applications that must integrate and interoperate with legacy systems and applications. Some 40% of survey respondents rate application deployment and provisioning as among their biggest challenges and one that can negatively impact reliability.
  • IT Departments Spending More Time Applying Patches. Some 54% of those polled indicated they are spending upwards of one hour to over four hours applying patches – especially security patches. Users said the security patches are large, time consuming and often complex, necessitating that they test and apply them manually. The percentage of firms automatically applying patches commensurately decreased from 30% in 2016 to just 9% in the latest 2018 poll. Overall, the latest ITIC survey shows that as of July 2018 companies are applying 27% more patches now than any time since 2015.
  • Deploying new technologies like Artificial Intelligence (AI), Big Data Analytics which require special expertise by IT managers and application developers as well as a high degree of compatibility and interoperability.
  • A rise in Internet of Things (IoT) and edge computing deployments which in turn, increase the number of connections that organizations and their IT departments must oversee and manage.
  • Seven-in-10, or 71%, of survey respondents said aged hardware (3 ½+ years old) had a negative impact on server uptime and reliability, compared with just 16% that said their older servers had not experienced any declines in reliability or availability. This is an increase of five percentage points from the 66% of those polled who responded positively to that survey question in the ITIC 2017 Reliability Survey, and a 27 percentage point increase from the 44% who said outmoded hardware negatively impacted uptime in the ITIC 2014 Reliability poll.

Corporations’ Minimum Reliability Requirements Rise

At the same time, corporations now require higher levels of reliability than they did even two or three years ago. The reliability and continuous operation of the core infrastructure and its component parts – server hardware, server operating system software, applications and other devices (e.g. firewalls, unified communications devices and uninterruptible power supplies) – are more crucial than ever to the organization’s bottom line.

It is clear that corporations – from the smallest companies with fewer than 25 people to the largest multinational concerns with over one hundred thousand employees – are more risk averse and concerned about the potential for lawsuits and the damage to their reputation in the wake of an outage. ITIC’s survey data indicates that an 84% majority of organizations now require a minimum of “four nines” – 99.99% – reliability and uptime.

This is the equivalent of 52 minutes of unplanned annual downtime for mission critical systems and applications, or just 4.33 minutes of unplanned monthly outage for servers, applications and networks.

Conclusions

The vendors are one-half of the equation. Corporate users also bear responsibility for the reliability of their servers and applications based on configuration, utilization, provisioning, management and security.

To minimize downtime and increase system and network availability it is imperative that corporations work with vendor partners to ensure that reliability and uptime are inherent features of all their servers, network connectivity devices, applications and mobile devices. This requires careful tactical and strategic planning to construct a solid strategy.

Human error and security are and will continue to pose the greatest threats to the underlying reliability and stability of server hardware, operating systems and applications. A key element of every firm’s reliability strategy and initiative is to obtain the necessary training and certification for IT managers, engineers and security professionals. Companies should also have their security professionals take security awareness training. Engaging the services of third party vendors to conduct security vulnerability testing to identify and eliminate potential vulnerabilities is also highly recommended.  Corporations must also deploy the appropriate Auditing, BI and network monitoring tools. Every 21st Century network environment needs continuous, comprehensive end-to-end monitoring for their complex, distributed applications in physical, virtual and cloud environments.

Ask yourself: “How much reliability does the infrastructure require and how much risk can the company safely tolerate?”


ITIC 2018 Server Reliability Mid-Year Update: IBM Z, IBM Power, Lenovo System x, HPE Integrity Superdome & Huawei KunLun Deliver Highest Uptime

August 8, 2018

For the tenth straight year, IBM and Lenovo servers again achieved top rankings in ITIC’s 2017 – 2018 Global Server Hardware and Server OS Reliability survey.

IBM’s Z Systems Enterprise server is in a class of its own. The IBM mainframe continues to exhibit peerless reliability besting all competitors. The Z recorded less than 10 seconds of unplanned per server downtime each month. Additionally less than one-half of one percent of all IBM Z customers reported unplanned outages that totaled greater than four (4) hours of system downtime in a single year.

Among mainstream servers, IBM Power Systems 7 and 8 and the Lenovo x86 X6 mission critical hardware consistently deliver the highest levels of reliability/uptime among 14 server hardware and 11 different mainstream server hardware virtualization platforms. Each platform averaged just 2.1 minutes of unplanned per annum/per server downtime (See Exhibit 1).

That makes the IBM Power Systems and Lenovo x86 servers approximately 17 to 18 times more stable and available than the least reliable distributions – the rival Oracle and HPE ProLiant servers.

Additionally, the latest ITIC survey results indicate just one percent of IBM Power Systems and Lenovo System x servers experienced over four (4) hours of unplanned annual downtime. This is the best showing among the 14 different server platforms surveyed.

ITIC’s 10th annual independent ITIC 2017 – 2018 Global Server Hardware and Server OS Reliability survey polled 800 organizations worldwide from August through December 2017.  In order to obtain the most accurate and unbiased results, ITIC accepted no vendor sponsorship. …


Hourly Downtime Tops $300K for 81% of Firms; 33% of Enterprises Say Downtime Costs >$1M

The cost of downtime continues to increase as do the business risks. An 81% majority of organizations now require a minimum of 99.99% availability. This is the equivalent of 52 minutes of unplanned annual downtime for mission critical systems and applications, or just 4.33 minutes of unplanned monthly outage for servers, applications and networks.

Over 98% of large enterprises with more than 1,000 employees say that on average, a single hour of downtime per year costs their company over $100,000, while 81% of organizations report that the cost exceeds $300,000. Even more significantly: three in 10 enterprises – 33% – indicate that hourly downtime costs their firms $1 million or more (See Exhibit 1). It’s important to note that these statistics represent the “average” hourly cost of downtime. In a worst case scenario – if any device or application becomes unavailable for any reason – the monetary losses to the organization can reach millions of dollars per minute. Devices, applications and networks can become unavailable for myriad reasons. These include: natural and man-made catastrophes; faulty hardware; bugs in the application; security flaws or hacks and human error. Business-related issues, such as a Regulatory Compliance related inspection or litigation, can also force the organization to shutter its operations. Whatever the reason, when the network and its systems are unavailable, productivity grinds to a halt and business ceases.

Highly regulated vertical industries like Banking and Finance, Food, Government, Healthcare, Hospitality, Hotels, Manufacturing, Media and Communications, Retail, Transportation and Utilities must also factor in the potential losses related to litigation as well as civil penalties stemming from organizations’ failure to meet Service Level Agreements (SLAs) or Compliance Regulations. Moreover, for a select three percent of organizations, whose businesses are based on high level data transactions, like banks and stock exchanges, online retail sales or even utility firms, losses may be calculated in millions of dollars per minute. …


Cost of Hourly Downtime Soars: 81% of Enterprises Say it Exceeds $300K On Average

The only good downtime is no downtime.

ITIC’s latest survey data finds that 98% of organizations say a single hour of downtime costs over $100,000; 81% of respondents indicated that 60 minutes of downtime costs their business over $300,000. And a record one-third or 33% of enterprises report that one hour of downtime costs their firms $1 million to over $5 million.

For the fourth straight year, ITIC’s independent survey data indicates that the cost of hourly downtime has increased. The average cost of a single hour of unplanned downtime has risen by 25% to 30% since 2008, when ITIC first began tracking these figures.

In ITIC’s 2013 – 2014 survey, just three years ago, 95% of respondents indicated that a single hour of downtime cost their company $100,000. However, just over 50% said the cost exceeded $300,000, and only one in 10 enterprises reported that hourly downtime cost their firms $1 million or more. In ITIC’s latest poll, three-in-10 businesses – 33% of survey respondents – said that hourly downtime costs top $1 million or even $5 million.

Keep in mind that these are “average” hourly downtime costs. In certain use case scenarios – such as the financial services industry or stock transactions – the downtime costs can conceivably exceed millions of dollars per minute. Additionally, an outage that occurs during peak usage hours may also cost the business more than the average figures cited here. …


IBM z13s Delivers Power, Performance, Fault Tolerant Reliability and Security for Hybrid Clouds

Security. Reliability. Performance. Analytics. Services.

These are the most crucial considerations for corporate enterprises in choosing a hardware platform. The underlying server hardware functions as the foundational element for the business’ entire infrastructure and interconnected environment. Today’s 21st century Digital Age networks are characterized by increasingly demand-intensive workloads; the need to use Big Data analytics to analyze and interpret the massive volumes and variety of data to make proactive decisions and keep the business competitive. Security is a top priority. It’s absolutely essential to safeguard sensitive data and Intellectual Property (IP) from sophisticated, organized external hackers and defend against threats posed by internal employees.

The latest IBM z13s enterprise server delivers embedded security, state-of-the-art analytics and unparalleled reliability, performance and throughput. It is fine tuned for hybrid cloud environments. And it’s especially useful as a secure foundational element in Internet of Things (IoT) deployments. The newly announced z13s is highly robust: it supports the most compute-intensive workloads in hybrid cloud and on-premises environments. The newest member of the z Systems family, the z13s incorporates advanced, embedded cryptography features in the hardware that allow it to encrypt and decrypt data twice as fast as previous generations, with no reduction in transactional throughput, owing to the updated cryptographic coprocessor for every chip core and tamper-resistant, hardware-accelerated cryptographic coprocessor cards. …

