Security

De-mystifying Cloud Computing: the Pros and Cons of Cloud Services

 

Cloud computing has been a part of the corporate and consumer lexicon for the past 15 years. Despite this, many organizations and their users are still fuzzy on the finer points of cloud usage and terminology.

De-mystifying the cloud

So what exactly is a cloud computing environment?

The simplest and most straightforward definition is that a cloud is a grid- or utility-style, pay-as-you-go computing model that uses the web to deliver applications and services in real time.

Organizations can opt to deploy a private cloud infrastructure in which they host their services on-premises, behind the safety of the corporate firewall. The advantage here is that the IT department always knows what’s going on with all aspects of the corporate data, from bandwidth and CPU utilization to all-important security issues.

Alternatively, organizations can choose a public cloud deployment in which a third-party vendor such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud, IBM Cloud or Oracle Cloud hosts the services at a remote, off-premises location. This scenario saves businesses money and manpower hours by utilizing the host provider’s equipment and management. All that’s needed is a web browser and a high-speed internet connection to connect to the host and access applications, services and data.

However, the public cloud is also a shared model in which corporate customers share bandwidth and space on the host’s servers. Enterprises that prioritize privacy, require near-impenetrable security, or need greater data control and oversight typically opt for a private cloud instead, with hosted services delivered to end users from behind an internal corporate firewall. A private cloud, however, is more than just a hosted services model that sits behind a firewall. Any discussion of private and/or public cloud infrastructure must also include virtualization. While most virtualized desktop, server, storage and network environments are not yet part of a cloud infrastructure, just about every private and public cloud will feature a virtualized environment.

Organizations contemplating a private cloud also need to ensure very high, near-fault-tolerant availability: at least “five nines” or “six nines” (99.999% or 99.9999%) uptime, or even true fault-tolerant “seven nines” (99.99999%), to ensure uninterrupted operations.
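
To make those availability tiers concrete, the annual downtime each "nines" level permits can be worked out with simple arithmetic. The sketch below is an illustration of that calculation, not a statement of any vendor's SLA terms:

```python
# Annual downtime implied by common availability ("nines") targets.
# Simple arithmetic illustration; actual SLA definitions vary by vendor.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

def annual_downtime_minutes(availability_pct: float) -> float:
    """Return the minutes of downtime per year allowed by an uptime percentage."""
    return HOURS_PER_YEAR * 60 * (1 - availability_pct / 100)

for label, pct in [("five nines", 99.999),
                   ("six nines", 99.9999),
                   ("seven nines", 99.99999)]:
    print(f"{label} ({pct}%): {annual_downtime_minutes(pct):.2f} minutes/year")
```

Five nines works out to roughly 5.26 minutes of downtime per year; six nines to about 32 seconds; seven nines to about 3 seconds, which is why "seven nines" is effectively fault tolerance.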

Private clouds should also be able to scale dynamically to accommodate the needs and demands of the users. And unlike most existing, traditional datacenters, the private cloud model should also incorporate a high degree of user-based resource provisioning. Ideally, the IT department should also be able to track resource usage in the private cloud by user, department or groups of users working on specific projects for chargeback purposes. Private clouds will also make extensive use of AI, analytics, business intelligence and business process automation to guarantee that resources are available to the users on demand.
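
The chargeback idea described above can be sketched in a few lines. The usage records, field names and per-unit rates below are illustrative assumptions, not a real metering API:

```python
# Hypothetical chargeback sketch: aggregate metered resource usage by department.
# The usage records and per-unit rates are illustrative assumptions only.
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "storage_gb": 0.02}  # assumed internal rates (USD)

usage_records = [
    {"dept": "finance", "cpu_hours": 120, "storage_gb": 500},
    {"dept": "marketing", "cpu_hours": 40, "storage_gb": 200},
    {"dept": "finance", "cpu_hours": 60, "storage_gb": 100},
]

def chargeback(records):
    """Sum metered usage into a per-department dollar charge."""
    totals = defaultdict(float)
    for rec in records:
        for resource, rate in RATES.items():
            totals[rec["dept"]] += rec.get(resource, 0) * rate
    return dict(totals)

print(chargeback(usage_records))
```

A production chargeback system would pull these records from the cloud platform's metering service; the point here is simply that per-user or per-department attribution is an aggregation problem once usage is metered.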

All but the most cash-rich organizations (and there are very few of those) will almost certainly have to upgrade their network infrastructure in advance of migrating to a private cloud environment. Organizations considering outsourcing any of their datacenter needs to a public cloud will also have to perform due diligence to determine the bona fides of their potential cloud service providers.

In 2022 and beyond, the hybrid cloud is the most popular model, chosen by over 75% of corporate enterprises. The hybrid cloud theoretically gives businesses the best of both worlds: some services and applications are hosted on a public cloud, while other crucial business applications and services run in a private or on-premises cloud behind a firewall.

Types of Cloud Computing Services

There are several types of cloud computing models. They include:

  • Software as a Service (SaaS), which uses the Internet to deliver software applications to customers. Examples include Salesforce.com, which offers one of the earliest, most popular and most widely deployed cloud-based CRM applications, and Google Apps, which is among the market leaders. Google Apps comes in three editions (Standard, Education and Premier; the first two are free) and provides consumers and corporations with customizable versions of the company’s applications like Google Mail, Google Docs and Calendar.
  • Platform as a Service (PaaS) offerings; examples include the above-mentioned Amazon Web Services and Microsoft’s top-tier Azure Platform. The Microsoft Azure offering contains all the elements of a traditional application stack, from the operating system up to the applications and the development framework. It includes the Windows Azure Platform AppFabric (formerly .NET Services for Azure) as well as the SQL Azure Database service. Customers that build applications for Azure host them in the cloud. However, it is not a multi-tenant architecture meant to host your entire infrastructure. With Azure, businesses rent resources that reside in Microsoft datacenters, with costs based on a per-usage model. This gives customers the flexibility to rent fewer or more resources depending on their business needs.
  • Infrastructure as a Service (IaaS) is exactly what its name implies: the entire infrastructure becomes a multi-tiered hosted cloud model and delivery mechanism. Public, private and hybrid should all be flexible and agile. The resources should be available on demand and should be able to scale up or scale back as business needs dictate.
  • Serverless: This is a more recent technology innovation, and it can be a bit confusing to the uninitiated. Serverless is a cloud-native development model that enables developers to build and run applications without having to manage servers. The developers do not manage, provision or maintain servers when deploying code; the actual code execution is fully managed by the cloud provider, in contrast to the traditional method of writing applications and then deploying them on servers. To be clear, there are still servers in a serverless model, but they are abstracted away from application development.
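
To make the serverless model concrete, the sketch below mimics the general shape of a function-as-a-service handler, loosely modeled on the common `event`/`context` convention popularized by AWS Lambda. The event fields here are hypothetical; each provider defines its own event schema:

```python
# Minimal function-as-a-service handler sketch, loosely modeled on the
# event/context convention used by providers such as AWS Lambda.
# The event fields below are hypothetical examples.

def handler(event, context=None):
    """Business logic only -- no server provisioning or management code."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the handler is just a function. In a serverless deployment the
# provider invokes it on demand and scales instances automatically.
print(handler({"name": "cloud"}))
```

The developer ships only the function; the provider owns everything below it, which is exactly the abstraction the bullet above describes.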

Cloud computing—pros and cons

Cloud computing, like any technology, is not a panacea. It offers both potential benefits and possible pitfalls. Before beginning any infrastructure upgrade or migration, organizations are well advised to gather all interested parties and stakeholders and construct a business plan that best suits the organization’s needs and budget. When it comes to the cloud, there are no absolutes. Many organizations will have hybrid clouds that include public and private cloud networks. Additionally, many businesses may have multiple cloud hosting providers present in their networks. Whatever your firm’s specific implementation, it’s crucial to create a realistic set of goals, a budget and a deployment timetable.

Prior to beginning any technology migration, organizations should first perform a thorough inventory and review of their existing legacy infrastructure and make the necessary upgrades, revisions and modifications. All stakeholders within the enterprise should identify the company’s current tactical business goals and map out a two-to-five-year cloud infrastructure and services business plan, incorporating an annual operational and capital expenditure budget. The migration timetable should include server hardware, server OS and software application interoperability and security vulnerability testing; performance and capacity evaluation; and final provisioning and deployment.

Public clouds—advantages and disadvantages

The biggest allure of a public cloud infrastructure over traditional premises-based network infrastructures is the ability to offload tedious and time-consuming management chores to a third party. This in turn can help businesses:

  • Shave precious capital expenditure monies by avoiding expensive investment in new equipment, including hardware, software and applications, as well as the attendant configuration planning and provisioning that accompanies any new technology rollout.
  • Accelerate the deployment timetable. Having an experienced third-party cloud services provider do the work speeds deployment and most likely means less time spent on trial and error.
  • Construct a flexible, scalable cloud infrastructure tailored to their business needs. A company that has performed its due diligence and is working with an experienced cloud provider can architect a cloud infrastructure that scales up or down according to the organization’s business and technical needs and budget.

 

Public Cloud Downsides

Shared Tenancy: The potential downside of a public cloud is that the business is essentially “renting” or sharing common virtualized servers and infrastructure with other customers, much like being a tenant in a large apartment building. Depending on the resources of the particular cloud model, there is potential for performance, latency and security issues, as well as problems with response times and with service and support from the cloud provider.

Risk: Risk is another potential pitfall associated with outsourcing any of your firm’s resources and services to a third party. To mitigate risk and lower it to an acceptable level, it’s essential that organizations choose a reputable, experienced third party cloud services provider very carefully. Ask for customer references. Cloud services providers must work closely and transparently with the corporation to build a cloud infrastructure that best suits the business’ budget, technology and business goals. To ensure that the expectations of both parties are met, organizations should create a checklist of items and issues that are of crucial importance to their business and incorporate them into service level agreements (SLAs). Be as specific as possible. These should include but are not limited to:

  • What types of equipment do they use?
  • How old is the server hardware? Is the configuration powerful enough?
  • How often is the data center equipment/infrastructure upgraded?
  • How much bandwidth does the provider have?
  • Does the service provider use open standards or is it a proprietary datacenter?
  • How many customers will you be sharing data/resources with?
  • Where is the cloud services provider’s datacenter physically located?
  • What specific guarantees, if any, will it provide for securing sensitive data?
  • What level of guaranteed response time will it provide for service and support?
  • What is the minimum acceptable latency/response time for its cloud services?
  • Will it provide multiple access points to and from the cloud infrastructure?
  • What specific provisions will apply to Service Level Agreements (SLAs)?
  • How will financial remuneration for SLA violations be determined?
  • What are the capacity ceilings for the service infrastructure?
  • What provisions will there be for service failures and disruptions?
  • How are upgrade and maintenance provisions defined?
  • What are the costs over the term of the contract agreement?
  • How much will the costs rise over the term of the contract?
  • Does the cloud service provider use Transport Layer Security (TLS, the successor to SSL) and state-of-the-art AES encryption to transmit data?
  • Does the cloud services provider encrypt data at rest to restrict and prohibit unauthorized access?
  • How often does the cloud services provider perform audits?
  • What mechanisms will it use to quickly shut down a hack, and can it track a hacker?
  • If your cloud services provider is located outside your country of origin, what are the privacy and security rules of that country and what impact will that have on your firm’s privacy and security issues?

Finally, the corporation should appoint a liaison who meets regularly with a designated counterpart at the cloud services provider. While a public cloud does provide managed hosting services, that does not mean the company should forget about its data assets as though they really did reside in an amorphous cloud. Regular meetings between the company and its cloud services provider will ensure that the company attains its immediate goals and remains aware of, and working toward, future technology and business goals. They will also help the corporation understand usage and capacity issues and ensure that its cloud services provider(s) meet SLAs. Outsourcing any part of your infrastructure to a public cloud does not mean forgetting and abandoning it.

Private clouds—advantages and disadvantages

The biggest advantage of a private cloud infrastructure is that your organization retains control of its corporate assets and can safeguard and preserve its privacy and security. Your organization is in command of its own destiny. That can be a double-edged sword.

Before committing to build a private cloud, the organization must do a thorough assessment of its current infrastructure, its budget, and the expertise and preparedness of its IT department. Is your firm ready to assume responsibility for such a large burden from both a technical and an ongoing operational standpoint? Only you can answer that. Remember that the private cloud should be highly reliable and highly available: at least 99.999% uptime, with built-in redundancy and failover capabilities. Many organizations struggle to attain and maintain even 99.99% uptime; falling to 99.9% reliability is the equivalent of 8.76 hours of downtime per server, per annum. When your private cloud is down for any length of time, your employees, business partners, customers and suppliers will be unable to access resources.

Private Cloud Downsides

The biggest potential upside of a private cloud is also potentially its biggest disadvantage: the onus falls entirely on the corporation to achieve the company’s performance, reliability and security goals. To do so, the organization must ensure that its IT administrators and security professionals are up to date on training and certification. To ensure optimal performance, the company must regularly upgrade and rightsize its servers and stay current on all versions of mission-critical applications, particularly with respect to licensing, compliance and installing the latest patches and fixes. Security must be a priority. Hackers are professionals, and hacking is big business. The hacks themselves (ransomware, email phishing scams, CEO fraud and the like) are more pervasive and more pernicious, and the cost of hourly downtime is more expensive than ever. ITIC’s latest survey data shows that 91% of midsize and large enterprises estimate the average cost of a single hour of downtime at $300,000 or more. These statistics are just industry averages; they do not include any additional costs a company may incur from civil or criminal litigation or compliance penalties. In other words: in a private cloud, the buck stops with the corporation.
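
Combining the survey's $300,000-per-hour figure with the downtime a given uptime level permits gives a rough sense of the exposure. The arithmetic below is purely illustrative; the hourly cost is the survey's lower-bound estimate:

```python
# Illustrative downtime-cost arithmetic using the figures cited in the text:
# 99.9% uptime allows about 8.76 hours/year of downtime per server, and ITIC
# survey data puts a single hour of downtime at $300,000 or more.

HOURS_PER_YEAR = 24 * 365

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year permitted by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

hourly_cost = 300_000  # USD, lower-bound survey estimate
downtime = annual_downtime_hours(99.9)
print(f"99.9% uptime allows {downtime:.2f} hours/year of downtime")
print(f"Potential annual exposure: ${downtime * hourly_cost:,.0f}")
```

At three nines, that lower-bound rate already implies over $2.6 million of annual exposure per server, which is why the text insists on 99.999% with redundancy and failover.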

Realistically, in order for an organization to successfully implement and maintain a private cloud, it needs the following:

  • Robust equipment that can handle the workloads efficiently during peak usage times.
  • An experienced, trained IT staff that is familiar with all aspects of virtualization, virtualization management, grid, utility and chargeback computing models.
  • An adequate capital expenditure and operational expenditure budget.
  • The right set of private cloud product offerings and service agreements.
  • Appropriate third party virtualization and management tools to support the private cloud.
  • Specific SLA agreements with vendors, suppliers and business partners.
  • Operational level agreements (OLAs) to ensure that each person in the organization is responsible for specific routine tasks, as well as for defined duties in the event of an outage.
  • A disaster recovery and backup strategy.
  • Strong security products and policies.
  • Efficient chargeback utilities, policies and procedures.

Other potential private cloud pitfalls include deciding which applications to virtualize, vendor lock-in, and integration and interoperability issues. Businesses grapple with these same issues today in their existing environments.

Conclusions

Hybrid, public and private cloud infrastructure deployments will continue to experience double-digit growth for the foreseeable future. The benefits of cloud computing will vary according to each individual organization’s implementation. Preparedness and planning prior to deployment are crucial. Cloud vendors are responsible for maintaining performance, reliability and security. However, corporate enterprises cannot simply cede total responsibility to their vendor partners because the data assets are housed off-premises; businesses must continue to perform their due diligence. All appropriate corporate enterprise stakeholders must regularly review and monitor performance and capacity, security, compliance and SLA results, preferably on a quarterly or semi-annual basis. This will ensure your organization achieves the optimal business and technical benefits. Keeping a watchful eye on security is imperative. Cloud vendors and businesses must work in concert as true business partners to achieve optimal TCO and ROI and mitigate risk.


The Cloud Gets Crowded and more Competitive

The cloud is getting crowded.

In 2022 the cloud computing market – particularly the hybrid cloud – is hotter and more competitive than ever.

Corporate enterprises are flocking to the cloud as a way to offload onerous IT administrative tasks and more easily and efficiently manage increasingly complex infrastructure, storage and security. Migrating operations from the data center to the cloud can also greatly reduce their operational and capital expenditure costs.

Cloud vendors led by market leaders like Amazon Web Services (AWS), Microsoft Azure, Google Cloud, IBM Cloud, Oracle Cloud Infrastructure, SAP, Salesforce, Rackspace Cloud, and VMware, as well as China’s Alibaba and Huawei Cloud, are all racing to meet demand. The current accelerated shift to the cloud was fueled by the COVID-19 global pandemic, which created supply chain disruptions and upended many aspects of traditional work life. Since 2020, government agencies, commercial businesses and schools have shifted to remote working and learning. Although COVID is generally waning (albeit with continuing flare-ups), a hybrid work environment is the new normal. This, in turn, makes a compelling business case for furthering cloud migrations.

In 2022, more than $1.3 trillion in enterprise IT spending is at stake from the shift to cloud, and that revenue will increase to nearly $1.8 trillion by 2025 according to the February 2022 report “Market Impact: Cloud Shift – 2022 Through 2025” by Gartner, Inc. in Stamford, Conn.  Furthermore, Gartner’s latest research forecasts that enterprise IT spending on public cloud computing, within addressable market segments, will outpace traditional IT spending in 2025.

Hottest cloud trends in 2022

Hybrid Clouds

Hybrid cloud is exactly what its name implies: it’s a combination of public, private and dedicated on-premises datacenter infrastructure and applications. Companies can adopt a hybrid approach for specific use cases and applications – outsourcing some portions of their operations to a hosted cloud environment, while keeping others onsite. This approach lets companies continue to leverage and maintain their legacy data infrastructure as they migrate to the cloud.

Cloud security and compliance: There is no such thing as too much security. ITIC’s 2022 Global Server Hardware Security survey indicates that businesses experienced an 84% surge in security incidents, such as ransomware, email phishing scams and targeted data breaches, over the last two years. The hackers are extremely sophisticated; they choose their targets with great precision, with the intent to inflict maximum damage and net the biggest payback. This trend shows no signs of abating. In 2021, the average cost of a successful data breach increased to $4.24 million (USD), a 10% increase from $3.86 million in 2020, according to the 2021 Cost of a Data Breach Study, jointly conducted by IBM and the Ponemon Institute. The $4.24 million average cost of a single data breach is the highest in the 17 years since IBM and Ponemon began conducting the survey, representing an increase of 10% in the last 12 months and 20% over the last two years. Not surprisingly, in 2021, 61% of malware directed at enterprises targeted remote employees via cloud applications. Any security breach will have a domino effect on regulatory compliance. In response, cloud vendors are doubling down on security capabilities and compliance certifications. There is now a groundswell of demand for Secure Access Service Edge (SASE), a cloud security architecture designed to safeguard, monitor and control access and connectivity among myriad cloud application services, as well as datacenter IT infrastructure and end-user devices. SASE gives users single sign-on capability across multiple cloud applications while ensuring compliance.

Cloud-based disaster recovery (DR): The ongoing concerns around security and compliance issues has also shone the spotlight on the importance of cloud-based disaster recovery. DR uses cloud computing to back up data and continue to run the necessary business processes in case of disaster. Organizations can utilize cloud-based DR for load balancing and to replicate cloud services across multiple cloud environments and providers. The result: enterprise transactions will continue uninterrupted if they lose access to their physical infrastructure in the event of an outage.

Cloud-based Artificial Intelligence (AI) and Machine Learning (ML): Another hot cloud trend is the use of Artificial Intelligence (AI) and Machine Learning (ML). Both AI and ML allow organizations to cut through the data deluge and process and analyze the data to make informed business decisions and quickly respond to current and future market trends.

Top cloud vendors diversify, differentiate their offerings

There are dozens of cloud providers with more entering this lucrative market arena all the time. However, the top four vendors: Amazon AWS, Microsoft Azure, Google Cloud and IBM Cloud currently account for over 70% of the installed base.

Amazon AWS: Amazon AWS has been the undisputed cloud market leader for the past decade, and it remains the number one vendor in 2022. Simply put, Amazon is everywhere, and it has amazing brand recognition. Amazon AWS offers a wide array of services that appeal to companies of all sizes. The AWS cloud-based platform enables companies to build customized business solutions using integrated web services. AWS also offers a broad portfolio of Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) offerings, including Elastic Compute Cloud (EC2), Elastic Beanstalk, Simple Storage Service (S3) and Relational Database Service (RDS). AWS also enables organizations to customize their infrastructure requirements and provides a wide variety of administrative controls via its secure web-based client. Other key features include: data backup and long-term storage; a “four nines” (99.99%) guaranteed SLA uptime; AI and ML capabilities; automatic capacity scaling; support for virtual private clouds; and free migration tools.

As with all of the cloud vendors, the devil is in the details when it comes to pricing and cost. On the surface, the pricing model appears straightforward: AWS offers three pricing options, “Pay as you Go,” “Save when you reserve” and “Pay less by using more.” AWS also offers a free 12-month plan. Once the trial period has expired, the customer must either choose a paid plan or cancel its AWS subscription. While Amazon does provide a price calculator to estimate potential cloud costs, the many variables make actual costs difficult to discern.
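
To illustrate why usage-based pricing is hard to predict, here is a toy pay-as-you-go estimate. All rates below are made-up placeholders, not actual AWS prices, which vary by region, tier and commitment:

```python
# Toy pay-as-you-go cost estimate. All rates are hypothetical placeholders,
# NOT actual AWS prices; real pricing varies by region, tier, and commitment.

RATES = {
    "compute_hours": 0.10,      # assumed $/instance-hour
    "storage_gb_month": 0.023,  # assumed $/GB-month
    "egress_gb": 0.09,          # assumed $/GB transferred out
}

def monthly_estimate(usage: dict) -> float:
    """Multiply each metered dimension by its assumed rate and sum."""
    return sum(RATES[k] * v for k, v in usage.items())

usage = {"compute_hours": 720, "storage_gb_month": 250, "egress_gb": 100}
print(f"Estimated monthly bill: ${monthly_estimate(usage):.2f}")
```

Even this three-dimension toy shifts noticeably if any one usage figure changes, which hints at why real bills, with dozens of metered dimensions, are so hard to forecast.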

Microsoft Azure: Microsoft Azure ranks close behind Amazon AWS, and the platform has been the catalyst for the Redmond, Washington software giant’s resurgence over the last 12 years. As Microsoft transitioned away from its core Windows-based business model, it used a tried-and-true success strategy: the integration and interoperability of its various software offerings. Microsoft also moved its popular and well-entrenched legacy on-premises software application suites, such as Microsoft Office, SharePoint and SQL Server, to the cloud. This gave customers a sense of confidence and familiarity when it came to adoption. Microsoft also boasts one of the tech industry’s largest partner ecosystems, and it regularly refreshes and updates its cloud portfolio. In February, Microsoft unveiled three industry-specific cloud offerings: Microsoft Cloud for Financial Services, Microsoft Cloud for Manufacturing and Microsoft Cloud for Nonprofit. All of these services leverage the company’s security and AI functions. For example, a new feature in Microsoft Cloud for Financial Services, called Loan Manager, will enable lenders to close loans faster by streamlining workflows and increasing transparency through automation and collaboration. Microsoft Azure offers all the basic and advanced cloud features and functions, including: data backup and storage; business continuity and DR solutions; capacity planning; business analytics; AI and ML; single sign-on (SSO) and multifactor authentication; and serverless computing. Ease of configuration and management are among its biggest advantages, and Microsoft does an excellent job of regularly updating the platform, though documentation and patches may lag a bit. Azure also offers a 99.95% SLA uptime guarantee, a bit less than “four nines.” Again, the biggest business challenge for existing and prospective Azure customers is figuring out the licensing and pricing model to get the best deal.

Google Cloud Platform (GCP): Like Amazon, Google is a ubiquitous entity with strong brand-name recognition. Google touts its ability to let customers scale their business as needed using flexible, open technology. GCP is a public cloud computing platform comprising over 150 products and developer tools, including a variety of IaaS and PaaS services for compute, storage, networking, application development and Big Data analytics. The GCP services run on the same cloud infrastructure that Google uses internally for its end-user products, such as Google Search, Photos, Gmail and YouTube, and can be accessed by software developers, cloud administrators and IT professionals over the internet or through a dedicated network connection. Notably, Google developed Kubernetes, an open source container orchestration standard that automates software deployment, scaling and management. GCP offers a wide array of cloud services including: storage and backup, application development, API management, virtual private clouds, monitoring and management services, migration tools, and AI and ML. In order to woo customers, Google does offer very steep discounts and flexible contracts.

IBM: It’s no secret that IBM Cloud lagged behind market leaders AWS and Microsoft Azure, but Big Blue shifted into overdrive to close the gap. Most notably, IBM’s 2019 acquisition of Red Hat for $34 billion gave IBM much-needed momentum, solidifying its hybrid cloud foundation and expanding its global cloud reach to 175 countries with over 3,500 hybrid cloud customers. And it shows. On April 19, IBM told Wall Street it expects to hit the top end of its revenue growth forecast for 2022. IBM’s Cloud & Data Platforms unit is the growth driver: cloud revenue grew 14% to $5 billion during the quarter ended March 31. Software and consulting sales, which represent over 70% of IBM’s business, were up 12% and 13%, respectively. IBM Cloud incorporates a host of cloud computing services that run on IaaS or PaaS, and the Red Hat OpenShift platform further fortifies IBM’s hybrid cloud initiatives. OpenShift is an enterprise-ready Kubernetes container platform built for an open hybrid cloud strategy; it provides a consistent application platform to manage hybrid cloud, multicloud and edge deployments. According to IBM, 47 of the Fortune 50 companies use IBM as their private cloud provider. IBM has upped its cloud game with several key technologies. They include advanced quantum-safe cryptography, which safeguards applications running on the IBM z16 mainframe, a platform popular with high-end IBM enterprise customers. Quantum-safe cryptography is as close to unbreakable or impenetrable encryption as a system can get; it is designed to secure and transmit data in ways that resist attack even by quantum computers. Another advanced feature is AI on-chip inferencing, available on the newly announced IBM z16 mainframe, which can deliver up to 300 billion deep learning inference operations per day with 1ms response time.
This will enable even non-data-scientist customers to cut through the data deluge and predict and automate for “increased decision velocity.” AI on-chip inferencing can help customers prevent fraud before it happens by scoring up to 100% of transactions in real time without impacting Service Level Agreements (SLAs). It can also assist companies with compliance, automating the process to cut audit preparation time from one month to one week and thereby avoid fines and penalties. The IBM Cloud also incorporates Keep Your Own Key (KYOK), which uses IBM’s Hyper Protect services in the IBM public cloud. Another key security differentiator is IBM’s Confidential Computing, which protects sensitive data by performing computation in a hardware-based trusted execution environment (TEE). IBM Cloud goes beyond confidential computing by protecting data across the entire compute lifecycle. This provides customers with a higher level of privacy assurance, giving them complete authority over data at rest, data in transit and data in use. IBM further distinguishes IBM Cloud from competitors via its extensive work in supporting and securing regulated workloads, particularly for financial services companies. The company’s Power Systems enterprise servers are supported in the IBM Cloud as well. IBM Cloud also offers full server customization; everything included in the server is handpicked by the customer, so customers don’t pay for features they may never use. IBM is targeting its cloud offering at customers that want a hybrid, highly secure, open, multicloud and manageable environment.

Conclusions

Cloud computing adoption – most especially the hybrid cloud model – will continue to accelerate throughout 2022 and beyond. At the same time, vendors will continue to promote AI, machine learning and analytics as advanced mechanisms to help enterprises derive immediate, greater value and actionable insights to drive revenue and profitability.

Security and compliance issues will also be crucial, must-have elements of every cloud deployment. Organizations now demand a minimum of four nines (99.99%) of uptime, and preferably five or six nines (99.999% or 99.9999%) of availability, to ensure uninterrupted business continuity. Vendors, particularly IBM with its new quantum-safe cryptography capabilities for its infrastructure and IBM Z mainframe, will continue to fortify cloud security and deploy AI.

 

 


Security, Data Breaches Top Cause of Downtime in 2022

A 76% majority of corporations cite Security and Data Breaches as the top cause of server, operating system, application and network downtime, according to ITIC’s latest 2022 Global Server Hardware Security survey which polled 1,300 businesses worldwide.

Security is a technology and business issue that impacts all enterprises. Some 76% of respondents cited security and data breaches as the greatest threat to server, application, data center, network edge and cloud ecosystem stability and reliability (See Exhibit 1). This is more than a three-fold increase from the 22% of ITIC corporate survey respondents who said security negatively impacted server and network uptime and reliability in 2012. The hacks are more targeted, pervasive and pernicious. They are also more expensive and designed to inflict maximum damage and losses on their enterprise and consumer victims.

 

 

Security has a major impact on businesses of all sizes and across all vertical markets. In 2022 nine-in-10 companies estimate that server hardware and server OS security have a significant impact on overall network reliability and daily transactions (See Exhibit 2).

Mean Time to Detection is a Critical Barometer

 

Security hacks and data breaches are a fact of doing business in the digital age. They are BIG business for hackers and cyber criminals. At some point, every organization and its critical main line of business servers, server operating systems and applications will be the victim of an attempted or successful data breach of some type.

Data Breaches and Downtime Costs Soar

In 2021 the average cost of a successful data breach increased to $4.24 million (USD); this is a 10% increase from $3.86 million in 2020, according to the 2021 Cost of a Data Breach Study, jointly conducted by IBM and the Ponemon Institute. The $4.24 million average cost of a single data breach is the highest number in the 17 years since IBM and Ponemon began conducting the survey. It represents an increase of 10% in the last 12 months and 20% over the last two years.

The FBI’s 2021 Internet Crime Report, released in March 2022, found that internet cyber crimes cost Americans $6.9 billion last year. This is more than triple the $2 billion in losses reported in 2020. According to the FBI, it received 847,376 complaints of suspected internet crime; this is a seven percent (7%) increase compared to 2020.

The FBI 2021 Internet Crime Report said the top three cyber crimes reported by victims in 2021 were: “phishing scams, non-payment/non-delivery scams, and personal data breach. Victims lost the most money to business email compromise scams, investment fraud, and romance and confidence schemes.”

ITIC’s 2022 Global Server Hardware Security survey findings underscore the expensive nature of cyber crime. ITIC’s latest research shows the Hourly Cost of Downtime now exceeds $300,000 for 91% of SME and large enterprises. Overall, 44% of mid-sized and large enterprise survey respondents reported that a single hour of downtime can potentially cost their businesses over one million dollars ($1 million).

Organizations must rely on strong embedded server and infrastructure security that recognizes the danger, sends alerts and alarms, and possesses the ability to isolate threats. Strong preparedness on the part of the corporation, and a well-trained staff of security professionals and IT administrators, are of paramount importance.

The more quickly the company’s servers and software can detect a security issue and respond to it, the greater the chances of isolating and thwarting the attack before it can infiltrate the network ecosystem, interrupt data transactions and daily operations and access sensitive data and IP.
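Mean Time to Detection (MTTD) is simply the average gap between the onset of an attack and the moment it is detected. A minimal sketch, using hypothetical incident timestamps (not ITIC survey data), shows how the metric is computed:

```python
from datetime import datetime

# Hypothetical incident log: (attack onset, detection time) pairs.
# These timestamps are illustrative only.
incidents = [
    ("2022-03-01 02:14", "2022-03-01 02:19"),
    ("2022-03-07 11:02", "2022-03-07 11:31"),
    ("2022-03-19 23:45", "2022-03-20 00:03"),
]

def mean_time_to_detection(incidents):
    """Average minutes between attack onset and detection (MTTD)."""
    fmt = "%Y-%m-%d %H:%M"
    deltas = [
        (datetime.strptime(found, fmt) - datetime.strptime(onset, fmt)).total_seconds() / 60
        for onset, found in incidents
    ]
    return sum(deltas) / len(deltas)

print(f"MTTD: {mean_time_to_detection(incidents):.1f} minutes")  # 17.3 minutes
```

The lower this number, the smaller the window an attacker has to move laterally before containment begins.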

Robust security is comprised of two things: solid security products AND strong security policies and procedures administered and monitored by proactive and trained security professionals.

 

Security, Data Breaches Top Cause of Downtime in 2022 Read More »

IBM’s New z16 Aims for the Cloud; Delivers Quantum-safe Cryptography & AI on-Chip Inferencing

IBM has once again outdone itself with its latest z16 mainframe server.

This latest offering has it all: unbreakable security; fast, low-latency performance; top-notch, easy-to-use analytics; and true fault-tolerant reliability that provides the lowest total cost of ownership (TCO) and immediate return on investment (ROI) among 15 mainstream servers industrywide.

The new IBM z16 delivers a cornucopia of embedded and enhanced functions, including hardened security, leading-edge AI and performance improvements. The result: the z16 delivers even greater cost efficiencies and a solid “seven nines” – 99.99999% – of uptime and reliability. The AI on-chip inferencing is icing on the cake: it makes AI readily accessible to all employees, not just data scientists.

The IBM z16 is indisputably the most powerful enterprise system from the zSystems family, to date. It incorporates 7nm technology with clock speeds of 5.2GHz, and it supports a maximum of 200 cores and up to 40TB of memory. According to IBM, this results in 25% more processor capacity per drawer and an 11% per core performance improvement. Overall, IBM said the z16 will deliver 40% better performance than the prior z15 models. And it’s engineered for hybrid cloud environments and provides interoperability with a wide range of environments including Linux and open source.

As impressive as those performance statistics are, the immediate and strategic impact of the IBM z16 is far more than a laundry list of “speeds and feeds.”

In a pre-briefing with analysts, Ross Mauri, General Manager of IBM zSystems and LinuxONE, said IBM designed the IBM z16 to address enterprise customers’ need for top-notch system performance; resiliency; security/data privacy and protection; dedicated workload accelerators; and optimization across IBM’s entire product stack. Barry Baker, VP of Product Management for IBM zSystems, said the openness of the IBM z16 enterprise system in supporting multiple operating system environments, including Linux, z/OS and a variety of open source distributions like Ubuntu, is a win for customers.
Mauri and Baker said the IBM z16 delivers automation, predictive and security capabilities across environments to help enterprise customers on their journey to hybrid cloud and AI. “We are focused on the entire ecosystem. IBM’s strategy has the zSystems platform integrated throughout our products and services offerings to build more value to our clients,” Mauri said.

IBM’s z16 addresses all of the hot button issues confronting organizations in the digital age: AI; performance and low latency; resiliency/security; hybrid cloud; workload optimization; cost efficiencies; interoperability and application modernization.

IBM z16 Quantum Cryptographic Security and AI on-chip Inferencing
The IBM z16 also includes several ground-breaking technology “firsts.” Two of the most noteworthy are the AI on-chip inferencing function and the quantum-safe cryptographic security capability.
The AI on-chip inferencing, which is available at no extra cost, is “a game changer,” IBM executives said. It can deliver up to 300 billion deep learning inference operations per day with 1ms response time. IBM executives also said that the IBM z16’s accelerated on-chip AI “effectively eliminates” latency in inferencing. The result: businesses can cut through the data deluge and predict and automate for “increased decision velocity.” It enables even non-data-scientist customers and users to analyze data and derive insights at unprecedented speeds. Additionally, leveraging AI in routine daily operational processes can proactively help businesses take preventive actions, such as identifying and stopping outages before they occur.

AI on-chip inferencing can assist customers in preventing fraud before it happens by scoring up to 100% of transactions in real time without impacting Service Level Agreements (SLAs), and it helps companies keep up to date on fast-changing regulatory issues. It can also assist companies with compliance by automating the process, allowing firms to cut audit preparation time from one month to one week to maintain compliance and avoid fines and penalties.
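Conceptually, scoring 100% of transactions inline means every transaction passes through a scoring step that must stay within a strict latency budget. The sketch below is a purely hypothetical illustration of that pattern: the scoring rule, transaction fields and 1 ms budget are all invented for the example and are not IBM's implementation.

```python
import time

SLA_MS = 1.0  # illustrative 1 ms response-time budget per transaction

def score_transaction(txn):
    """Stand-in for an on-chip inference call; returns a fraud risk score.
    The simple rule here is purely illustrative, not a real model."""
    risk = 0.0
    if txn["amount"] > 10_000:
        risk += 0.5
    if txn["country"] != txn["home_country"]:
        risk += 0.4
    return min(risk, 1.0)

def score_all(transactions):
    """Score every transaction (not a sample) and flag high-risk ones."""
    flagged = []
    for txn in transactions:
        start = time.perf_counter()
        risk = score_transaction(txn)
        elapsed_ms = (time.perf_counter() - start) * 1000
        assert elapsed_ms < SLA_MS, "scoring blew the latency budget"
        if risk >= 0.8:
            flagged.append(txn["id"])
    return flagged

txns = [
    {"id": 1, "amount": 50, "country": "US", "home_country": "US"},
    {"id": 2, "amount": 25_000, "country": "RO", "home_country": "US"},
]
print(score_all(txns))  # [2]
```

The point of on-chip acceleration is that the real inference call fits inside that budget, so no transaction has to be sampled out or deferred to batch processing.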

On the security front, the IBM z16 builds on the pervasive encryption introduced with the z14 and the System Recovery Boost introduced with the z15, and turbocharges them with quantum-safe cryptographic security. The z14’s pervasive encryption provided security at every layer of the stack. The z15’s System Recovery Boost allowed businesses to drastically reduce the time it takes to shut down, restart and process the backlog that accrues during a system outage.

Quantum-safe cryptography is as close to unbreakable or impenetrable encryption as a system can get. Rather than relying on quantum mechanics itself, it uses cryptographic algorithms designed to withstand attacks from both conventional computers and future quantum computers – secure, at least, against every technique known today.
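A rough rule-of-thumb way to see the threat that quantum-safe algorithms defend against: Shor's algorithm would break today's RSA and elliptic-curve public-key schemes outright, while Grover's algorithm roughly halves the effective bit strength of symmetric ciphers like AES. The few lines below encode that back-of-the-envelope arithmetic (a simplification, not a formal security analysis):

```python
def quantum_effective_bits(algorithm, key_bits):
    """Rough effective security, in bits, against a large quantum computer.
    Shor's algorithm breaks RSA/ECC outright; Grover's quadratic speedup
    roughly halves symmetric key strength. A rule of thumb, not a proof."""
    if algorithm in ("RSA", "ECC"):
        return 0               # Shor: no meaningful security remains
    if algorithm == "AES":
        return key_bits // 2   # Grover: quadratic speedup for key search
    raise ValueError(f"unknown algorithm: {algorithm}")

for alg, bits in [("AES", 128), ("AES", 256), ("RSA", 2048)]:
    print(f"{alg}-{bits}: ~{quantum_effective_bits(alg, bits)} effective bits")
```

This is why quantum-safe schemes move to different mathematical foundations (such as lattice problems) rather than simply lengthening today's RSA keys.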

The IBM z16 is the preeminent mainstream server platform for digital enterprises requiring nothing less than seven nines (99.99999%) of best-in-class fault-tolerant reliability, quantum-safe cryptographic security and AI on-chip acceleration across multiple platforms, from datacenters to hybrid clouds and the network edge, while delivering the lowest TCO and immediate ROI.

IBM’s New z16 Aims for the Cloud; Delivers Quantum-safe Cryptography & AI on-Chip Inferencing Read More »

ITIC 2021 Global Server Hardware, Server OS Reliability Survey Results

The technical and business challenges posed by the ongoing global pandemic didn’t compromise the core reliability of IBM, Lenovo, Huawei, Hewlett-Packard Enterprise and Cisco servers.

For the 13th straight year, IBM’s Z mainframe and mission critical Power servers achieved the highest server hardware reliability and delivered the strongest server security, among 15 different platforms, in ITIC’s annual 2021 Global Server Hardware, Server OS Reliability Survey.

And for the eighth consecutive year, Lenovo’s ThinkSystem servers again matched their best recorded uptime among all Intel x86 servers, along with Huawei’s KunLun and Fusion platforms. The HPE Superdome and the Cisco UCS hardware (in that order) rounded out the top five most reliable vendor hardware platforms (See Exhibit 1).

ITIC’s 2021 Global Server Hardware, Server OS Reliability independent web-based survey polled 1,200 corporations across 28 vertical market segments worldwide on the reliability, performance and security of the most popular server platforms from January through June 2021. Additionally, the preliminary findings from ITIC’s 2021 Global Reliability updated survey, conducted from September through November 2021, indicate that the IBM Z and IBM Power servers, the Lenovo ThinkSystem and the Huawei KunLun and Fusion servers continue to dominate and deliver the highest uptime, availability and security in datacenters and the cloud.

Among the top survey findings:

  • Server Reliability: The IBM z14 and z15 outpaced all rivals, matching their best-ever results: just 0.60 seconds of per server monthly unplanned downtime. The IBM Power models also equaled their best uptime scores over the last 13 years, with just 1.49 minutes of unplanned per server downtime. The Lenovo ThinkSystem and Huawei KunLun platforms followed closely, each with 1.51 minutes of unplanned per server outages. Inspur was in the middle of the pack with 11 minutes of unplanned per server downtime, while the Dell PowerEdge servers posted 26 minutes of unanticipated outages. Unbranded White box servers (which often run unlicensed or pirated software) again were the least reliable servers with 57 minutes of unplanned per server downtime; this is up two (2) minutes from 2020.
  • Server Availability: The IBM Z servers are in a class by themselves, a 94% majority of IBM Z customers said their businesses achieved unparalleled fault tolerant levels of six and seven nines – 99.9999% and 99.99999% reliability and continuous availability, the best among all server distributions. The IBM Power is close behind with 91% of customers reporting that the Power9 and latest Power10 models deliver a minimum of five and six nines availability/uptime. Meanwhile, 90% of Lenovo ThinkSystem, Huawei KunLun and HPE Superdome enterprises said their businesses achieve a minimum of five and six nines server availability.
  • Cost Effectiveness/Total Cost of Ownership: The most reliable IBM z14 and z15, IBM LinuxONE III and Power8 and Power9 servers deliver the best TCO and near-immediate Return on Investment (ROI). A single minute of per server unplanned downtime on an IBM z14 or z15 server, calculated at a rate of $100,000 per hour, costs enterprise customers $1,002. One minute of unplanned downtime on a single IBM Power8 or Power9, calculated at $100,000 an hour, costs $2,488. The upcoming Power10, slated to ship in September, will likely offer better reliability and lower costs even further. The Lenovo ThinkSystem and Huawei KunLun and Fusion offerings each averaged 1.51 minutes of unplanned per server outages; that equates to per server/per minute downtime charges of $2,521. Unbranded White box servers with 57 minutes of unplanned per server downtime could cost corporations $95,190 when hourly downtime losses are calculated at $100,000 (See Exhibit 3 and Exhibit 4).
  • Security hacks, user error and remote working/remote learning are the top three causes of unplanned downtime. A 73% majority of survey participants cited security as the number one cause of unplanned server downtime; 64% said human error caused unplanned server outages. Meanwhile, 58% of survey participants attributed increased downtime to management and security issues related to COVID-19, such as remote working and remote academic learning via Zoom meetings for K-through-12 and college classes. While offices and schools were closed during the global pandemic in 2020 and much of 2021, IT and security administrators were hard-pressed to effectively manage and secure remote PCs, laptops, notebooks and tablets. Consequently, many employees and students did not adequately secure their personal devices. An April 2021 Fortune magazine article noted that hybrid and remote workplace and academic environments created many positive opportunities for businesses and schools, but they also represent a potential boon for hackers.
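The downtime-cost arithmetic in the bullets above can be sketched in a few lines. The flat $100,000/hour rate is the one cited in the survey; note that ITIC's published per-minute figures reflect its own cost model, so straight division gives close but not identical numbers:

```python
HOURLY_DOWNTIME_COST = 100_000  # USD/hour, the rate cited in the survey

def downtime_cost(minutes_down, hourly_cost=HOURLY_DOWNTIME_COST):
    """Cost of unplanned downtime at a flat hourly rate."""
    return minutes_down * hourly_cost / 60

# Per-server monthly unplanned downtime, in minutes (figures from the survey)
platforms = {
    "IBM Power": 1.49,
    "Lenovo ThinkSystem / Huawei KunLun": 1.51,
    "Unbranded white box": 57,
}
for name, minutes in platforms.items():
    print(f"{name}: ${downtime_cost(minutes):,.0f}")
```

Even at this crude flat rate, the gap between 1.5 minutes and 57 minutes of monthly downtime translates into tens of thousands of dollars per server per month.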

 

In 2020, cybercriminals transmitted 61% of malware through cloud applications to target remote workers, according to the July 2021 Netskope Cloud and Threat Report. The report said that utilizing cloud-based applications enables hackers to circumvent older, legacy email and web-based security solutions. The Netskope report further noted that security risks are exacerbated by the fact that 83% of employees and students access sensitive personal data via applications installed on corporate and academic devices, e.g., laptops, notebooks and tablets. This can result in dire consequences in the connected digital era. To cite one example, in March 2020, the California State Controller’s Office, which handles $100 billion a year, suffered an email phishing attack on an employee that gave cyber criminals cloud access to internal documents; once they gained entrance to the employee’s device, they were able to successfully phish another 9,000 employees.

 

The reliability and security of server hardware, server operating systems and mission critical applications are critical elements of the core datacenter, network edge and cloud infrastructure.

 

Eighty-nine percent (89%) of organizations require a minimum of “four nines” (99.99%) reliability to ensure uninterrupted daily business operations, secure data assets, sustain the company’s revenue stream and mitigate risk. And over one-third of organizations now strive for “five nines” (99.999%) uptime; this equals just 5.25 minutes of unplanned per server downtime annually.
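The "nines" arithmetic converts directly into a downtime budget: each added nine cuts the allowable unplanned downtime by a factor of ten. A few lines of Python make the conversion explicit:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct, period_minutes=MINUTES_PER_YEAR):
    """Unplanned downtime budget (minutes) for a given availability level."""
    return (1 - availability_pct / 100) * period_minutes

for label, pct in [("four nines", 99.99), ("five nines", 99.999),
                   ("six nines", 99.9999)]:
    print(f"{label} ({pct}%): {allowed_downtime_minutes(pct):.2f} min/year")
```

Four nines works out to roughly 52.6 minutes of unplanned downtime per year, and five nines to about 5.26 minutes per year, matching the figures cited in this article.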

Each second and minute of server downtime and the associated mission critical applications costs the business money and raises transactional operations and monetary risks.

In the digital era of interconnected intelligent systems and networks, unplanned downtime of even a few minutes is expensive and disruptive and can reverberate across the entire ecosystem. This includes datacenters; virtualized public, private and hybrid clouds; remote work and learning environments and the intelligent network edge.

ITIC’s 2021 Hourly Cost of Downtime survey indicates a single hour of server downtime totals $300,000 or more for 91% of mid-sized (SME) and large enterprises. And among that 91% majority, nearly half – 44% – of corporations said hourly outage costs range from over one million dollars ($1M) to more than five million ($5M).

ITIC 2021 Global Server Hardware, Server OS Reliability Survey Results Read More »

44% of enterprises say hourly downtime costs top $1 million—with COVID-19, security hacks and remote working as driving factors

https://techchannel.com/Enterprise/02/2019/cost-enterprise-downtime?microsite=HA-DR-For-Your-Business


44% of enterprises say hourly downtime costs top $1 million—with COVID-19, security hacks and remote working as driving factors Read More »

IBM, Lenovo and Huawei Servers Most Secure, Suffer Fewest Hacks As COVID-19 Data Breaches Surge

IBM, Lenovo, Huawei, Hewlett-Packard Enterprise and Cisco hardware are the most secure and reliable servers. These platforms experienced the fewest successful hacks and recorded the least amount of unplanned downtime due to data breaches among mainstream servers in the last year.

Those are the results of the latest ITIC Global Server Hardware, Server OS Reliability and Security survey which polled over 1,000 businesses worldwide across 28 different vertical market sectors from October 2020 through March 2021.

The most recent ITIC survey statistics indicate that reliability and security are closely intertwined and even symbiotic. The top five most reliable server platforms: the IBM Z, the IBM Power Systems, Lenovo ThinkSystem, Huawei KunLun and Fusion Servers, the HPE Superdome Integrity and Cisco UCS (in that order) also boast the strongest security.

ITIC’s most recent Global Security poll similarly found that IBM, Lenovo, Huawei and HPE mission critical servers experienced the lowest percentages of downtime due to successful security hacks and data breaches.

The IBM Z mainframe outpaced all other server distributions and is in a class of its own as it achieved its most robust security and reliability ratings to date in the latest ITIC study.

Only a minuscule 0.3% of IBM Z high-end servers suffered a successful data breach. Among other mainstream hardware platforms, just four percent (4%) of IBM Power Systems and Lenovo ThinkSystem users reported their systems were successfully hacked, while five percent (5%) of Huawei KunLun and HPE Integrity Superdome server customers reported a security breach between March 2020 and April 2021.

Just over one in ten (11%) of Cisco UCS servers were successfully hacked. Cisco’s hardware performed extremely well, particularly when one considers that many UCS servers are deployed in remote locations and at the network edge, which frequently are the first line of defense and take the brunt of hack attacks. Unbranded White box servers were the most vulnerable to security penetrations: 44% of ITIC survey respondents reported they were successfully hacked.

The global pandemic sparked a wave of COVID-19 related data breaches, ransomware, phishing, Business Email Compromise (BEC), CEO fraud and attacks that continue unabated.

Overall, ITIC’s survey findings indicate a clear and widening gap in server hardware security and reliability between the top-performing platforms and the most insecure offerings.

Security and reliability issues are closely intertwined: a successful data breach immediately compromises server, application and network uptime and availability. Security will likely persist as the chief threat that causes expensive unplanned downtime and outages.

Survey Highlights

Notably, despite a 31% spike in security hacks and data breaches during the COVID-19 pandemic over the last 16 months, IBM, Lenovo, Huawei, HPE and Cisco maintained their top positions as the most reliable and secure server platforms.

Additionally, the top five server distributions achieved the best security ratings among all mainstream server hardware platforms in every security category in ITIC’s latest poll, including:

  • The least number of attempted security hacks/data breaches
  • The fewest number of successful security hacks/data breaches
  • The fastest Mean Time to Detection (MTTD) from the onset of the attack until the company isolated and shut it down

The strong security results posted by IBM, Lenovo, Huawei, HPE and Cisco (in that order) are especially noteworthy since they occurred during the height of the COVID-19 global pandemic. Some 31% of ITIC survey respondents said their servers, operating systems and critical business applications suffered successful penetrations by myriad security hacks and data breaches since the outset of COVID-19 in early 2020. This is an increase of 12 percentage points, up from the 19% in ITIC’s 2020 Global Server Hardware, Server OS Reliability survey.

Security is a core component of every organization’s network. Robust security is even more crucial in the COVID-19 era which ushered in a variety of new scams. Some 69% of organizations cited security and data breaches as the greatest threats to the reliability of server, application, data center, network edge and cloud ecosystems. The hacks themselves are more targeted, prevalent, pervasive and pernicious: They are designed to inflict maximum damage and losses on their enterprise and consumer victims.

Data Breaches are Big Business

Data breaches are big business and a primary business for the burgeoning professional hacking community. A successful hack is expensive on many levels. In 2020, the cost of a data breach averaged $3.86 million, according to the 2020 Cost of a Data Breach Study jointly conducted by IBM and the Ponemon Institute[1]. This represents a 10% increase since 2015.

ITIC’s latest survey data also indicates that the Hourly Cost of Downtime now exceeds $300,000 for 88% of businesses. Overall, 40% of mid-sized and large enterprise survey respondents reported that a single hour of downtime costs their firms over one million dollars ($1 million). A data breach that occurs during peak usage hours and interrupts crucial business operations can cost businesses millions per minute.

Besides the obvious monetary losses due to lost productivity and disrupted operations, businesses must factor in the manpower hours and the number of IT and security administrators involved in remediation efforts and a full return to operation. Companies must also determine whether any data or intellectual property (IP) was lost, stolen, damaged, destroyed or changed. Organizations must also add in the cost of any litigation, as well as potential civil or criminal fines/penalties associated with security incidents and data breaches. Some costs, like damage to an organization’s reputation, are incalculable and may result in lost business.

Hackers pick and choose their targets with great precision and are quick to take advantage of every opportunity. The COVID-19 pandemic is a prime example. Hackers immediately set their sights on teleworkers and remote learning students taking online and Zoom classes. They zeroed in on so-called “soft targets”: local and state municipalities; small and mid-sized school districts; hospitals, health care clinics and doctors’ offices; and branch bank offices that may lack full-time onsite security and IT administrators and may not have installed the latest security updates.

It’s no surprise that vendors like IBM, Lenovo, Huawei and HPE, which perennially achieve top server reliability ratings, were also among the most secure hardware platforms. These vendors, and more recently Cisco, have made server security – and in Lenovo’s case server, PC and laptop security – a top priority, and they have invested heavily in bolstering the inherent security of their product offerings over the last several years. So when the COVID-19 pandemic hit, they already had strong, embedded security, and this stood them and their customers in good stead.

The most secure server hardware platforms experienced the fewest successful security breaches. Respondents running the IBM Z with z/OS and RHEL Linux and the IBM LinuxONE III all said those platforms had no successful security hacks over the 16-month period. They were followed by the IBM Power Systems and Lenovo ThinkSystem servers with one each; Huawei KunLun, which averaged two hacks; the HPE Integrity with three successful penetrations; and Cisco’s UCS servers with seven data breaches. The unbranded White box servers were the most porous, averaging 20 successful data breaches in the past 16 months.

The 2020 Cost of a Data Breach Study jointly conducted by IBM and the Ponemon Institute[2] indicates that while the average data breach cost declined by a slight 1.5% compared with 2019’s study, the $3.86 million figure still represents a 10% increase since 2015.

A DTEX Systems report found that “only 30% of organizations were prepared to secure a complete shift to remote work.” The DTEX Systems study also found that almost 75% of organizations are concerned about the security risks introduced by users working from home, and 73% of businesses admitted they have partial or no visibility into user activity if their VPN is disabled by remote workers. Another alarming finding is that teleworkers use their work laptops for personal purposes, with 25% of respondents acknowledging this increases the risk of drive-by downloads and 15% saying their firms are more susceptible to phishing attacks.

 Conclusions and Recommendations

Security is now the number one issue that negatively undermines the reliability of server hardware, server OS and business critical applications. All organizations should make security a priority and work closely with their vendors to mitigate security risks to an acceptable level.

Every added second and minute of server downtime and application unavailability negatively impacts business operations, employee productivity and revenue.

No server platform, server OS or business application will provide foolproof security. However, IBM, Lenovo, Huawei, HPE and Cisco, which are among the most reliable server platforms, also provide the greatest levels of inherent security. This enables customers to achieve the greatest economies of scale and safeguard their sensitive IP and data assets. That said, security is a 50/50 proposition. While vendors must deliver robust security, corporations are responsible for maintaining the security and reliability of their server and overarching network infrastructure. ITIC strongly advises businesses to:

  • Take inventory of all devices and applications across the ecosystem.
  • Conduct security vulnerability testing at least annually and work with third party experts.
  • Have a remediation and governance plan in place in the event your firm is successfully hacked.
  • Ensure that Security and IT professionals receive adequate training.
  • Ensure that end users as well as contract workers and temporary employees receive adequate security awareness training on the latest Email and Phishing scams and ransomware threats.
  • Implement strong security policies and procedures and enforce them.
  • Regularly replace, retrofit and refresh server hardware and server operating systems with the necessary patches, updates and security fixes as needed to maintain system health.
  • Keep up-to-date on the latest security patches and fixes.
  • Ensure that your firm’s hardware and software vendors and cloud vendors meet or exceed the terms of their Service Level Agreements (SLAs) for agreed upon security and reliability levels.

[1] “2020 Cost of a Data Breach Study,” IBM and the Ponemon Institute. URL: https://www.ibm.com/security/data-breach

 

[2] “2020 Cost of a Data Breach Study,” IBM and the Ponemon Institute. URL: https://www.ibm.com/security/data-breach

 

IBM, Lenovo and Huawei Servers Most Secure, Suffer Fewest Hacks As COVID-19 Data Breaches Surge Read More »

High Tech R&D in the COVID-19 Era is Crucial

https://www.technewsworld.com/story/86977.html

Maintaining and increasing research and development (R&D) spending in the COVID-19 era is critical for high technology vendors to deliver new solutions and services, continue to innovate and position their businesses to rebound from the negative effects of the global pandemic.

The COVID-19 global pandemic has been disastrous for businesses around the globe. The novel coronavirus has disrupted and continues to upend every aspect of corporate and personal daily life. Analysts and financial advisors/investors concur that, wherever possible, vendors should continue to aggressively invest in R&D. That is: spend money to make money. …

High Tech R&D in the COVID-19 Era is Crucial Read More »

ITIC Editorial Calendar

March/April 2020: ITIC 2020 Global Server Hardware and Server OS Reliability Survey

Description: Reliability and uptime are absolutely essential. Over 80% of corporations now require a minimum of 99.99% availability, and an increasing number of enterprises now demand five nines (99.999%) or higher reliability. But which platforms actually deliver? This survey polls businesses on the uptime and management issues involving the inherent reliability of 14 different server hardware platforms and server operating systems. The survey polls corporations on the frequency, duration and reasons associated with Tier 1, Tier 2 and Tier 3 outages that occur on their core server OS and server hardware platforms. The results of this independent, non-vendor-sponsored survey will provide businesses with the information they need to determine the TCO and ROI of their individual environments. The survey will also enable server OS and server hardware vendors to see how their products rate among global users, ranging from SMBs with as few as 25 people to the largest global enterprises with 100,000+ end users.

The 2020 ITIC Global Reliability Survey has also been updated and expanded to include questions on:

  • Component level failure data comparisons between IBM Power Servers and Intel-based x86 servers such as Dell, HP, Huawei, Lenovo and Cisco.
  • Percentage of component level failure data comparisons by vendor according to age (e.g. new to three months; three to six months; six months to 1 year; 1 to 2 years; 2 to 3 years; 3 to 4 years; 4 to 5 years; over five years).
  • Which component parts fail and frequency of failure
  • A percentage breakout of server parts failures for parts such as hard disk drives (HDD), processors, memory, power components, fans, or other
  • Where available, how the component failed. For example: memory multi-bit errors, HDD read failures, processor L1/L2 cache errors, etc.

 

April/May: 2020 Hourly Cost of Downtime

 

Description: Downtime impacts every aspect of the business. It can disrupt operations and end user productivity, result in data losses and raise the risk of litigation. Downtime can also result in lost business and irreparably damage a company’s reputation. The cost of downtime continues to increase, as do the business risks. ITIC’s 2019 Hourly Cost of Downtime survey found an 85% majority of organizations now require a minimum of 99.99% availability. This is the equivalent of 52 minutes of unplanned downtime annually for mission critical systems and applications, or just 4.33 minutes of unplanned outage per month for servers, applications and networks. This survey will once again poll corporations on how much one hour of downtime costs their business – exclusive of litigation and civil or criminal penalties. ITIC will also interview customers and vendors across 10 key vertical markets, including: Banking/Finance; Education; Government; Healthcare; Manufacturing; Retail; Transportation and Utilities. The report will focus on the toll that downtime extracts on the business, its IT departments, its employees, its business partners, suppliers and its external customers. This report will also examine the remediation efforts involved in resuming full operations, as well as the lingering after-effects to the corporation’s reputation as the result of an unplanned outage.

 

May/June 2020: ITIC Sexual Harassment, Gender Bias and Pay Equity Survey

 

Description: ITIC’s independent “Sexual Harassment, Gender Bias and Pay Equity Gap” Web survey polled 1,500 women professionals worldwide across 47 different industries, with a special emphasis on STEM disciplines. The survey focuses on three key areas of workplace discrimination: Sexual Harassment, Gender Bias and Unequal Pay.

 

 

July/August: 2020 IoT Deployment and Usage Trends Survey and Report

 

Description: The Internet of Things (IoT) has been one of the hottest emerging technologies of the last several years. This ITIC Report will present the findings of an ITIC survey that polls corporations on the business and technical challenges, as well as the costs, associated with IoT deployments. The Report will also examine the ever-present security risks associated with interconnected environments and ecosystems. ITIC’s IoT 2020 Deployment and Usage Trends Survey will also query global businesses on a variety of crucial issues related to their current and planned IoT usage and deployments, such as how they are using IoT (e.g. on-premises versus Network Edge/Perimeter deployments) and the chief benefits, biggest challenges and impediments to IoT upgrades. Vendors profiled for this report will include: AT&T, Bosch, Cisco, Dell, Fujitsu, General Electric (GE), Google, Hitachi, Huawei, IBM, Intel, Microsoft, Particle, PTC, Qualcomm, Samsung, SAP, Siemens and Verizon.

 

 

August: ITIC 2020-2021 Security Trends

 

Description: Security, security, security! Security impacts every aspect of computing and networking operations in the Digital Age. And it’s never been more crucial as businesses, schools, government workers and consumers work from home amidst the ongoing novel coronavirus (COVID-19) pandemic and damaging security hacks impacting the lives of millions of consumers and corporations. This Report will utilize the latest ITIC independent survey data to provide an overview of the latest trends in computer security including the latest and most dangerous hacks and what corporations can do to defend their data assets. Among the topics covered:

 

  • Security threats in the age of COVID-19
  • The most prevalent type of security hacks
  • The percentage of corporations that experienced a security hack
  • The duration of the security hack
  • The severity of the security hack
  • The cost of the security hack
  • Monetary losses experienced due to security breaches
  • Lost, damaged, destroyed or stolen data due to a security breach
  • The percentage of time that corporations spend securing their networks and data assets
  • Specific security policies and procedures companies are implementing
  • The issues that pose the biggest threats/risks to corporate security

 

 

 

August/September: ITIC 2020 Global Server Hardware, Server OS Reliability Survey Mid-Year Update

 

Description: This Report is the Mid-year update of ITIC’s Annual Global Server Hardware, Server OS Reliability Survey. Each year ITIC conducts a second survey of selected questions from its Annual Reliability poll. ITIC also conducts new interviews with C-level executives and Network administrators to get detailed insights on the reliability of their server hardware and operating system software as well as the technical service and support they receive from their respective vendors.  ITIC will also incorporate updated PowerPoint slides and statistics to accompany the report.

 

October/November: AI, Machine Learning and Data Analytics Market Outlook

Description: This Report will examine the pivotal role that AI, Machine Learning and IoT-enabled predictive and prescriptive Analytics play in helping businesses sort through the data deluge to make informed decisions and derive real business value from their applications. AI and Machine Learning take Data Analytics to new levels. They can help businesses identify new product opportunities and also uncover hidden risks. Machine intelligence is already built into predictive and prescriptive analytics tools, speeding insights and enabling the analysis of vast probabilities to determine an optimal course of action or the best set of options. Over time, more sophisticated forms of AI will find their way into analytics systems, further improving the speed and accuracy of decision-making.

Rather than querying a system and waiting for a response, the trend has been toward interactivity using visual interfaces. In the near future, voice interfaces will become more common, enabling humans to carry on interactive conversations with digital assistants while watching the analytical results on a screen. Analytics makes businesses more efficient; it enables them to cut costs and lower ongoing operational expenditures. It also helps them respond more quickly and agilely to changing market conditions – making them more competitive and thus driving top line revenue in both near term and long term strategic sales.

Vendors Profiled: AppDynamics, BMC, Cisco, IBM, Microsoft, Oracle, SAP and SAS. The Report will also discuss how non-traditional vendors in the carrier and networking segments (e.g. Dell/EMC, GE, Google, Verizon and Vodafone) have fully embraced AIOps and analytics via partnerships, acquisitions and Research and Development (R&D) initiatives, moving into this space and challenging the traditional market leaders. And it will provide an overview of the latest Mergers and Acquisitions (M&A) and their impact on the Analytics industry.

 

December: ITIC 2021 Technology and Business Outlook

 

Description: This Report will be based on ITIC survey results that poll IT administrators and C-level executives on a variety of forward looking business and technology issues for the 2021 timeframe. Topics covered will include: Security, IT staffing and budgets; application and network infrastructure upgrades; hardware and software purchasing trends and cloud computing.

Survey Methodology

 

ITIC conducts independent Web-based surveys that contain multiple choice and essay questions. In order to ensure the highest degree of accuracy, we employ authentication and tracking mechanisms to prohibit tampering with the survey results and to prohibit multiple votes by the same party. ITIC conducts surveys with corporate enterprises in North America and in over 25 countries worldwide across a wide range of vertical markets. Respondents range from SMBs with 25 to 100 workers to the largest multinational enterprises with over 100,000 employees. Each Report also includes two dozen first person customer interviews and where applicable, vendor and reseller interviews. The titles of the survey respondents include:

 

  • Network administrators
  • VPs of IT
  • Chief information officers (CIOs)
  • Chief technology officers (CTOs)
  • Chief executive officers (CEOs)
  • Chief information security officers (CISOs)
  • Chief marketing officers (CMOs)
  • Consultants
  • Application developers
  • Database administrators
  • Telecom managers
  • Software developers
  • System administrators
  • IT architects
  • Physical plant facilities managers
  • Operations managers
  • Technical leads
  • Cloud managers/specialists
  • IoT managers
  • Server hardware/virtualization managers

 

 

ITIC welcomes input and suggestions from its vendor and enterprise clients with respect to surveys, survey questions and topics for its Editorial Calendar. If there are any particular topics or questions in a specific survey that you’d like to see covered, please let us know and we will do our best to address them.

 

 

About Information Technology Intelligence Corporation (ITIC)

 

ITIC, founded in 2002, is a research and consulting firm based in suburban Boston. It provides primary research on a wide variety of technology topics for vendors and enterprises. ITIC’s mission is to provide its clients with tactical, practical and actionable advice and to help clients make sense of the technology and business events that influence and impact their infrastructures and IT budgets. ITIC can provide your firm with accurate, objective research on a wide variety of technology topics within the network infrastructure: application software, server hardware, networking, virtualization, cloud computing, Internet of Things (IoT) and Security (e.g. ransomware, cyber heists, phishing scams, botnets etc.). ITIC also addresses the business issues that impact the various technologies and influence corporate purchasing decisions. These include topics such as licensing and contract negotiation; GDPR; Intellectual Property (IP) and patents; outsourcing; third party technical support and upgrade/migration planning.

 

For more information visit ITIC’s website at: www.itic-corp.com.

 

To purchase or license ITIC Reports and Survey data contact: Fred Abbott

Email: fhabbott@valleyviewventures.com

Valley View Ventures, Inc.

Phone: 978-254-1639

www.valleyviewventures.com


ITIC 2020 Reliability Poll: IBM, Lenovo, HPE, Huawei Mission Critical Servers Deliver Highest Uptime, Availability

For the 12th straight year, IBM’s Z mainframe and Power Systems achieved the highest rankings for server reliability, server operating system reliability and server application availability in ITIC’s 2020 Global Server Hardware and Server OS Reliability survey. Lenovo’s ThinkSystem servers delivered the best uptime among all Intel x86 servers for the seventh consecutive year.
ITIC’s latest independent survey data finds that the most reliable mainstream server platforms – the IBM Power Systems, Lenovo ThinkSystem, Hewlett-Packard Enterprise (HPE) and Huawei KunLun deliver up to 26x more uptime and availability than the least dependable unbranded “White box” servers.

The superior uptime of these top ranked mission critical platforms makes them up to 34x more economical and cost effective than the least stable White box servers.

High end mission critical servers from IBM and Lenovo both registered under two (2) minutes of per server, per annum unplanned downtime due to inherent flaws in the underlying hardware or component parts. Cisco, Hewlett-Packard Enterprise (HPE) and Huawei server platforms were close behind: each recorded approximately two minutes, or just a few seconds more, of downtime attributable to inherent issues with the hardware. Among mainstream servers, IBM POWER8 and POWER9, along with the Lenovo x86 ThinkSystem servers, the HPE Integrity Superdome X and Huawei’s mission critical KunLun servers, continue to deliver the highest levels of reliability/uptime among 18 server platforms. (See Exhibit 1).

The least consistent hardware – unbranded White box servers – averaged 53 minutes of unplanned per server, per annum downtime due to problems or failures with the server or its components (e.g. hard drive, memory, cooling systems etc.). This represents an increase of four (4) minutes of downtime compared with ITIC’s 2019 Global Server Hardware, Server OS Mid-Year Update survey.
ITIC’s independent Web-based survey polled over 1,200 businesses worldwide from November 2019 through March 2020. The study compares and analyzes the reliability and availability of over one dozen mainstream server platforms and one dozen operating system (OS) distributions. To obtain the most accurate and unbiased results, ITIC accepts no vendor sponsorship.

IBM’s System Z server is in a class of its own. It maintained its best in class rating among all server platforms. An 83% majority of IBM respondent organizations said their firms achieved five and six nines – 99.999% and 99.9999% – or greater uptime. Nine-in-10 IBM Z customers reported that the mainframe recorded just 0.62 seconds of unplanned per server downtime each month and 7.44 seconds annually due to inherent flaws in the server hardware or its component parts. Less than one-half of one percent of IBM Z respondents said the mainframe experienced unplanned outages exceeding four (4) hours of annual downtime.

The annual downtime cost comparisons among the top performing and the least reliable server hardware platforms are staggering.

A single hour of downtime, estimated at $300,000, equates to roughly $4,998 per server, per minute.

According to that metric, organizations using the most reliable IBM POWER8 and POWER9; Lenovo x86-based ThinkSystem; HPE Integrity or Huawei KunLun servers, which experienced just under or just over two (2) minutes of unplanned annual downtime, would spend $9,996 in annual per server downtime costs due to inherent flaws in server hardware or component parts (See Table 2).

By contrast, corporations using Dell PowerEdge servers, which experienced 26 minutes of per server, per annum downtime, would at the same $300,000 hourly downtime rate potentially rack up yearly outage costs of $130,026 for a single server.

Corporations deploying the least reliable unbranded White box servers, which registered 53 minutes of per server, per annum downtime, can expect to incur possible downtime losses of $264,894 specifically related to server hardware flaws and bugs in the OS and applications. The four additional minutes of downtime – from 49 minutes per server in ITIC’s 2019 poll to 53 minutes of per server outage time in 2020 – represent a cost increase of $19,992 over the White box servers’ 2019 per server, per annum downtime price tag of $244,902.
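The figures above are straightforward multiplication of annual outage minutes by the per-minute rate. A short Python sketch (downtime minutes taken from the text) reproduces them. Note that straight multiplication of 26 minutes by $4,998 yields $129,948, slightly below the $130,026 cited, which reflects ITIC's own rounding of the hourly rate:

```python
# Per-server annual downtime cost = outage minutes/year x cost per minute.
# $4,998/minute is ITIC's rounded figure for a $300,000 hour of downtime.
COST_PER_MINUTE = 4_998

# Minutes of unplanned per server, per annum downtime quoted in the text.
ANNUAL_DOWNTIME_MINUTES = {
    "IBM Power / Lenovo ThinkSystem / HPE Integrity / Huawei KunLun": 2,
    "Dell PowerEdge": 26,
    "Unbranded White box (2019 survey)": 49,
    "Unbranded White box (2020 survey)": 53,
}

for platform, minutes in ANNUAL_DOWNTIME_MINUTES.items():
    cost = minutes * COST_PER_MINUTE
    print(f"{platform}: {minutes} min/year -> ${cost:,} per server")
```

At this rate the year-over-year White box increase of four minutes (49 to 53) works out to the $19,992 cost increase cited above.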

Time is money.

The higher monetary costs associated with unbranded White box servers are not surprising. The unbranded White box servers frequently incorporate inexpensive components. And some businesses recklessly run unsupported or pirated versions of operating systems and applications. The aforementioned hourly downtime examples are for just one server. Downtime costs can mount quickly and reach into the millions for corporations with dozens or hundreds of highly unreliable servers.

Survey Highlights

Among the other top survey findings:

• Reliability: IBM Power Systems and Lenovo ThinkSystem hardware and the Linux operating system distributions were once again either first or second in every reliability category, including server, virtualization and security.
• Availability: IBM Z mainframe, Power Systems, Lenovo ThinkSystem, HPE Integrity and Huawei KunLun all provided the highest levels of server, application and service availability. That is, when these servers did experience an outage due to an inherent system flaw, the outages were of the shortest duration – typically one to five minutes.
• Technical Support: Businesses gave high marks to IBM, Lenovo, HPE, Huawei and Dell tech support. Only 1% of IBM and Lenovo customers and 2% of HPE and Huawei users gave those vendors “Poor” or “Unsatisfactory” customer support ratings.
• Hard Drive Failures Most Common Technical Server Flaw: Faulty hard drives are the chief culprits in inherent server reliability/quality issues (58%) followed by Motherboard issues (43%) and processor problems (38%).
• IBM, Lenovo and Huawei KunLun Servers Had Fewest Hard Drive Failures: IBM, Lenovo and Huawei’s KunLun platforms experienced the fewest hard drive quality or failure issues among all of the server distributions within the first one, two and three years of service. Less than one percent – 0.4% – of IBM Z mainframes, for example, experienced technical problems with their hard drives in the first year of usage, followed by the IBM Power Systems and Lenovo ThinkSystem with one percent (1%) each during the first 12 months of deployment.
• Security is Top External Issue Negatively Impacting Reliability: Security and data breaches now have the dubious distinction of being the top cause of downtime.
• Minimum Reliability Requirements Increase: An 88% majority of corporations now require a minimum of “four nines” of uptime – 99.99% – for mission critical hardware, operating systems and main line of business (LOB) applications. This is an increase of five (5) percentage points from ITIC’s 2018 Reliability survey.
• Patch Time Increases: Seven-in-10 businesses now devote from one hour to over four hours applying patches. This is primarily due to a spike in wide ranging security issues such as Email Phishing scams, Ransomware, CEO fraud as well as malware and viruses.
• Increased Server Workloads Cause Reliability Declines: The survey data found that reliability declined in 67% of servers over four (4) years old, when corporations failed to retrofit or upgrade the hardware to accommodate increased workloads and larger, more compute intensive applications. This is up 22 percentage points from the 45% of businesses that said uptime declined due to higher workloads in the ITIC 2018 Reliability poll.
• Hourly Downtime Costs Rise: A 98% majority of firms say hourly downtime costs exceed $150,000 and 88% of respondents estimate hourly downtime expenses exceed $300,000. Just over one-third of ITIC survey respondents – 34% – estimate the cost of a single hour of downtime now tops one million dollars ($1,000,000).

Server hardware, server operating system – and by extension, virtualization reliability, uptime and availability are the core foundational elements of the overarching health of an organization’s entire Digital Age ecosystem and the life blood of daily business operations.

The core reliability of corporate servers, server operating systems and the mission critical applications that run on them is absolutely imperative. The inherent reliability of enterprise hardware, OS and applications is necessary to maintain daily, uninterrupted business operations; ensure secure access to proprietary assets; mitigate risk and drive revenue.

