
KnowBe4 Survey: 64% of Corporate Users Say Security Awareness Training Stops Hacks

A new security survey finds that nearly two-thirds of corporate users – 64% – assert that proactive Security Awareness Training helps their businesses identify and thwart hacks almost immediately upon deployment. And an 86% majority of corporations say Security Awareness Training (SAT) decreased overall security risks and educated employees about the ever-present danger posed by cyber security scams.

Those are the findings of the KnowBe4 “2018 Security Awareness Training Deployment and Trends Survey.”  This annual, independent Web-based survey polled 1,100 organizations worldwide during August and September 2018. The independent study conducted by KnowBe4, a Tampa, Florida-based maker of security training and phishing tools, queried corporations on the leading security threats and challenges facing their firms as cyber security attacks increase and intensify.

ITIC partnered with KnowBe4 on this study, which also polled businesses on the initiatives they’re taking to more proactively combat increasingly diversified and targeted cyber threats. The survey found that 88% of respondents currently deploy Security Awareness Training (SAT) tools. The businesses report that the training plays a pivotal role in identifying and thwarting attacks, minimizing risk and positively changing the employee culture.

Among the other top survey findings:

  • Social Engineering was the top cause of attacks, cited by 77% of respondents, followed by Malware (44%), User Error (27%), a combination of the above (19%) and Password attacks (17%). (See Exhibit 1).
  • Some 84% of respondents said their businesses could quantify the decrease in successful Social Engineering attacks (e.g. Phishing scams, malware, Zero Day exploits, etc.) after just a few simulated exercises following SAT deployment. This is based on 700 anecdotal responses obtained from the essay comments and first-person interviews.
  • On average, respondents reported that Social Engineering cyber hacks like Phishing scams and Malware declined significantly, from a success rate of 40% to 50% down to between zero and five percent, after firms participated in several KnowBe4 SAT sessions.
  • Almost three-quarters – 71% of survey participants – indicate their businesses proactively conduct simulated Phishing attacks on a weekly, monthly or quarterly basis.
  • An overwhelming 96% of respondents affirmed that deploying SAT changed their firm’s computer security culture for the better, making everyone from C-level executives to knowledge workers more cognizant of cyber threats.

Introduction

In the 21st century Digital Age corporations can no longer practice security with 20/20 hindsight.

Complacency and ignorance regarding the security of the corporation’s data assets will almost certainly lead to disaster. Not a day goes by without a major new cyber hack reported.

Threats are everywhere. And no organization is immune.

Hackers are sophisticated, bold and home in on specific targets. The hacks themselves are more prolific, pervasive and pernicious.

The current computing landscape includes virtualization, private, public and hybrid cloud computing, Machine Learning and the Internet of Things (IoT). These technologies are designed to facilitate faster, more efficient communication and better economies of scale by interconnecting machines, devices, applications and people.

The downside: increasing inter-connectivity among devices, applications and people produces a “target rich environment.”  Simply put, there are many more vulnerabilities and potential entry points into the corporate network. IT and security administrators have many more things to manage and they can’t possibly have eyes on everything. Oftentimes, the company’s end users pose the biggest security threat by unknowingly clicking on bad links. But even so-called “trusted” sources like supposedly secure third party service providers, business partners or even internal company executives can unwittingly be the weak links that enable surreptitious entry into the corporate networks.

The ubiquitous nature and myriad types of threats further heighten security risks and significantly raise the danger that every organization – irrespective of size or vertical market – will be a target. The accelerated pace of new cyber security heists via Social Engineering (e.g. Phishing scams, malware, Password attacks, Zero Day exploits, etc.) makes the IT Security administrator’s job extremely daunting.

Fortunately, there is help in the form of Security Awareness Training, which immediately assists organizations in educating employees from the C-suite to the mailroom and transforming the corporate culture from one that is lax to one that is alert and vigilant.

Data & Analysis

Computer and network security has all too often been practiced with 20/20 hindsight. That is, organizations have been lax in implementing and enforcing strong Computer Security Policies.

The KnowBe4 2018 Security Awareness Training Deployment and Trends Survey results indicate a majority of companies recognize the increasing danger posed by myriad pervasive and pernicious cyber threats. Businesses are also acutely aware that Security and IT managers and administrators cannot possibly have “eyes on everything,” as the size, scope and complexity of their respective infrastructures increases along with the number of interconnected people, devices, applications and systems.  Hence, companies are now proactively assuming responsibility for safeguarding their data.

SAT is a cost-effective and expeditious mechanism for heightening user awareness – from the C-Suite to the average worker – of the multiple security threats facing organizations.

Among the other survey highlights:

  • Among businesses victimized by Social Engineering, some 70% of respondents cited Email as the root cause. This is mainly due to end users clicking without thinking and falling prey to a wide range of scams such as Phishing, malware and Zero Day hacks. Another 15% of respondents said they were “Unsure” which is extremely concerning.
  • An 88% majority of respondents currently employ Security Awareness Training Programs and six percent plan to install one within six months.
  • An 86% majority of firms with Security Awareness Training programs conduct simulated Phishing attacks, and that same percentage – 86% – randomize their simulated Phishing attacks.
  • Some 71% of respondents that deploy KnowBe4’s Security Awareness Training said their firms had not been hacked in the last 12 months vs. 29% that said their companies were successfully penetrated (even for a short while before being detected and removed).
  • Survey respondents apply Security Awareness Training programs in a comprehensive manner to ensure the best possible outcomes. Asked to “select all” the mechanisms they use in their SAT programs, 74% said they use Email; 71% employ videos; 43% use Human Trainers; 36% send out Newsletters and 27% engage in seminars/Webinars with third parties.

Overall, the results of the Web-based survey, coupled with over two dozen first-person interviews conducted by KnowBe4 and ITIC, found that Security Awareness Training yields positive outcomes and delivers near immediate Return on Investment (ROI). Approximately two-thirds of the respondents indicated that the training helped their companies to identify and thwart security hacks within the last six months. The participants said Security Awareness Training alerted their firms to potential vulnerabilities and allowed them to block threats. It also enabled security and IT administrators and users to recognize rogue code and quickly remove it before it could cause damage. Another 20% of those polled said their firms had not experienced any hacks in the last six months.

All in all, in this day and age of heightened security and cyber threats, organizations are well advised to proactively safeguard their organizations by implementing Security Awareness Training for their administrators and end users to defend their data assets. For more information, go to: www.knowbe4.com.



ITIC Poll: Human Error and Security are Top Issues Negatively Impacting Reliability

Multiple issues affect the reliability ratings of the various server hardware distributions. ITIC’s 2018 Global Server Hardware, Server OS Reliability Mid-Year Update reveals that three issues in particular stand out as positively or negatively impacting reliability: Human Error, Security and increased workloads.

ITIC’s 2018 Global Server Hardware, Server OS Reliability Mid-Year Update polled over 800 customers worldwide from April through mid-July 2018. In order to obtain the most objective and unbiased results, ITIC accepted no vendor sponsorship for the Web-based survey.

Human Error and Security Are Biggest Reliability Threats

ITIC’s latest 2018 Reliability Mid-Year Update poll also chronicled the strain that external issues place on organizations and their IT departments to ensure that servers and operating systems deliver a high degree of reliability and availability. As Exhibit 1 illustrates, human error and security (encompassing both internal and external hacks) continue to rank as the chief culprits causing unplanned downtime among servers, operating systems and applications for the fourth straight year. After that, there is a drop-off of 22 to 30 percentage points to the remaining issues ranked in the top five downtime causes. Human error and security have held the dubious distinction of being the top two factors precipitating unplanned downtime in the past five ITIC reliability polls.

Analysis

Reliability is a two-way street in which server hardware, OS and application vendors as well as corporate users both bear responsibility for the reliability of their systems and networks.

On the vendor side, there are obvious reasons why mission critical servers from hardware makers like HPE, IBM and Lenovo consistently earn top reliability ratings. As ITIC noted in Part 1 of its reliability survey findings, the reliability gap between high end systems and inexpensive, commodity servers with basic features continues to grow. The reasons include:

  • Research and Development (R&D). Vendors like Cisco, HPE, Huawei, IBM and Lenovo have made an ongoing commitment to R&D and continually refresh/update their solutions.
  • RAS 2.0. The higher end servers incorporate the latest Reliability, Availability and Serviceability (RAS) 2.0 features/functions and are fine-tuned for manageability and security.
  • Price is not the top consideration. Businesses that purchase higher end mission critical and x86 systems like Fujitsu’s Primergy, HPE’s Integrity, Huawei’s KunLun, IBM Z and Power Systems and Lenovo System x want a best in class product offering, first and foremost. These corporations in verticals like banking/finance, government, healthcare, manufacturing, retail and utilities are more influenced by the vendor’s historical ability to act as a true, responsive “partner” delivering highly robust, leading edge hardware. They also want top-notch aftermarket technical service and support, quick response to problems and fast, efficient access to patches and fixes.
  • More experienced IT Managers. In general, IT Managers, application developers, systems engineers and security professionals at corporations that purchase higher end servers from IBM, HPE, Lenovo and Huawei tend to have more experience. The survey found that organizations that buy mission critical servers have IT and technical staff with approximately 12 to 13 years’ experience. By contrast, the average experience among IT managers and systems engineers at companies that purchase less expensive commodity-based servers is about six years.

Highly experienced IT managers are more likely to spot problems before they become major issues that lead to downtime. In the event of an outage, they are also more likely to perform faster remediation, accelerating the time it takes to identify the problem and get the servers and applications up and running again compared with less experienced peers.

In an era of increasingly connected servers, systems, applications, networks and people, there are myriad factors that can potentially undercut reliability. They include:

  • Human Error and Security. To reiterate, these two factors constitute the top threats to reliability. ITIC does not anticipate this changing in the foreseeable future. Some 59% of respondents cited Human Error as their number one issue, followed by 51% that said Security problems caused downtime. And nearly two-thirds – 62% – of businesses indicated that their Security and IT administrators grapple with a near constant deluge of ever more pervasive and pernicious security threats. If the availability, reliability and accessibility of servers, operating systems and mission critical line-of-business (LOB) applications are compromised or denied, end user productivity and business operations suffer immediate consequences.
  • Heavier, more data intensive workloads. The latest ITIC survey data finds that workloads have increased by 14% to 39% over the past 18 months.
  • A 60% majority of respondents say increased workloads negatively impact reliability, up 15 percentage points since 2017. Of that 60%, approximately 80% of firms experiencing reliability declines have commodity servers: e.g., White box and older Dell, HPE ProLiant and Oracle hardware more than 3 ½ years old that hasn’t been retrofitted/upgraded.
  • Provisioning complex new applications that must integrate and interoperate with legacy systems and applications. Some 40% of survey respondents rate application deployment and provisioning as among their biggest challenges and one that can negatively impact reliability.
  • IT Departments Spending More Time Applying Patches. Some 54% of those polled indicated they are spending anywhere from one hour to over four hours applying patches – especially security patches. Users said the security patches are large, time consuming and often complex, necessitating that they test and apply them manually. The percentage of firms automatically applying patches commensurately decreased from 30% in 2016 to just 9% in the latest 2018 poll. Overall, the latest ITIC survey shows that as of July 2018 companies are applying 27% more patches than at any time since 2015.
  • Deploying new technologies like Artificial Intelligence (AI), Big Data Analytics which require special expertise by IT managers and application developers as well as a high degree of compatibility and interoperability.
  • A rise in Internet of Things (IoT) and edge computing deployments which in turn, increase the number of connections that organizations and their IT departments must oversee and manage.
  • Seven-in-10, or 71%, of survey respondents said aged hardware (3 ½+ years old) had a negative impact on server uptime and reliability, compared with just 16% that said their older servers had not experienced any declines in reliability or availability. This is an increase of five percentage points from the 66% of those polled who responded affirmatively to that survey question in the ITIC 2017 Reliability Survey, and a 27-percentage-point increase from the 44% who said outmoded hardware negatively impacted uptime in the ITIC 2014 Reliability poll.

Corporations’ Minimum Reliability Requirements Rise

At the same time, corporations now require higher levels of reliability than they did even two or three years ago. The reliability and continuous operation of the core infrastructure and its component parts – server hardware, server operating system software, applications and other devices (e.g. firewalls, unified communications devices and uninterruptible power supplies) – are more crucial than ever to the organization’s bottom line.

It is clear that corporations – from the smallest companies with fewer than 25 people to the largest multinational concerns with over one hundred thousand employees – are more risk averse and concerned about the potential for lawsuits and the damage to their reputation in the wake of an outage. ITIC’s survey data indicates that an 84% majority of organizations now require a minimum of “four nines” – 99.99% – reliability and uptime.

This is the equivalent of approximately 52.5 minutes of unplanned annual downtime for mission critical systems and applications, or roughly 4.4 minutes of unplanned monthly outage for servers, applications and networks.
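The “four nines” arithmetic can be sketched in a few lines. The snippet below is a minimal illustration of the standard availability-to-downtime conversion; the function name and structure are my own, not an ITIC tool:

```python
# Convert an availability percentage into an allowable unplanned-downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_budget(availability_pct: float) -> tuple[float, float]:
    """Return (annual_minutes, monthly_minutes) of allowable unplanned downtime."""
    unavailable = 1.0 - availability_pct / 100.0
    annual = MINUTES_PER_YEAR * unavailable
    return annual, annual / 12.0

annual, monthly = downtime_budget(99.99)  # "four nines"
print(f"99.99% uptime allows {annual:.2f} min/year or {monthly:.2f} min/month")
```

The same function shows how steeply the budget shrinks with each added nine: 99.9% allows about 525 minutes a year, while 99.999% allows barely five.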

Conclusions

The vendors are one-half of the equation. Corporate users also bear responsibility for the reliability of their servers and applications based on configuration, utilization, provisioning, management and security.

To minimize downtime and increase system and network availability it is imperative that corporations work with vendor partners to ensure that reliability and uptime are inherent features of all their servers, network connectivity devices, applications and mobile devices. This requires careful tactical and strategic planning.

Human error and security are, and will continue to be, the greatest threats to the underlying reliability and stability of server hardware, operating systems and applications. A key element of every firm’s reliability strategy and initiative is to obtain the necessary training and certification for IT managers, engineers and security professionals. Companies should also have their security professionals take security awareness training. Engaging the services of third party vendors to conduct security vulnerability testing to identify and eliminate potential vulnerabilities is also highly recommended. Corporations must also deploy the appropriate Auditing, BI and network monitoring tools. Every 21st Century network environment needs continuous, comprehensive end-to-end monitoring of its complex, distributed applications in physical, virtual and cloud environments.

Ask yourself: “How much reliability does the infrastructure require and how much risk can the company safely tolerate?”


ITIC 2018 Server Reliability Mid-Year Update: IBM Z, IBM Power, Lenovo System x, HPE Integrity Superdome & Huawei KunLun Deliver Highest Uptime

August 8, 2018

For the tenth straight year, IBM and Lenovo servers again achieved top rankings in ITIC’s 2017 – 2018 Global Server Hardware and Server OS Reliability survey.

IBM’s Z Systems Enterprise server is in a class of its own. The IBM mainframe continues to exhibit peerless reliability, besting all competitors. The Z recorded less than 10 seconds of unplanned downtime per server each month. Additionally, less than one-half of one percent of all IBM Z customers reported unplanned outages totaling more than four (4) hours of system downtime in a single year.

Among mainstream servers, IBM Power Systems 7 and 8 and the Lenovo x86 X6 mission critical hardware consistently deliver the highest levels of reliability/uptime among 14 server hardware platforms and 11 different mainstream server hardware virtualization platforms. Each platform averaged just 2.1 minutes of unplanned per server downtime per annum (See Exhibit 1).

That makes the IBM Power Systems and Lenovo x86 servers approximately 17 to 18 times more stable and available than the least reliable distributions – the rival Oracle and HPE ProLiant servers.
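As a rough sanity check on that multiple, scaling the 2.1-minute figure by 17x to 18x implies the annual downtime range for the least reliable platforms. This is my own back-of-the-envelope arithmetic, not a figure ITIC publishes:

```python
# Back-of-the-envelope check (illustrative, not ITIC-published numbers):
# 2.1 minutes of unplanned annual downtime per server for the most reliable
# platforms, scaled by the 17x-18x multiple cited above.
BEST_ANNUAL_DOWNTIME_MIN = 2.1

low = BEST_ANNUAL_DOWNTIME_MIN * 17   # lower bound of the implied range
high = BEST_ANNUAL_DOWNTIME_MIN * 18  # upper bound of the implied range
print(f"Implied downtime for the least reliable servers: {low:.1f}-{high:.1f} min/year")
```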

Additionally, the latest ITIC survey results indicate just one percent of IBM Power Systems and Lenovo System x servers experienced over four (4) hours of unplanned annual downtime. This is the best showing among the 14 different server platforms surveyed.

ITIC’s 10th annual independent ITIC 2017 – 2018 Global Server Hardware and Server OS Reliability survey polled 800 organizations worldwide from August through December 2017.  In order to obtain the most accurate and unbiased results, ITIC accepted no vendor sponsorship. …

