
KnowBe4 Survey: 64% of Corporate Users Say Security Awareness Training Stops Hacks

A new security survey finds that nearly two-thirds of corporate users (64%) assert that proactive Security Awareness Training helps their businesses identify and thwart hacks immediately upon deployment. And an 86% majority of corporations say Security Awareness Training (SAT) decreased overall security risks and educated employees about the ever-present danger posed by cyber security scams.

Those are the findings of the KnowBe4 “2018 Security Awareness Training Deployment and Trends Survey.” This annual, independent, Web-based survey polled 1,100 organizations worldwide during August and September 2018. The study, conducted by KnowBe4, a Tampa, Florida-based maker of security training and phishing simulation tools, queried corporations on the leading security threats and challenges facing their firms as cyber security attacks increase and intensify.

ITIC partnered with KnowBe4 on this study, which also polled businesses on the initiatives they’re taking to more proactively combat increasingly diversified and targeted cyber threats. The survey found that 88% of respondents currently deploy SAT tools. These businesses report that the training plays a pivotal role in identifying and thwarting attacks, minimizing risk and positively changing employee culture.

Among the other top survey findings:

  • Social Engineering was the top cause of attacks, cited by 77% of respondents, followed by Malware (44%), User Error (27%), a combination of the above (19%) and Password attacks (17%). (See Exhibit 1.)
  • Some 84% of respondents said their businesses could quantify the decrease in successful Social Engineering attacks (e.g. Phishing scams, malware, Zero Day exploits) after just a few simulated SAT exercises with their end users. This is based on 700 anecdotal responses obtained from essay comments and first-person interviews.
  • On average, respondents reported that Social Engineering hacks like Phishing scams and Malware declined significantly, from a success rate of 40% to 50% down to between zero and five percent, after firms participated in several KnowBe4 SAT sessions.
  • Almost three-quarters (71%) of survey participants indicate their businesses proactively conduct simulated Phishing attacks on a weekly, monthly or quarterly basis.
  • An overwhelming 96% of respondents affirmed that deploying SAT changed their firm’s computer security culture for the better, making everyone from C-level executives to knowledge workers more cognizant of cyber threats.

Introduction

In the 21st century Digital Age corporations can no longer practice security with 20/20 hindsight.

Complacency and ignorance regarding the security of the corporation’s data assets will almost certainly lead to disaster. Not a day goes by without a major new cyber hack reported.

Threats are everywhere. And no organization is immune.

Hackers are sophisticated, bold and home in on specific targets. The hacks themselves are more prolific, pervasive and pernicious.

The current computing landscape includes virtualization, private, public and hybrid cloud computing, Machine Learning and the Internet of Things (IoT). These technologies are designed to facilitate faster, more efficient communication and better economies of scale by interconnecting machines, devices, applications and people.

The downside: increasing inter-connectivity among devices, applications and people produces a “target rich environment.”  Simply put, there are many more vulnerabilities and potential entry points into the corporate network. IT and security administrators have many more things to manage and they can’t possibly have eyes on everything. Oftentimes, the company’s end users pose the biggest security threat by unknowingly clicking on bad links. But even so-called “trusted” sources like supposedly secure third party service providers, business partners or even internal company executives can unwittingly be the weak links that enable surreptitious entry into the corporate networks.

The ubiquitous nature and myriad types of threats further heighten security risks and significantly raise the danger that every organization – irrespective of size or vertical market – will be a target. The accelerated pace of new cyber security heists via Social Engineering (e.g. Phishing scams, malware, Password attacks, Zero Day exploits, etc.) makes the IT Security administrator’s job extremely daunting.

Fortunately, there is help in the form of Security Awareness Training, which assists organizations in educating employees from the C-suite to the mail room and transforming the corporate culture from one that is lax to one that is alert and vigilant.

Data & Analysis

Computer and network security has all too often been practiced with 20/20 hindsight. That is, organizations have been lax in implementing and enforcing strong Computer Security Policies.

The KnowBe4 2018 Security Awareness Training Deployment and Trends Survey results indicate a majority of companies recognize the increasing danger posed by myriad pervasive and pernicious cyber threats. Businesses are also acutely aware that Security and IT managers and administrators cannot possibly have “eyes on everything,” as the size, scope and complexity of their respective infrastructures increases along with the number of interconnected people, devices, applications and systems.  Hence, companies are now proactively assuming responsibility for safeguarding their data.

SAT is a cost-effective and expeditious mechanism for heightening user awareness – from the C-Suite to the average worker – of the multiple security threats facing organizations.

Among the other survey highlights:

  • Among businesses victimized by Social Engineering, some 70% of respondents cited Email as the root cause, mainly due to end users clicking without thinking and falling prey to a wide range of scams such as Phishing, malware and Zero Day hacks. Another 15% of respondents said they were “Unsure,” which is extremely concerning.
  • An 88% majority of respondents currently employ Security Awareness Training Programs and six percent plan to install one within six months.
  • An 86% majority of firms with Security Awareness Training programs conduct simulated Phishing attacks, and the same percentage (86%) randomize those simulated attacks.
  • Some 71% of respondents that deploy KnowBe4’s Security Awareness Training said their firms had not been hacked in the last 12 months vs. 29% that said their companies were successfully penetrated (even for a short while before being detected and removed).
  • Survey respondents apply Security Awareness Training programs in a comprehensive manner to ensure the best possible outcomes. Asked to “select all” the mechanisms they use in their SAT programs, 74% said they use Email; 71% employ videos; 43% use Human Trainers; 36% send out Newsletters and 27% engage in seminars/Webinars with third parties.

Overall, the results of the Web-based survey, coupled with over two dozen first-person interviews conducted by KnowBe4 and ITIC, found that Security Awareness Training yields positive outcomes and delivers near-immediate Return on Investment (ROI). Approximately two-thirds of respondents indicated that the training helped their companies identify and thwart security hacks within the last six months. Participants said the training alerted their firms to potential vulnerabilities and allowed them to block threats; it also enabled security and IT administrators and users to recognize rogue code and quickly remove it before it could cause damage. Another 20% of those polled said their firms had not experienced any hacks in the last six months.

All in all, in this age of heightened cyber threats, organizations are well advised to proactively safeguard their data assets by implementing Security Awareness Training for administrators and end users. For more information, go to: www.knowbe4.com.


ITIC Poll: Human Error and Security are Top Issues Negatively Impacting Reliability

Multiple issues contribute to the reliability ratings of the various server hardware platforms. ITIC’s 2018 Global Server Hardware, Server OS Reliability Mid-Year Update reveals that three issues in particular stand out as positively or negatively impacting reliability: Human Error, Security and increased workloads.

ITIC’s 2018 Global Server Hardware, Server OS Reliability Mid-Year Update polled over 800 customers worldwide from April through mid-July 2018. In order to obtain the most objective and unbiased results, ITIC accepted no vendor sponsorship for the Web-based survey.

Human Error and Security Are Biggest Reliability Threats

ITIC’s latest 2018 Reliability Mid-Year Update poll also chronicled the strain that external issues place on organizations and their IT departments to ensure that servers and operating systems deliver a high degree of reliability and availability. As Exhibit 1 illustrates, human error and security (from both internal and external hacks) rank as the chief culprits causing unplanned downtime among servers, operating systems and applications for the fourth straight year. After that, there is a drop-off of 22 to 30 percentage points for the remaining issues ranked in the top five downtime causes. Human error and security have had the dubious distinction of being the top two factors precipitating unplanned downtime in the past five ITIC reliability polls.

Analysis

Reliability is a two-way street: server hardware, OS and application vendors as well as corporate users bear responsibility for the reliability of their systems and networks.

On the vendor side, there are obvious reasons why mission critical servers from hardware makers like HPE, IBM and Lenovo consistently gain top reliability ratings. As ITIC noted in Part 1 of its reliability survey findings, the reliability gap between high end systems and inexpensive, commodity servers with basic features continues to grow. The reasons include:

  • Research and Development (R&D). Vendors like Cisco, HPE, Huawei, IBM and Lenovo have made an ongoing commitment to R&D and continually refresh/update their solutions.
  • RAS 2.0. The higher end servers incorporate the latest Reliability, Availability and Serviceability (RAS) 2.0 features/functions and are fine-tuned for manageability and security.
  • Price is not the top consideration. Businesses that purchase higher end mission critical and x86 systems like Fujitsu’s Primergy, HPE’s Integrity, Huawei’s KunLun, IBM Z and Power Systems and Lenovo System x want a best-in-class product offering, first and foremost. These corporations in verticals like banking/finance, government, healthcare, manufacturing, retail and utilities are motivated by the vendor’s historical ability to act as a true, responsive “partner” delivering highly robust, leading edge hardware. They also want top-notch aftermarket technical service and support, quick response to problems and fast, efficient access to patches and fixes.
  • More experienced IT Managers. In general, IT Managers, application developers, systems engineers and security professionals at corporations that purchase higher end servers from IBM, HPE, Lenovo and Huawei tend to have more experience. The survey found that organizations buying mission critical servers have IT and technical staff with approximately 12 to 13 years’ experience. By contrast, the average experience among IT managers and systems engineers at companies that purchase less expensive commodity servers is about six years.

Highly experienced IT managers are more likely to spot problems before they become major issues that lead to downtime. In the event of an outage, they are also more likely to perform faster remediation, accelerating the time it takes to identify the problem and get servers and applications up and running again faster than less experienced peers.

In an era of increasingly connected servers, systems, applications, networks and people, there are myriad factors that can potentially undercut reliability. They include:

  • Human Error and Security. To reiterate, these two factors constitute the top threats to reliability, and ITIC does not anticipate this changing in the foreseeable future. Some 59% of respondents cited Human Error as their number one issue, followed by 51% that said Security problems caused downtime. And nearly two-thirds (62%) of businesses indicated that their Security and IT administrators grapple with a near-constant deluge of ever more pervasive and pernicious security threats. If the availability, reliability or accessibility of servers, operating systems and mission critical line of business (LOB) applications is compromised or denied, end user productivity and business operations suffer immediate consequences.
  • Heavier, more data intensive workloads. The latest ITIC survey data finds that workloads have increased by 14% to 39% over the past 18 months.
  • A 60% majority of respondents say increased workloads negatively impact reliability, up 15 percentage points since 2017. Of that 60%, approximately 80% of firms experiencing reliability declines have commodity servers: e.g., white box systems and older Dell, HPE ProLiant and Oracle hardware more than 3 ½ years old that hasn’t been retrofitted/upgraded.
  • Provisioning complex new applications that must integrate and interoperate with legacy systems and applications. Some 40% of survey respondents rate application deployment and provisioning as among their biggest challenges and one that can negatively impact reliability.
  • IT Departments Spending More Time Applying Patches. Some 54% of those polled indicated they spend anywhere from one hour to over four hours applying patches – especially security patches. Users said the security patches are large, time consuming and often complex, necessitating that they test and apply them manually. The percentage of firms automatically applying patches commensurately decreased from 30% in 2016 to just 9% in the latest 2018 poll. Overall, the latest ITIC survey shows that as of July 2018 companies are applying 27% more patches than at any time since 2015.
  • Deploying new technologies like Artificial Intelligence (AI) and Big Data Analytics, which require special expertise from IT managers and application developers as well as a high degree of compatibility and interoperability.
  • A rise in Internet of Things (IoT) and edge computing deployments which, in turn, increase the number of connections that organizations and their IT departments must oversee and manage.
  • Seven in 10 (71%) of survey respondents said aged hardware (3 ½+ years old) had a negative impact on server uptime and reliability, compared with just 16% that said their older servers had not experienced any declines in reliability or availability. This is an increase of five percentage points from the 66% who responded affirmatively to that question in the ITIC 2017 Reliability Survey, and a 27 percentage-point increase from the 44% who said outmoded hardware negatively impacted uptime in the ITIC 2014 Reliability poll.

Corporations’ Minimum Reliability Requirements Rise

At the same time, corporations now require higher levels of reliability than they did even two or three years ago. The reliability and continuous operation of the core infrastructure and its component parts – server hardware, server operating system software, applications and other devices (e.g. firewalls, unified communications devices and uninterruptible power supplies) – are more crucial than ever to the organization’s bottom line.

It is clear that corporations, from the smallest companies with fewer than 25 people to the largest multinational concerns with over one hundred thousand employees, are more risk averse and concerned about the potential for lawsuits and damage to their reputations in the wake of an outage. ITIC’s survey data indicates that an 84% majority of organizations now require a minimum of “four nines” – 99.99% – reliability and uptime.

This is the equivalent of 52 minutes of unplanned downtime per year for mission critical systems and applications, or just 4.33 minutes of unplanned outage per month for servers, applications and networks.
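The “nines” arithmetic above is easy to verify. The sketch below assumes a 365-day year; the exact figures depend on rounding (99.99% availability works out to 52.56 minutes per year, which the text rounds down to 52, and 4.33 minutes per month is simply 52 divided by 12):

```python
# Convert an availability percentage into an annual/monthly unplanned-downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_budget(availability_pct: float) -> tuple[float, float]:
    """Return (minutes per year, minutes per month) of allowable downtime."""
    unavailable_fraction = 1 - availability_pct / 100
    per_year = MINUTES_PER_YEAR * unavailable_fraction
    return per_year, per_year / 12

yearly, monthly = downtime_budget(99.99)   # "four nines"
print(f"{yearly:.2f} min/year, {monthly:.2f} min/month")  # → 52.56 min/year, 4.38 min/month
```

The same function applied to “five nines” (99.999%) yields roughly 5.26 minutes per year, matching the figures cited elsewhere in these surveys.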

Conclusions

The vendors are one-half of the equation. Corporate users also bear responsibility for the reliability of their servers and applications based on configuration, utilization, provisioning, management and security.

To minimize downtime and increase system and network availability, it is imperative that corporations work with vendor partners to ensure that reliability and uptime are inherent features of all their servers, network connectivity devices, applications and mobile devices. This requires careful tactical and strategic planning.

Human error and security are, and will continue to be, the greatest threats to the underlying reliability and stability of server hardware, operating systems and applications. A key element of every firm’s reliability strategy is obtaining the necessary training and certification for IT managers, engineers and security professionals. Companies should also have their security professionals take security awareness training. Engaging third party vendors to conduct security vulnerability testing to identify and eliminate potential vulnerabilities is also highly recommended. Corporations must also deploy the appropriate auditing, BI and network monitoring tools. Every 21st century network environment needs continuous, comprehensive end-to-end monitoring of its complex, distributed applications in physical, virtual and cloud environments.

Ask yourself: “How much reliability does the infrastructure require and how much risk can the company safely tolerate?”


ITIC 2018 Server Reliability Mid-Year Update: IBM Z, IBM Power, Lenovo System x, HPE Integrity Superdome & Huawei KunLun Deliver Highest Uptime

August 8, 2018

For the tenth straight year, IBM and Lenovo servers again achieved top rankings in ITIC’s 2017 – 2018 Global Server Hardware and Server OS Reliability survey.

IBM’s Z Systems Enterprise server is in a class of its own. The IBM mainframe continues to exhibit peerless reliability, besting all competitors. The Z recorded less than 10 seconds of unplanned per server downtime each month. Additionally, less than one-half of one percent of all IBM Z customers reported unplanned outages totaling more than four (4) hours of system downtime in a single year.

Among mainstream servers, IBM Power Systems (POWER7 and POWER8) and the Lenovo x86 X6 mission critical hardware consistently deliver the highest levels of reliability/uptime among 14 server hardware and 11 different mainstream server virtualization platforms. Each platform averaged just 2.1 minutes of unplanned per server/per annum downtime (See Exhibit 1).

That makes the IBM Power Systems and Lenovo x86 servers approximately 17 to 18 times more stable and available than the least reliable distributions – the rival Oracle and HPE ProLiant servers.

Additionally, the latest ITIC survey results indicate just one percent of IBM Power Systems and Lenovo System x servers experienced over four (4) hours of unplanned annual downtime. This is the best showing among the 14 different server platforms surveyed.

ITIC’s 10th annual independent ITIC 2017 – 2018 Global Server Hardware and Server OS Reliability survey polled 800 organizations worldwide from August through December 2017.  In order to obtain the most accurate and unbiased results, ITIC accepted no vendor sponsorship. …


IBM, Lenovo Servers Deliver Top Reliability, Cisco UCS, HPE Integrity Gain

 IBM z Systems Enterprise; IBM Power Systems Servers Most Reliable for Ninth Straight Year;  Lenovo x86 Servers Deliver Highest Uptime/Availability among all Intel x86-based Systems

For the ninth year in a row, corporate enterprise users said IBM’s z Systems Enterprise mainframe class server achieved near flawless reliability, recording less than 10 seconds of unplanned per server downtime each month. Among mainstream servers,  IBM Power Systems devices and the Lenovo x86 platform delivered the highest levels of reliability/uptime among 14 server hardware and 11 different server hardware virtualization platforms.

Those are the results of the ITIC 2017 Global Server Hardware and Server OS Reliability survey which polled 750 organizations worldwide during April/May 2017.

Among the top survey findings:

  • IBM z Systems Enterprise mainframe class systems had the lowest incidence – 0% – of more than 4 hours of per server/per annum downtime of any hardware platform. Specifically, IBM z Systems mainframe class servers exhibit true mainframe fault tolerance, experiencing just 0.96 minutes of unplanned per server annual downtime. That equates to 8 seconds per month or a “blink and you miss it” 2 seconds of unplanned weekly downtime. This is an improvement over the 1.12 minutes of per server/per annum downtime the z Systems servers recorded in ITIC’s 2016 – 2017 Reliability poll nine months ago.
  • Among mainstream hardware platforms, IBM Power Systems and Lenovo System x running Linux have the least unplanned downtime – 2.5 and 2.8 minutes per server/per year, respectively – of any mainstream Linux server platform.
  • 88% of IBM Power Systems and 87% of Lenovo System x users running RHEL, SuSE or Ubuntu Linux experience fewer than one unplanned outage per server, per year.
  • Only two percent of IBM and Lenovo servers recorded more than 4 hours of unplanned per server/per annum downtime, followed by six percent of HPE servers, eight percent of Dell servers and 10% of Oracle servers.
  • IBM and Lenovo hardware and the Linux operating system distributions were either first or second in every reliability category, including virtualization and security.
  • Lenovo x86 servers achieved the highest reliability ratings among all competing x86 platforms.
  • Lenovo took top marks for Technical Service and Support, followed by Cisco and IBM.
  • Some 66% of survey respondents said aged hardware (3 ½+ years old) had a negative impact on server uptime and reliability vs. 21% that said it has not impacted reliability/uptime. This is a 22 percentage-point increase from the 44% who said outmoded hardware negatively impacted uptime in 2014.
  • Reliability continues to decline for the fifth year in a row on the HP ProLiant and Oracle’s SPARC & x86 hardware and Solaris OS. Reliability on the Oracle platforms declined slightly mainly due to aging. Many Oracle hardware customers are eschewing upgrades, opting instead to migrate to rival platforms.
  • Some 16% of Oracle customers rated service & support as Poor or Unsatisfactory. Dissatisfaction with Oracle licensing and pricing policies remains consistently high for the last three years.
  • Only 1% of Cisco, 1% of Dell, 1% of IBM and Lenovo, 3% of HP, 3% of Fujitsu and 4% of Toshiba users gave those vendors “Poor” or “Unsatisfactory” customer support ratings.


Hourly Downtime Tops $300K for 81% of Firms; 33% of Enterprises Say Downtime Costs >$1M

The cost of downtime continues to increase, as do the business risks. An 81% majority of organizations now require a minimum of 99.99% availability. This is the equivalent of 52 minutes of unplanned downtime per year for mission critical systems and applications, or just 4.33 minutes of unplanned outage per month for servers, applications and networks.

Over 98% of large enterprises with more than 1,000 employees say that on average, a single hour of downtime per year costs their company over $100,000, while 81% of organizations report that the cost exceeds $300,000. Even more significantly: one in three enterprises (33%) indicate that hourly downtime costs their firms $1 million or more (See Exhibit 1). It’s important to note that these statistics represent the “average” hourly cost of downtime. In a worst case scenario – if a device or application becomes unavailable for any reason – the monetary losses to the organization can reach millions per minute. Devices, applications and networks can become unavailable for myriad reasons, including natural and man-made catastrophes; faulty hardware; application bugs; security flaws or hacks; and human error. Business-related issues, such as a Regulatory Compliance inspection or litigation, can also force the organization to shutter its operations. Whatever the reason, when the network and its systems are unavailable, productivity grinds to a halt and business ceases.
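Translating the hourly figures above into per-minute exposure is straightforward. The sketch below is a simple illustrative calculation, not survey data; the $300,000/hour input and the 4.33-minute outage length are taken from the figures cited in the text:

```python
# Translate an hourly downtime cost into the cost of an outage of a given length.
def downtime_cost(hourly_cost_usd: float, outage_minutes: float) -> float:
    """Estimated cost (USD) of an outage lasting `outage_minutes`."""
    return hourly_cost_usd / 60 * outage_minutes

# At the $300,000/hour threshold reported by 81% of firms, even the full
# monthly "four nines" budget of 4.33 minutes represents ~$21,650 of exposure.
print(f"${downtime_cost(300_000, 4.33):,.0f}")
```

For the three percent of organizations cited later, where losses run to millions per minute, the same arithmetic scales the hourly input accordingly.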

Highly regulated vertical industries like Banking and Finance, Food, Government, Healthcare, Hospitality, Hotels, Manufacturing, Media and Communications, Retail, Transportation and Utilities must also factor in the potential losses related to litigation, as well as civil penalties stemming from failure to meet Service Level Agreements (SLAs) or Compliance Regulations. Moreover, for a select three percent of organizations whose businesses are based on high volume data transactions – banks, stock exchanges, online retailers or even utility firms – losses may be calculated in millions of dollars per minute. …


Cost of Hourly Downtime Soars: 81% of Enterprises Say it Exceeds $300K On Average

The only good downtime is no downtime.

ITIC’s latest survey data finds that 98% of organizations say a single hour of downtime costs over $100,000; 81% of respondents indicated that 60 minutes of downtime costs their business over $300,000. And a record one-third or 33% of enterprises report that one hour of downtime costs their firms $1 million to over $5 million.

For the fourth straight year, ITIC’s independent survey data indicates that the cost of hourly downtime has increased. The average cost of a single hour of unplanned downtime has risen by 25% to 30% since 2008, when ITIC first began tracking these figures.

In ITIC’s 2013 – 2014 survey, just three years ago, 95% of respondents indicated that a single hour of downtime cost their company over $100,000. However, just over 50% said the cost exceeded $300,000, and only one in 10 enterprises reported that hourly downtime costs their firms $1 million or more. In ITIC’s latest poll, one in three businesses (33%) said that hourly downtime costs top $1 million or even $5 million.

Keep in mind that these are “average” hourly downtime costs. In certain use case scenarios – such as financial services or stock transactions – the downtime costs can conceivably exceed millions per minute. Additionally, an outage that occurs during peak usage hours may cost the business more than the average figures cited here. …


IBM z13s Delivers Power, Performance, Fault Tolerant Reliability and Security for Hybrid Clouds

Security. Reliability. Performance. Analytics. Services.

These are the most crucial considerations for corporate enterprises in choosing a hardware platform. The underlying server hardware functions as the foundational element for the business’ entire infrastructure and interconnected environment. Today’s 21st century Digital Age networks are characterized by increasingly demand-intensive workloads and the need to use Big Data analytics to analyze and interpret massive volumes and varieties of data, make proactive decisions and keep the business competitive. Security is a top priority: it’s absolutely essential to safeguard sensitive data and Intellectual Property (IP) from sophisticated, organized external hackers and to defend against threats posed by internal employees.

The latest IBM z13s enterprise server delivers embedded security, state-of-the-art analytics and unparalleled reliability, performance and throughput. It is fine-tuned for hybrid cloud environments, and it’s especially useful as a secure foundational element in Internet of Things (IoT) deployments. The newly announced z13s is highly robust: it supports the most compute-intensive workloads in hybrid cloud and on-premises environments. The newest member of the z Systems family, the z13s incorporates advanced cryptography features embedded in the hardware that allow it to encrypt and decrypt data twice as fast as previous generations, with no reduction in transactional throughput, owing to an updated cryptographic coprocessor on every chip core and tamper-resistant hardware-accelerated cryptographic coprocessor cards. …


IBM, Lenovo Top ITIC 2016 Reliability Poll; Cisco Comes on Strong

IBM Power Systems Servers Most Reliable for Seventh Straight Year; Lenovo x86 Servers Deliver Highest Uptime/Availability among all Intel x86-based Systems; Cisco UCS Stays Strong; Dell Reliability Ratchets Up; Intel Xeon Processor E7 v3 chips incorporate advanced analytics; significantly boost reliability of x86-based servers

In 2016 and beyond, infrastructure reliability is more essential than ever.

The overall health of network operations, applications, management and security functions all depends on the core foundational elements – server hardware, server operating systems and virtualization – to deliver high availability, robust management and solid security. The reliability of the server, server OS and virtualization platforms is the cornerstone of the entire network infrastructure. The individual and collective reliability of these platforms has a direct, immediate and long lasting impact on daily operations and business results. For the seventh year in a row, corporate enterprise users said IBM server hardware delivered the highest levels of reliability/uptime among 14 server hardware and 11 different server hardware virtualization platforms. A 61% majority of IBM Power Systems servers and Lenovo System x servers achieved “five nines” or 99.999% availability – the equivalent of 5.25 minutes of unplanned per server/per annum downtime – compared to 46% of Hewlett-Packard servers and 40% of Oracle server hardware. …
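The “five nines” figure above can be sanity-checked with the same availability arithmetic used throughout these surveys. A minimal sketch, assuming a 365-day year (the exact result, 5.256 minutes, is truncated to 5.25 in the text):

```python
# "Five nines" availability: 99.999% uptime leaves 0.001% of the year for outages.
MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600 minutes in a non-leap year

five_nines_downtime = MINUTES_PER_YEAR * (1 - 0.99999)
print(round(five_nines_downtime, 2))      # ≈ 5.26 minutes per server, per year
```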


Parallels Access 2.0 Adds Android Support, Lowers Pricing

Parallels Access 2.0, a remote desktop application for Android and iOS tablets and smartphones, is a “must have” for anyone who needs seamless, efficient remote access to PC and Mac desktop applications from those devices.

Desktops to Go

Parallels, a well established and respected vendor in the remote desktop access arena for the Apple Mac, iPhone and iPad market, has upped its game with the 2.0 release of its Parallels Access application (www.parallels.com/access). The newest version of the remote access package now supports Android phones and tablets. It also delivers a slew of new features for an improved, more seamless remote access experience.

At the same time, Parallels also lowered the retail pricing on the product. Parallels Access 2.0 now lists for $19.99 annually or $34.99 for two years, for individual users (with up to five computers). And finally, the company introduced Parallels Access for Business (www.parallels.com/access-business) which enables organizations to centrally assign, manage, and secure remote access to their computers. …


IBM z/OS, IBM AIX, Debian and Ubuntu Score Highest Security Ratings

Eight out of 10 (82%) of the over 600 respondents to ITIC’s 2014-2015 Global Server Hardware and Server OS Reliability survey say security issues negatively impact overall server, operating system and network reliability. Of that figure, a 53% majority say that security vulnerabilities and hacks have a “moderate,” “significant” or “crucial” impact on network availability and uptime (See Exhibit 1).

Overall, the latest ITIC survey results show that organizations are still more reactive than proactive regarding security threats. Some 15% of the over 600 global corporate respondents are extremely lax: seven percent said that security issues have no impact on their environment, while another eight percent indicated that they don’t track whether security issues negatively affect the uptime and availability of their networks. In contrast, 24% of survey participants – one in four – said security has a “significant” or “crucial” negative impact on network reliability and performance.

Still, despite the well documented, high profile hacks of companies like Target, eBay, Google and other big name vendors this year, the survey found that seven out of 10 firms (70%) are generally confident in the security of their hardware, software and applications – until they get hacked. …

