
The Cloud Gets Crowded and More Competitive

The cloud is getting crowded.

In 2022, the cloud computing market – particularly the hybrid cloud – is hotter and more competitive than ever.

Corporate enterprises are flocking to the cloud as a way to offload onerous IT administrative tasks and more easily and efficiently manage increasingly complex infrastructure, storage and security. Migrating operations from the data center to the cloud can also greatly reduce their operational and capital expenditure costs.

Cloud vendors led by market leaders like Amazon Web Services (AWS), Microsoft Azure, Google Cloud, IBM Cloud, Oracle Cloud Infrastructure, SAP, Salesforce, Rackspace Cloud and VMware, as well as China's Alibaba and Huawei Cloud, are all racing to meet demand. The current accelerated shift to the cloud was fueled by the COVID-19 global pandemic, which created supply chain disruptions and upended many aspects of traditional work life. Since 2020, government agencies, commercial businesses and schools have shifted to remote working and learning. Although COVID is generally waning (albeit with continuing flare-ups), a hybrid work environment is the new normal. This, in turn, makes a compelling business case for furthering cloud migrations.

In 2022, more than $1.3 trillion in enterprise IT spending is at stake from the shift to cloud, and that revenue will increase to nearly $1.8 trillion by 2025 according to the February 2022 report “Market Impact: Cloud Shift – 2022 Through 2025” by Gartner, Inc. in Stamford, Conn.  Furthermore, Gartner’s latest research forecasts that enterprise IT spending on public cloud computing, within addressable market segments, will outpace traditional IT spending in 2025.

Hottest cloud trends in 2022

Hybrid Clouds

Hybrid cloud is exactly what its name implies: it’s a combination of public, private and dedicated on-premises datacenter infrastructure and applications. Companies can adopt a hybrid approach for specific use cases and applications – outsourcing some portions of their operations to a hosted cloud environment, while keeping others onsite. This approach lets companies continue to leverage and maintain their legacy data infrastructure as they migrate to the cloud.

Cloud security and compliance: There is no such thing as too much security. ITIC's 2022 Global Server Hardware Security survey indicates that businesses experienced an 84% surge in security incidents – ransomware, email phishing scams and targeted data breaches – over the last two years. Hackers are extremely sophisticated; they choose their targets with great precision, with the intent to inflict maximum damage and net the biggest payback. This trend shows no signs of abating. In 2021, the average cost of a successful data breach rose to $4.24 million (USD), according to the 2021 Cost of a Data Breach Study, jointly conducted by IBM and the Ponemon Institute. That is the highest figure in the 17 years since IBM and Ponemon began conducting the survey – an increase of 10% over the $3.86 million average in 2020, and of 20% over the last two years. Not surprisingly, in 2021, 61% of malware directed at enterprises targeted remote employees via cloud applications. Any security breach will have a domino effect on regulatory compliance. In response, cloud vendors are doubling down on security capabilities and compliance certifications. There is now a groundswell of demand for Secure Access Service Edge (SASE), a cloud security architecture designed to safeguard and monitor connectivity among myriad cloud applications and services, as well as datacenter IT infrastructure and end user devices. SASE gives users single sign-on capability across multiple cloud applications while ensuring compliance.

Cloud-based disaster recovery (DR): The ongoing concerns around security and compliance have also shone the spotlight on the importance of cloud-based disaster recovery. DR uses cloud computing to back up data and keep necessary business processes running in the event of a disaster. Organizations can utilize cloud-based DR for load balancing and to replicate cloud services across multiple cloud environments and providers. The result: enterprise transactions continue uninterrupted if an organization loses access to its physical infrastructure during an outage.
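
To make the multi-region replication idea concrete, below is a minimal sketch using AWS's boto3 Python SDK; the bucket names, regions and file paths are hypothetical placeholders, and a production DR pipeline would add retries, integrity checks and scheduling.

```python
# Minimal DR sketch: copy the same backup to buckets in two regions so a
# regional outage doesn't take the backup down with it.
# Bucket names, regions and paths are hypothetical placeholders.
import boto3

REPLICAS = [
    ("dr-backups-primary", "us-east-1"),
    ("dr-backups-secondary", "eu-west-1"),
]

def replicate_backup(local_path: str, key: str) -> None:
    """Upload one backup file to every replica bucket."""
    for bucket, region in REPLICAS:
        s3 = boto3.client("s3", region_name=region)
        s3.upload_file(local_path, bucket, key)  # args: Filename, Bucket, Key
        print(f"Replicated {key} to {bucket} ({region})")

replicate_backup("/var/backups/orders-2022-04-30.dump", "orders/2022-04-30.dump")
```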

Cloud-based Artificial Intelligence (AI) and Machine Learning (ML): Another hot cloud trend is the use of Artificial Intelligence (AI) and Machine Learning (ML). AI and ML allow organizations to cut through the data deluge, processing and analyzing the data to make informed business decisions and respond quickly to current and future market trends.

Top cloud vendors diversify, differentiate their offerings

There are dozens of cloud providers, with more entering this lucrative market arena all the time. However, the top four vendors – Amazon AWS, Microsoft Azure, Google Cloud and IBM Cloud – currently account for over 70% of the installed base.

Amazon AWS: Amazon AWS has been the undisputed cloud market leader for the past decade, and it remains the number one vendor in 2022. Simply put, Amazon is everywhere and it has amazing brand recognition. Amazon AWS offers a wide array of services that appeal to companies of all sizes. The AWS cloud-based platform enables companies to build customized business solutions using integrated Web services. AWS also offers a broad portfolio of Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) offerings, including Elastic Compute Cloud (EC2), Elastic Beanstalk, Simple Storage Service (S3) and Relational Database Service (RDS). AWS also enables organizations to customize their infrastructure requirements and provides them with a wide variety of administrative controls via its secure Web-based client. Other key features include: data backup and long-term storage; a guaranteed Service Level Agreement (SLA) of "four nines" – 99.99% – uptime; AI and ML capabilities; automatic capacity scaling; support for virtual private clouds and free migration tools.
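
As a concrete (and hedged) illustration of the programmatic control AWS exposes, here is a minimal boto3 sketch that launches a single EC2 instance; the AMI ID, region and tags are hypothetical placeholders.

```python
# Minimal sketch: launch and tag one EC2 instance via boto3, AWS's Python SDK.
# The AMI ID below is a placeholder; look up a real image ID for your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "web-tier-dev"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```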

As with all of the cloud vendors, the devil is in the details when it comes to pricing and cost. On the surface, the pricing model appears straightforward: AWS offers three pricing options – "Pay as you Go," "Save when you reserve" and "Pay less by using more." AWS also offers a free 12-month plan. Once the trial period has expired, the customer must either choose a paid plan or cancel its AWS subscription. And while Amazon does provide a price calculator to estimate potential cloud costs, the many variables can make those costs difficult to discern.

Microsoft Azure: Microsoft Azure ranks close behind Amazon AWS, and the platform has been the catalyst for the Redmond, Washington software giant's resurgence over the last 12 years. As Microsoft transitioned away from its core Windows-based business model, it used a tried and true success strategy: the integration and interoperability of its various software offerings. Microsoft also moved its popular and well-entrenched legacy on-premises software application suites like Microsoft Office, SharePoint, SQL Server and others to the cloud. This gave customers a sense of confidence and familiarity when it came to adoption. Microsoft also boasts one of the tech industry's largest partner ecosystems, and it regularly refreshes and updates its cloud portfolio. In February, Microsoft unveiled three industry-specific cloud offerings: Microsoft Cloud for Financial Services, Microsoft Cloud for Manufacturing and Microsoft Cloud for Nonprofit. All of these services leverage the company's security and AI functions. For example, a new feature in Microsoft Cloud for Financial Services, called Loan Manager, will enable lenders to close loans faster by streamlining workflows and increasing transparency through automation and collaboration. Microsoft Azure offers all the basic and advanced cloud features and functions including: data backup and storage; business continuity and DR solutions; capacity planning; business analytics; AI and ML; single sign-on (SSO) and multifactor authentication as well as serverless computing. Ease of configuration and management are among its biggest advantages, and Microsoft does an excellent job of regularly updating the platform, though documentation and patches may lag a bit. Azure also offers a 99.95% SLA uptime guarantee, which is a bit less than "four nines." Again, the biggest business challenge for existing and prospective Azure customers is figuring out the licensing and pricing model to get the best deal.
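
To make the SSO plumbing concrete, here is a minimal sketch that acquires an Azure Active Directory token with Microsoft's MSAL Python library, a building block behind Azure's single sign-on flows; the tenant, client ID and secret are hypothetical placeholders.

```python
# Minimal sketch: acquire an Azure AD access token with MSAL.
# Tenant, client ID and client secret are hypothetical placeholders.
import msal

app = msal.ConfidentialClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",  # placeholder
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
    client_credential="app-secret-from-key-vault",     # placeholder
)

# One token grants access to any API this app is authorized for.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" in result:
    print("Token acquired; expires in", result["expires_in"], "seconds")
else:
    print("Failed:", result.get("error_description"))
```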

Google Cloud Platform (GCP): Like Amazon, Google is a ubiquitous entity with strong brand name recognition. Google touts its ability to enable customers to scale their business as needed using flexible, open technology. GCP is a public cloud computing platform consisting of over 150 products and developer tools, including a variety of IaaS and PaaS services for compute, storage, networking, application development and Big Data analytics. The GCP services all run on the same cloud infrastructure that Google uses internally for its end-user products, such as Google Search, Photos, Gmail and YouTube. The GCP services can be accessed by software developers, cloud administrators and IT professionals over the internet or through a dedicated network connection. Notably, Google developed Kubernetes, an open source container orchestration system that automates software deployment, scaling and management. GCP offers a wide array of cloud services including: storage and backup, application development, API management, virtual private clouds, monitoring and management services, migration tools, AI and ML. To woo customers, Google offers steep discounts and flexible contracts.
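
As a brief, hedged example of working with GCP services from code, the sketch below uploads a file to Google Cloud Storage with the google-cloud-storage client library; the bucket and object names are hypothetical, and credentials are assumed to come from the environment.

```python
# Minimal sketch: upload an object to Google Cloud Storage.
# Bucket and file names are placeholders; credentials come from the
# environment (e.g. GOOGLE_APPLICATION_CREDENTIALS pointing at a key file).
from google.cloud import storage

client = storage.Client()                      # uses default credentials
bucket = client.bucket("analytics-landing")    # placeholder bucket name
blob = bucket.blob("raw/clicks-2022-04-30.csv")
blob.upload_from_filename("/data/clicks-2022-04-30.csv")
print(f"Uploaded to gs://{bucket.name}/{blob.name}")
```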

IBM: It's no secret that IBM Cloud lagged behind market leaders AWS and Microsoft Azure, but Big Blue has shifted into overdrive to close the gap. Most notably, IBM's 2019 acquisition of Red Hat for $34 billion gave IBM much needed momentum, solidifying its hybrid cloud foundation and expanding its global cloud reach to 175 countries with over 3,500 hybrid cloud customers. And it shows. On April 19, IBM told Wall Street it expects to hit the top end of its revenue growth forecast for 2022. IBM's Cloud & Data Platforms unit is the growth driver: cloud revenue grew 14% to $5 billion during the quarter ended March 31, while software and consulting sales, which represent over 70% of IBM's business, were up 12% and 13%, respectively.

IBM Cloud incorporates a host of cloud computing services that run on IaaS or PaaS, and the Red Hat OpenShift platform further fortifies IBM's hybrid cloud initiatives. OpenShift is an enterprise-ready Kubernetes container platform built for an open hybrid cloud strategy; it provides a consistent application platform to manage hybrid cloud, multicloud and edge deployments. According to IBM, 47 of the Fortune 50 companies use IBM as their private cloud provider. IBM has also upped its cloud game with several key technologies. They include advanced quantum-safe cryptography, which safeguards applications running on the IBM z16 mainframe, a platform popular with high end IBM enterprise customers. Quantum-safe cryptography is as close to unbreakable encryption as a system can get: it uses cryptographic algorithms designed to withstand attacks from both conventional and future quantum computers, making protected data near-impossible to hack. Another advanced feature is AI on-chip inferencing, available on the newly announced IBM z16 mainframe, which can deliver up to 300 billion deep learning inference operations per day with 1ms response time. This will enable even non-data scientist customers to cut through the data deluge and predict and automate for "increased decision velocity." AI on-chip inferencing can help customers prevent fraud before it happens by scoring up to 100% of transactions in real time without impacting Service Level Agreements (SLAs). It can also assist companies with compliance, automating the process so firms can cut audit preparation time from one month to one week to maintain compliance and avoid fines and penalties.

The IBM Cloud also incorporates Keep Your Own Key (KYOK), which uses Hyper Protect services in the IBM public cloud. Another key security differentiator is IBM's Confidential Computing, which protects sensitive data by performing computation in a hardware-based trusted execution environment (TEE). IBM Cloud goes beyond confidential computing by protecting data across the entire compute lifecycle. This provides customers with a higher level of privacy assurance, giving them complete authority over data at rest, data in transit and data in use. IBM further distinguishes IBM Cloud from competitors via its extensive work in supporting and securing regulated workloads, particularly for financial services companies. The company's Power Systems enterprise servers are supported in the IBM Cloud as well. IBM Cloud also offers full server customization: everything included in the server is handpicked by the customer, so customers don't have to pay for features they may never use. IBM is targeting its cloud offering at customers that want a hybrid, highly secure, open, multicloud and manageable environment.
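
IBM's on-chip inferencing is proprietary silicon, but the underlying pattern – scoring every transaction against a trained model in real time – can be sketched generically. The snippet below uses scikit-learn's IsolationForest on synthetic data purely as an illustration; it is not IBM's z16 API, and the feature choices are invented for the example.

```python
# Generic illustration of real-time transaction scoring for fraud detection.
# NOT IBM's z16 on-chip inferencing API; same idea, commodity tooling.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic training data: [amount_usd, seconds_since_last_txn]
normal_txns = rng.normal(loc=[50.0, 3600.0], scale=[20.0, 900.0], size=(10_000, 2))
model = IsolationForest(contamination=0.01, random_state=42).fit(normal_txns)

# Score incoming transactions as they arrive: -1 flags an anomaly.
incoming = np.array([[45.0, 3500.0],    # ordinary purchase
                     [9000.0, 2.0]])    # huge amount, seconds apart: suspicious
for txn, label in zip(incoming, model.predict(incoming)):
    verdict = "FLAG for review" if label == -1 else "ok"
    print(f"amount=${txn[0]:.2f}, gap={txn[1]:.0f}s -> {verdict}")
```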

Conclusions

Cloud computing adoption – most especially the hybrid cloud model – will continue to accelerate throughout 2022 and beyond. At the same time, vendors will continue to promote AI, machine learning and analytics as advanced mechanisms to help enterprises derive greater, more immediate value and actionable insights to drive revenue and profitability.

Security and compliance will also be crucial elements of every cloud deployment. Organizations now demand a minimum of four nines (99.99%) of uptime – and preferably five or six nines of availability (99.999% and 99.9999%) – to ensure uninterrupted business continuity. Vendors, particularly IBM with its new quantum-safe cryptography capabilities for its infrastructure and IBM Z mainframe, will continue to fortify cloud security and deploy AI.


High Tech R&D in the COVID-19 Era is Crucial

https://www.technewsworld.com/story/86977.html

Maintaining and increasing research and development (R&D) spending in the COVID-19 era is critical for high technology vendors to deliver new solutions and services, continue to innovate and position their businesses to rebound from the negative effects of the global pandemic.

The COVID-19 global pandemic has been disastrous for businesses around the globe. The novel coronavirus has disrupted and continues to upend every aspect of corporate and personal daily life. Analysts and financial advisors/investors concur that, wherever possible, vendors should continue to aggressively invest in R&D. That is: spend money to make money. …


ITIC Poll: Human Error and Security are Top Issues Negatively Impacting Reliability

Multiple issues contribute to the high reliability ratings among the various server hardware platforms and server operating system distributions. ITIC's 2018 Global Server Hardware, Server OS Reliability Mid-Year Update reveals that three issues in particular stand out as positively or negatively impacting reliability: human error, security and increased workloads.

ITIC's 2018 Global Server Hardware, Server OS Reliability Mid-Year Update polled over 800 customers worldwide from April through mid-July 2018. In order to obtain the most objective and unbiased results, ITIC accepted no vendor sponsorship for the Web-based survey.

Human Error and Security Are Biggest Reliability Threats

ITIC's latest 2018 Reliability Mid-Year Update poll also chronicled the strain that external issues place on organizations and their IT departments to ensure that servers and operating systems deliver a high degree of reliability and availability. As Exhibit 1 illustrates, human error and security (both internal and external hacks) continue to rank as the chief culprits causing unplanned downtime among servers, operating systems and applications for the fourth straight year. After that, there is a drop-off of 22 to 30 percentage points to the remaining issues ranked in the top five downtime causes. Human error and security have had the dubious distinction of being the top two factors precipitating unplanned downtime in the past five ITIC reliability polls.

Analysis

Reliability is a two-way street in which server hardware, OS and application vendors as well as corporate users both bear responsibility for the reliability of their systems and networks.

On the vendor side, there are obvious reasons why mission critical servers from hardware makers like HPE, IBM and Lenovo consistently garner top reliability ratings. As ITIC noted in Part 1 of its reliability survey findings, the reliability gap between high end systems and inexpensive commodity servers with basic features continues to grow. The reasons include:

  • Research and Development (R&D). Vendors like Cisco, HPE, Huawei, IBM and Lenovo have made an ongoing commitment to R&D and continually refresh/update their solutions.
  • RAS 2.0. The higher end servers incorporate the latest Reliability, Availability and Serviceability (RAS) 2.0 features/functions and are fine-tuned for manageability and security.
  • Price is not the top consideration. Businesses that purchase higher end mission critical and x86 systems like Fujitsu's Primergy, HPE's Integrity, Huawei's KunLun, IBM Z and Power Systems and Lenovo System x want a best in class product offering, first and foremost. These corporations in verticals like banking/finance, government, healthcare, manufacturing, retail and utilities are more motivated by the vendor's historical ability to act as a true, responsive "partner" delivering highly robust, leading edge hardware. They also want top-notch aftermarket technical service and support, quick response to problems and fast, efficient access to patches and fixes.
  • More experienced IT managers. In general, IT managers, application developers, systems engineers and security professionals at corporations that purchase higher end servers from IBM, HPE, Lenovo and Huawei tend to have more experience. The survey found that organizations that buy mission critical servers have IT and technical staff with approximately 12 to 13 years' experience. By contrast, the average experience among IT managers and systems engineers at companies that purchase less expensive commodity-based servers is about six years.

Highly experienced IT managers are more likely to spot problems before they become major issues that lead to downtime. In the event of an outage, they are also more likely to perform faster remediation, accelerating the time it takes to identify the problem and get servers and applications up and running faster than less experienced peers.

In an era of increasingly connected servers, systems, applications, networks and people, there are myriad factors that can potentially undercut reliability. They include:

  • Human Error and Security. To reiterate, these two factors constitute the top threats to reliability, and ITIC does not anticipate this changing in the foreseeable future. Some 59% of respondents cited human error as their number one issue, followed by 51% that said security problems caused downtime. And nearly two-thirds – 62% – of businesses indicated that their security and IT administrators grapple with a near constant deluge of ever more pervasive and pernicious security threats. If the availability, reliability and access to servers, operating systems and mission critical line of business (LOB) applications is compromised or denied, end user productivity and business operations suffer immediate consequences.
  • Heavier, more data intensive workloads. The latest ITIC survey data finds that workloads have increased by 14% to 39% over the past 18 months.
  • A 60% majority of respondents say increased workloads negatively impact reliability, up 15 percentage points since 2017. Of that 60%, approximately 80% of firms experiencing reliability declines have commodity servers: e.g., white box systems, or older Dell, HPE ProLiant and Oracle hardware more than 3½ years old that hasn't been retrofitted/upgraded.
  • Provisioning complex new applications that must integrate and interoperate with legacy systems and applications. Some 40% of survey respondents rate application deployment and provisioning as among their biggest challenges and one that can negatively impact reliability.
  • IT Departments Spending More Time Applying Patches. Some 54% of those polled indicated they are spending anywhere from one hour to over four hours applying patches – especially security patches. Users said the security patches are large, time consuming and often complex, necessitating that they test and apply them manually. The percentage of firms automatically applying patches commensurately decreased from 30% in 2016 to just 9% in the latest 2018 poll. Overall, the latest ITIC survey shows that as of July 2018 companies are applying 27% more patches than at any time since 2015.
  • Deploying new technologies like Artificial Intelligence (AI) and Big Data analytics, which require special expertise from IT managers and application developers as well as a high degree of compatibility and interoperability.
  • A rise in Internet of Things (IoT) and edge computing deployments which in turn, increase the number of connections that organizations and their IT departments must oversee and manage.
  • Seven in 10 (71%) of survey respondents said aged hardware (3½+ years old) had a negative impact on server uptime and reliability, compared with just 16% that said their older servers had not experienced any declines in reliability or availability. This is an increase of five percentage points from the 66% who responded affirmatively to that question in ITIC's 2017 Reliability Survey, and a 27 percentage point increase from the 44% who said outmoded hardware negatively impacted uptime in ITIC's 2014 Reliability poll.

Corporations' Minimum Reliability Requirements Rise

At the same time, corporations now require higher levels of reliability than they did even two or three years ago. The reliability and continuous operation of the core infrastructure and its component parts – server hardware, server operating system software, applications and other devices (e.g. firewalls, unified communications devices and uninterruptible power supplies) – are more crucial than ever to the organization's bottom line.

It is clear that corporations – from the smallest companies with fewer than 25 people to the largest multinational concerns with over one hundred thousand employees – are more risk averse and concerned about the potential for lawsuits and damage to their reputation in the wake of an outage. ITIC's survey data indicates that an 84% majority of organizations now require a minimum of "four nines" – 99.99% – reliability and uptime.

This is the equivalent of 52 minutes of unplanned downtime per year for mission critical systems and applications, or just 4.33 minutes of unplanned outage per month for servers, applications and networks.
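
The arithmetic behind those figures is easy to reproduce. The short Python sketch below converts an availability percentage into downtime budgets using a 365-day year, which yields 52.56 minutes per year for four nines (ITIC rounds to 52).

```python
# Convert an availability level ("nines") into unplanned-downtime budgets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget(availability: float) -> dict:
    """Allowable downtime in minutes for a given availability, e.g. 0.9999."""
    yearly = MINUTES_PER_YEAR * (1 - availability)
    return {"yearly": yearly, "monthly": yearly / 12, "weekly": yearly / 52}

for label, a in [("four nines", 0.9999), ("five nines", 0.99999), ("six nines", 0.999999)]:
    b = downtime_budget(a)
    print(f"{label}: {b['yearly']:.2f} min/yr, "
          f"{b['monthly']:.2f} min/mo, {b['weekly']:.2f} min/wk")
```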

Conclusions

The vendors are one-half of the equation. Corporate users also bear responsibility for the reliability of their servers and applications based on configuration, utilization, provisioning, management and security.

To minimize downtime and increase system and network availability, it is imperative that corporations work with vendor partners to ensure that reliability and uptime are inherent features of all their servers, network connectivity devices, applications and mobile devices. This requires careful tactical and strategic planning.

Human error and security are, and will continue to be, the greatest threats to the underlying reliability and stability of server hardware, operating systems and applications. A key element of every firm's reliability strategy is to obtain the necessary training and certification for IT managers, engineers and security professionals. Companies should also have their security professionals take security awareness training. Engaging the services of third party vendors to conduct security vulnerability testing to identify and eliminate potential vulnerabilities is also highly recommended. Corporations must also deploy the appropriate auditing, BI and network monitoring tools. Every 21st century network environment needs continuous, comprehensive end-to-end monitoring of its complex, distributed applications in physical, virtual and cloud environments.
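
As a minimal sketch of what continuous end-to-end monitoring means in practice, the loop below probes a set of HTTP health endpoints and reports status; the endpoint URLs and 30-second interval are hypothetical placeholders, and real deployments would use dedicated monitoring platforms with alerting and dashboards.

```python
# Minimal availability probe: poll health endpoints and report UP/DOWN.
# Endpoint URLs and the polling interval are hypothetical placeholders.
import time
import urllib.request

ENDPOINTS = ["https://app.example.com/health", "https://db.example.com/health"]

def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

while True:
    for url in ENDPOINTS:
        status = "UP" if probe(url) else "DOWN - alert on-call staff"
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {url}: {status}")
    time.sleep(30)
```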

Ask yourself: “How much reliability does the infrastructure require and how much risk can the company safely tolerate?”


IBM, Lenovo Servers Deliver Top Reliability, Cisco UCS, HPE Integrity Gain

IBM z Systems Enterprise; IBM Power Systems Servers Most Reliable for Ninth Straight Year; Lenovo x86 Servers Deliver Highest Uptime/Availability among all Intel x86-based Systems

For the ninth year in a row, corporate enterprise users said IBM's z Systems Enterprise mainframe class server achieved near flawless reliability, recording less than 10 seconds of unplanned per server downtime each month. Among mainstream servers, IBM Power Systems devices and the Lenovo x86 platform delivered the highest levels of reliability/uptime among 14 server hardware platforms and 11 different server hardware virtualization platforms.

Those are the results of the ITIC 2017 Global Server Hardware and Server OS Reliability survey which polled 750 organizations worldwide during April/May 2017.

Among the top survey findings:

  • IBM z Systems Enterprise mainframe class systems had the lowest incidence – 0% – of more than four hours of per server/per annum downtime of any hardware platform. Specifically, IBM z Systems mainframe class servers exhibit true mainframe fault tolerance, experiencing just 0.96 minutes of unplanned per server annual downtime. That equates to 8 seconds per month or, "blink and you miss it," 2 seconds of unplanned weekly downtime. This is an improvement over the 1.12 minutes of per server/per annum downtime the z Systems servers recorded in ITIC's 2016 – 2017 Reliability poll nine months ago.
  • Among mainstream hardware platforms, IBM Power Systems and Lenovo System x running Linux have the least unplanned downtime – 2.5 and 2.8 minutes per server/per year, respectively – of any mainstream Linux server platform.
  • 88% of IBM Power Systems and 87% of Lenovo System x users running RHEL, SuSE or Ubuntu Linux experience fewer than one unplanned outage per server, per year.
  • Only two percent of IBM and Lenovo servers recorded more than four hours of unplanned per server/per annum downtime, followed by six percent of HPE servers, eight percent of Dell servers and 10% of Oracle servers.
  • IBM and Lenovo hardware and the Linux operating system distributions were either first or second in every reliability category, including virtualization and security.
  • Lenovo x86 servers achieved the highest reliability ratings among all competing x86 platforms.
  • Lenovo takes top marks for technical service and support: Lenovo's tech support rated best, followed by Cisco and IBM.
  • Some 66% of survey respondents said aged hardware (3½+ years old) had a negative impact on server uptime and reliability, vs. 21% that said it had not impacted reliability/uptime. This is a 22 percentage point increase from the 44% who said outmoded hardware negatively impacted uptime in 2014.
  • Reliability has continued to decline for the fifth year in a row on HP ProLiant and Oracle SPARC & x86 hardware and the Solaris OS. Reliability on the Oracle platforms declined slightly, mainly due to aging hardware. Many Oracle hardware customers are eschewing upgrades, opting instead to migrate to rival platforms.
  • Some 16% of Oracle customers rated service & support as Poor or Unsatisfactory. Dissatisfaction with Oracle licensing and pricing policies has remained consistently high over the last three years.
  • Only 1% of Cisco, 1% of Dell, 1% of IBM and Lenovo, 3% of HP, 3% of Fujitsu and 4% of Toshiba users gave those vendors “Poor” or “Unsatisfactory” customer support ratings.


IBM z13s Delivers Power, Performance, Fault Tolerant Reliability and Security for Hybrid Clouds

Security. Reliability. Performance. Analytics. Services.

These are the most crucial considerations for corporate enterprises in choosing a hardware platform. The underlying server hardware functions as the foundational element for the business' entire infrastructure and interconnected environment. Today's 21st century Digital Age networks are characterized by increasingly demand-intensive workloads and the need to use Big Data analytics to analyze and interpret massive volumes and varieties of data to make proactive decisions and keep the business competitive. Security is a top priority: it's absolutely essential to safeguard sensitive data and Intellectual Property (IP) from sophisticated, organized external hackers and to defend against threats posed by internal employees.

The latest IBM z13s enterprise server delivers embedded security, state-of-the-art analytics and unparalleled reliability, performance and throughput. It is fine-tuned for hybrid cloud environments, and it's especially useful as a secure foundational element in Internet of Things (IoT) deployments. The newly announced z13s is highly robust: it supports the most compute-intensive workloads in hybrid cloud and on-premises environments. The newest member of the z Systems family, the z13s incorporates advanced cryptography features embedded in the hardware that allow it to encrypt and decrypt data twice as fast as previous generations, with no reduction in transactional throughput, owing to an updated cryptographic coprocessor for every chip core and tamper-resistant, hardware-accelerated cryptographic coprocessor cards. …


IBM, Lenovo Top ITIC 2016 Reliability Poll; Cisco Comes on Strong

IBM Power Systems Servers Most Reliable for Seventh Straight Year; Lenovo x86 Servers Deliver Highest Uptime/Availability among all Intel x86-based Systems; Cisco UCS Stays Strong; Dell Reliability Ratchets Up; Intel Xeon Processor E7 v3 chips incorporate advanced analytics; significantly boost reliability of x86-based servers

In 2016 and beyond, infrastructure reliability is more essential than ever.

The overall health of network operations, applications, management and security functions all depends on the core foundational elements – server hardware, server operating systems and virtualization – to deliver high availability, robust management and solid security. The reliability of the server, server OS and virtualization platforms is the cornerstone of the entire network infrastructure; the individual and collective reliability of these platforms has a direct, immediate and long-lasting impact on daily operations and business results. For the seventh year in a row, corporate enterprise users said IBM server hardware delivered the highest levels of reliability/uptime among 14 server hardware and 11 different server hardware virtualization platforms. A 61% majority of IBM Power Systems servers and Lenovo System x servers achieved "five nines" or 99.999% availability – the equivalent of 5.25 minutes of unplanned per server/per annum downtime – compared to 46% of Hewlett-Packard servers and 40% of Oracle server hardware. …


Why the Gaming Industry is a “Target Rich Environment” for DDoS Security Hacks

ITIC's coverage areas continue to expand and evolve based on your feedback, and our Website content is growing as well. We will now feature content from industry expert "Guest Bloggers." Debbie Fletcher examines DDoS hacks on popular games.

***

By Debbie Fletcher

Ask any gamer: timing is everything. Even the smallest disruption in gameplay can be a virtual disaster in a heated competition.

Hackers understand the fragility of these networks, and they are willing to manipulate them for their own gain. Read on to find out how DDoS is getting in the game, and how it is disrupting some of the most active and profitable networks in the world. …


IBM Watson Takes Cognitive Computing to the Head of the Class

Pardon the pun, but there’s nothing elementary about IBM’s newly formed, New York City-based Watson Business Unit (BU).

IBM is committing $1 billion and 2,000 employees, as well as its considerable research and development (R&D) talents and marketing muscle to Watson, thus putting the full weight of its global technology and services brand behind the newly formed BU and initiative.

IBM CEO Virginia Rometty said that Michael Rhodin, most recently SVP of IBM’s Software Solutions Group, will take charge of the Watson Group. According to Rometty, the company established Watson as a separate BU based on the strong demand for cognitive computing. The IBM Watson Group will develop cloud-based technologies that can power services for businesses, industries and consumers.

Rometty also said the new IBM Watson Group notably integrates design, services, core functions, technologies and a fully formed ecosystem, which includes a design lab as well as hundreds of external partner applicants, foundations and advisors. All of these elements are crucial if Watson is to succeed. …


IBM Platform Resource Scheduler Automates, Accelerates Cloud Deployments

One of the most daunting and off-putting challenges for any enterprise IT department is how to efficiently plan and effectively manage cloud deployments or upgrades while still maintaining the reliability and availability of the existing infrastructure during the rollout.

IBM solves this issue with its newly released Platform Resource Scheduler, which is part of the company's Platform Computing portfolio and an offering within the IBM Software Defined Environment (SDE) vision for next generation cloud automation. The Platform Resource Scheduler is a prescriptive set of services designed to ensure that enterprise IT departments get a trouble-free transition to a private, public or hybrid cloud environment by automating the most common placement and policy procedures for their virtual machines (VMs). It also helps guarantee quality of service while greatly reducing the most typical human errors that occur when IT administrators manually perform tasks like load balancing and memory balancing. The Platform Resource Scheduler is sold with IBM's SmartCloud Orchestrator and PowerVC and is available as an add-on with IBM SmartCloud OpenStack Entry products. It also features full compatibility with Nova APIs and fits into all IBM OpenStack environments. It is built on open APIs, tools and technologies to maximize client value, skills availability and easy reuse across hybrid cloud environments. It supports heterogeneous (both IBM and non-IBM) infrastructures and runs on Linux, UNIX and Windows as well as IBM's z/OS operating system. …
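
To illustrate the kind of placement policy such a scheduler automates, here is a deliberately simplified sketch – not IBM's actual Platform Resource Scheduler API – that places each VM on the host with the most free memory, a basic load balancing heuristic.

```python
# Illustrative only -- NOT IBM's Platform Resource Scheduler API.
# Automates a simple VM placement policy: choose the host with the most
# free memory that can fit the VM (a basic load balancing heuristic).
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_mem_gb: int

def place_vm(hosts: list[Host], vm_mem_gb: int) -> str | None:
    """Return the chosen host's name, or None if no host has capacity."""
    candidates = [h for h in hosts if h.free_mem_gb >= vm_mem_gb]
    if not candidates:
        return None  # a real scheduler might queue the request or scale out
    best = max(candidates, key=lambda h: h.free_mem_gb)
    best.free_mem_gb -= vm_mem_gb
    return best.name

hosts = [Host("host-a", 64), Host("host-b", 32)]
for vm_size in (16, 16, 48):
    print(f"{vm_size} GB VM -> {place_vm(hosts, vm_size)}")
```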


Two-Thirds of Corporations Now Require 99.99% Database Uptime, Reliability

A 64% majority of organizations now require that their databases deliver a minimum of "four nines" – 99.99% – uptime or better for their most mission critical applications. That is the equivalent of 52 minutes of unplanned downtime per database/per annum, or just over one minute of downtime per week as a result of an unplanned outage.

Those are the results of ITIC's 2013 – 2014 Database Reliability and Deployment Trends Survey, an independent Web-based survey which polled 600 organizations worldwide during May/June 2013. The nearly two-thirds of respondents who indicated they need 99.99% or greater availability represents a 10 percentage point increase over the 54% who said they required a minimum of four nines reliability in ITIC's 2011-2012 Database Reliability survey.

This trend will almost certainly continue unabated, owing in large part to an increase in mainstream deployments of databases running Big Data analytics, Business Intelligence (BI), Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) applications. These applications are data intensive and closely align with organizations' main lines of business and recurring revenue streams. Hence, any downtime on a physical, virtual or cloud-based database will likely cause immediate disruptions that quickly impact the corporation's bottom line. …

