One of the most daunting challenges for any enterprise IT department is how to efficiently plan and effectively manage cloud deployments or upgrades while maintaining the reliability and availability of the existing infrastructure during the rollout.
IBM addresses this issue with its newly released Platform Resource Scheduler, which is part of the company's Platform Computing portfolio and an offering within the IBM Software Defined Environment (SDE) vision for next-generation cloud automation. The Platform Resource Scheduler is a prescriptive set of services designed to ensure that enterprise IT departments get a trouble-free transition to a private, public or hybrid cloud environment by automating the most common placement and policy procedures for their virtual machines (VMs). It also helps guarantee quality of service while greatly reducing the most typical human errors that occur when IT administrators manually perform tasks like load balancing and memory balancing.

The Platform Resource Scheduler is sold with IBM's SmartCloud Orchestrator and PowerVC and is available as an add-on with IBM SmartCloud OpenStack Entry products. It features full compatibility with Nova APIs and fits into all IBM OpenStack environments. It is built on open APIs, tools and technologies to maximize client value, skills availability and easy reuse across hybrid cloud environments. It supports heterogeneous (both IBM and non-IBM) infrastructures and runs on Linux, UNIX and Windows as well as IBM's z/OS operating system.
IBM Platform Resource Scheduler Overview
IBM’s Platform Resource Scheduler provides businesses with an automated policy template for each of the most common cloud policies. These include:
- Packing: Packs the workload onto the fewest physical servers possible to maximize usable capacity, lower energy consumption and reduce fragmentation.
- Striping: Spreads the workload across as many physical servers as possible, thus reducing the impact of host failures and resulting in higher application performance.
- Load Balance: Allocates the physical servers with the lowest load to new workloads, resulting in higher application performance.
- Memory Balance: Places VMs on the hosts with the most available memory to improve application performance.
- Affinity: Places specified VMs together on the same host or a small set of hosts, which is useful for workloads that benefit from co-location, such as tightly coupled application tiers.
- Anti-Affinity: Places specified VMs on different hosts so that a single host failure does not take down every instance of a workload, improving fault tolerance and availability.
- Resource Over-commit: Intentionally over-commits resources to maximize utilization, with selectable over-commit ratios for memory, disk and CPU to offset any degradation in system performance.
- User-Defined: An open API for defining scriptable business policies and resource-contention policies.
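To make the contrast between the first two policies concrete, the toy scheduler below sketches "packing" (fill the busiest host that still fits the VM) versus "striping" (spread VMs to the least-loaded host). This is a simplified illustration of the general technique, not IBM's actual scheduler logic; the host data structure and function names are hypothetical.

```python
# Toy sketch of "packing" vs. "striping" VM placement policies.
# Each host is a dict with a name and its remaining free capacity (e.g. GB of RAM).

def place_packing(hosts, vm_size):
    """Pack: pick the candidate host with the LEAST free capacity,
    filling busy hosts first to minimize the number of hosts in use."""
    candidates = [h for h in hosts if h["free"] >= vm_size]
    if not candidates:
        return None  # no host can fit this VM
    best = min(candidates, key=lambda h: h["free"])
    best["free"] -= vm_size
    return best["name"]

def place_striping(hosts, vm_size):
    """Stripe: pick the candidate host with the MOST free capacity,
    spreading VMs across as many hosts as possible."""
    candidates = [h for h in hosts if h["free"] >= vm_size]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: h["free"])
    best["free"] -= vm_size
    return best["name"]

hosts = [{"name": "hostA", "free": 16}, {"name": "hostB", "free": 8}]
print(place_packing([dict(h) for h in hosts], 4))   # hostB: tops up the busier host
print(place_striping([dict(h) for h in hosts], 4))  # hostA: spreads to the emptier host
```

The same skeleton extends naturally to the other policies: affinity and anti-affinity simply add a constraint on which hosts are candidates, and over-commit multiplies each host's nominal capacity by a configurable ratio before filtering.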
This type of automated policy based management functionality is crucial to businesses because it helps them make more intelligent and informed decisions tailored to their specific workload and application environments. As a result, IBM’s Platform Resource Scheduler can lead to more efficient use of network and system resources and ongoing resource optimization; it can also serve to reduce infrastructure costs and lower utility and energy consumption.
Data Analysis: Cloud Deployment Trends
All classes of businesses, from small and midsized firms to the largest enterprises, are transitioning to cloud environments. At present, private clouds are the most popular among organizations that want to keep a measure of control over data-sensitive applications. Among companies that are just beginning to experiment with cloud environments, many start by migrating lower-level applications. Increasingly, though, businesses are migrating more of their business-critical functions to private, public and hybrid clouds.
This trend will continue unabated. As more and more business critical and big data applications move to the cloud, system and network availability, reliability and security will be absolutely essential to ensure ongoing business operations. Additionally, all businesses must contend with an ever more stringent array of compliance regulations.
ITIC’s 2013 Global Server Hardware and Server OS Reliability survey, which polled more than 550 businesses in January and again in August, found that over two-thirds of respondents – 67% – consider 99.95% and higher to be the minimum acceptable level of reliability for their main line of business (LOB) servers. The accepted availability metrics of 99.9%, 99.95% and 99.99% equate to 8.76 hours, 4.38 hours and 52.56 minutes of per server, per annum downtime, respectively. Additionally, the most recent ITIC poll shows that 11% of the 550 respondent organizations require a very high “five nines” or 99.999% degree of reliability – a scant 5.26 minutes of unplanned per server, per annum downtime.
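The downtime figures above follow directly from the availability percentages; a quick sanity check of the arithmetic:

```python
# Convert an availability percentage into allowed annual downtime per server.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def annual_downtime_hours(availability_pct):
    """Hours of downtime per year implied by a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(round(annual_downtime_hours(99.9), 2))         # 8.76 hours
print(round(annual_downtime_hours(99.95), 2))        # 4.38 hours
print(round(annual_downtime_hours(99.99) * 60, 2))   # 52.56 minutes
print(round(annual_downtime_hours(99.999) * 60, 2))  # 5.26 minutes ("five nines")
```

Each additional "nine" of availability cuts the permissible annual downtime by roughly a factor of ten, which is why the jump from 99.9% to 99.999% is so demanding operationally.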
A 57% majority of respondents to ITIC’s 2013 Technology Deployment Trends and Challenges survey, which polled over 500 businesses in October 2013, cited “provisioning new applications and desktops” and the “overall complexity of the cloud upgrade” as their top migration challenges.
It is clear that organizations require high reliability, high availability and strong security in order to satisfy their business needs and to fulfill their Service Level Agreements. It is equally true that there is a high degree of complexity associated with migrating business-critical and big data applications, including databases, business intelligence (BI), customer relationship management (CRM) and enterprise resource planning (ERP) applications, to name a few.
Conclusions
IT departments, especially enterprise IT departments that are overburdened with daily management tasks that leave little time for training, need all the help they can get with respect to complicated cloud migrations. There is little margin for error when even a few minutes of network downtime can disrupt operations and productivity, and can cost thousands or even millions depending on the application and duration of the outage.
The pressure to get an upgrade right the first time is intense. To help address these issues, IBM’s Platform Resource Scheduler provides businesses with a prescriptive set of pre-defined policies for cloud deployments. It uses standard APIs and supports an open architecture, but at the same time it is highly customizable.
In addition, the IBM Platform Resource Scheduler performs another invaluable service: its high availability features can reduce the time it takes IT departments to migrate VMs to the cloud from days or hours to minutes, and dramatically cut down on manual configuration errors. It can also reduce the manpower needed to perform the cloud migration by 30% to 60%, depending on the size and scope of an organization’s cloud implementation. Other tangible benefits include:
- The ability to automatically place workloads for optimal quality of service (QoS)
- An overall reduction in infrastructure costs (e.g. power consumption and utility costs)
- Guaranteed capacity for priority workloads
IBM’s Platform Resource Scheduler also includes automated features that can enhance business agility by enabling administrators to dynamically reconfigure disparate, heterogeneous resources according to specific application requirements and real-time demands. And it increases flexibility with an open, extensible architecture that incorporates configurable, out-of-the-box policies, and user-defined policies to support multiple virtualization platforms and open architectures. It is also highly customizable according to the corporation’s needs.
In summary, IBM’s Platform Resource Scheduler enables organizations to prioritize and consolidate resources for their most crucial business-critical and big data applications while maintaining the high levels of application and network reliability and security necessary to meet SLAs and compliance goals.