Search Results for: downtime

IBM Powers Up New PowerLinux Products, Strategy

IBM this week unveiled its latest generation of industry standard Linux-only servers optimized for its Power architecture along with a new strategy targeting specific x86 applications and workloads.

IBM has been a longtime Linux proponent, supporting industry standard distributions like Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise on its Power Systems line for the last 12 years. This week’s announcement reaffirms Big Blue’s commitment to Linux and broadens its scope with offerings designed to drive more growth for the Power platform in the lucrative x86 arena. IBM will fuel this growth via its mantra, “Tuned to the task,” which emphasizes delivering higher quality and superior economics than rivals.

According to Scott Handy, vice president of IBM’s PowerLinux Strategy and Business Development, “This is an extension to our overall Power strategy to address the Linux x86 space and drive more growth for our Power Systems servers.” …


IBM STG Group Posts Positive Gains, Offers Strong Strategy & Growth Roadmap

Vendor-sponsored analyst conferences are oftentimes long on self-congratulatory hyperbole and short on substance. That wasn’t the case with IBM’s Systems and Technology Group (STG) analyst conference held last week in Rye Brook, NY.

The STG conference, led by Rod Adkins, Senior Vice President of STG, showcased the division’s solid accomplishments over the last several years and detailed the current and future product roadmap and investment strategy. Investments focused on three major areas: systems, growth markets and strategic acquisitions. Adkins could easily have added a fourth category: patents. The U.S. Patent Office granted IBM’s STG division 2,680 patents in 2010, and it could exceed that number in 2011. One only has to scan the headlines and peruse the ongoing patent purchasing frenzy and the plethora of lawsuits involving all of the major vendors to realize the pivotal role patents play as both an offensive and a defensive weapon. IBM, in its centenary year, holds more patents than any other U.S. technology vendor.

STG 2011 Milestones

Noting that STG is aligned with IBM’s overall growth strategy, Adkins detailed the division’s milestones throughout the first three quarters in 2011. They included: …


IBM, Stratus, Microsoft Score High Marks in ITIC Fall 2011 Global Reliability Survey

For the third year in a row, IBM AIX v7.1 UNIX operating system (OS) running on the company’s Power System servers scored the highest reliability ratings and recorded the least amount of overall downtime from Tier 1, Tier 2 and Tier 3 outages among 18 different server OS platforms.

Over three-quarters or 78% of survey respondents indicated they experienced less than one of the most prevalent, minor Tier 1 incidents per server, per annum on IBM’s AIX v 5.3 and AIX v 7.1 distributions. An 83% majority of IBM AIX v 7.1 and Novell SUSE Enterprise Linux Server 11 respondents, and 82% of Windows Server 2008 R2 respondents, indicated their organizations experienced less than one unplanned, severe/lengthy Tier 3 outage per server, per annum (See Exhibit 1).

Microsoft’s Windows Server 2008 R2 (which scored the biggest year-over-year reliability gains), and Novell’s SUSE Enterprise Linux Server 11 closely challenged IBM’s AIX v 7.1 server OS reliability and uptime – particularly with respect to the most severe and costly Tier 3 outages. Unplanned Tier 3 outages – whether manmade or as the result of a disaster — typically cause downtime in excess of four hours. There is widespread disruption of applications and network operations; customers and business partners are frequently impacted and Tier 3 incidents will almost always require remediation by a significant portion of the IT staff. …


ITIC 2011 Reliability Survey: Users Give IBM AIX v7, Windows Server 2008 R2 Highest Security Marks


Nine out of 10 — 90% — of the 470 respondents to ITIC’s 2010-2011 Global Server Hardware and Server OS Reliability survey rated the security of Microsoft’s Windows Server 2008 R2 and IBM’s AIX v7 as “Excellent” or “Very Good.” These were the highest security ratings among the 18 server operating system distributions surveyed (See Exhibit below). Three-quarters or 75% of survey participants gave HP UX 11i v3 “Excellent” or “Very Good” security ratings; this was the third highest ranking of the 18 major server OS distributions polled. This was followed by Ubuntu Server 10 and Debian GNU/Linux 5, which tied for fourth: 71% of those polled ranked the two most popular open source distributions’ security as “Excellent” or “Very Good.” Red Hat Enterprise Linux v 5.5 and Novell SuSE Linux Enterprise 11, the two most widely deployed Linux distributions, trailed Debian and Ubuntu but were nearly tied with each other in security rankings. Just over two-thirds — 67% — of Red Hat users rated its security as “Excellent” or “Very Good,” while 66% of survey participants judged Novell SuSE Linux Enterprise 11 security to be “Excellent” or “Very Good.”

Some 58% of Apple Mac OS X 10.6 survey respondents rated its security as “Excellent” or “Very Good,” putting it at the bottom of the pack, just behind Oracle’s Solaris 10, which was rated “Excellent” or “Very Good” by 63% of respondents and which has notched modest gains among corporate users in the past two years.

Also noteworthy was the fact that only a very small percentage of respondents gave thumbs-down “Poor” or “Unsatisfactory” security grades to their server operating system vendors. In this category, Apple had the highest percentage of respondents — 7% — who gave its Mac OS X 10.6 “Poor” or “Unsatisfactory” marks. This might appear puzzling to some, since Apple’s users have long touted the security of the platform and boasted that far fewer viruses and less malicious code are written to target Macs compared to Windows. However, now that Apple is re-emerging as a significant presence in corporate networks, Mac OS X 10.6 will no longer enjoy the “security by obscurity” that it claimed as a standalone consumer OS. Macs, iPhones, iPads and tablets are becoming mainstream staples as business tools. Hence, the number of exploits targeting the Mac, including such malware as worms, Trojans and bots, is increasing commensurately. Apple will have to respond accordingly with tighter security. …


ITIC 2011 Reliability Shows that Dell, HP, IBM & Stratus Score High Marks for Service & Support

Dell, HP, IBM and Stratus Technologies won high praise from corporate users for their prompt and efficient after market technical service and support in the latest ITIC 2010-2011 Global Server Hardware and Server OS Reliability survey.

The results came from a broad based survey that polled organizations worldwide on the reliability, security and technical service and support from among 14 of the leading server hardware platforms and 18 of the most widely deployed server operating system distributions.

As we said in an earlier discussion, each poll elicits some surprising and unexpected revelations. In this survey, users reserved their highest encomiums and most critical barbs for the server hardware vendors – both in terms of product performance and reliability and the service and support they receive from their respective vendors. …


IBM, Stratus, HP, Fujitsu Top ITIC/GFI Software Hardware Reliability Survey

For the third year in a row, IBM AIX Unix operating system (OS) running on the company’s Power System servers scored the highest reliability ratings among 19 different server OS platforms – including other Unix variants, Microsoft’s Windows Server, Linux distributions and Apple’s Mac OS X.

Over three-quarters or 78 percent of survey respondents indicated they experienced less than one of the most common, minor Tier 1 incidents per server, per annum on IBM’s AIX v 5.3 and AIX v 7.1 distributions.

Those are the results of the ITIC 2010-2011 Global Server Hardware and OS Reliability Survey. ITIC partnered with GFI Software (formerly Sunbelt Software) to conduct this independent Web-based survey, which polled C-level executives and IT managers at 468 corporations from 23 countries worldwide from November through January.

The survey data indicated that the reliability and uptime of all the major server OS and server hardware distributions has improved significantly over the past several years. …


Cloud Computing: Pros and Cons

Cloud computing, like any emerging technology, has both advantages and disadvantages. Before beginning any infrastructure upgrade or migration, organizations are well advised to first perform a thorough inventory and review of their existing legacy infrastructure and make the necessary upgrades, revisions and modifications. Next, the organization should define its business goals for the next three-to-five years to determine when, whether and what type of cloud infrastructure to adopt. It should also construct an operational and capital expenditure budget and a timeframe that includes research, planning, testing, evaluation and final rollout.
Public Clouds: Advantages and disadvantages
The biggest allure of a public cloud infrastructure over traditional premises-based network infrastructures is the ability to offload the tedious and time consuming management chores to a third party. This in turn can help businesses:
• Shave precious capital expenditure monies because they avoid the expensive investment in new equipment including hardware, software, and applications as well as the attendant configuration planning and provisioning that accompanies any new technology rollout.
• Accelerate the deployment timetable. Having an experienced third party cloud services provider do all the work speeds up rollout and most likely means less time spent on trial and error.
• Construct a flexible, scalable cloud infrastructure that is tailored to their business needs. A company that has performed its due diligence and is working with an experienced cloud provider can architect a cloud infrastructure that will scale up or down according to the organization’s business and technical needs and budget.
The potential downside of a public cloud is that the business is essentially renting common space with other customers. As such, depending on the resources of the particular cloud model, there exists the potential for performance, latency and security issues, as well as questions about acceptable response times and the service and support the business receives from the cloud provider.
Risk is another potential pitfall associated with outsourcing any of your firm’s resources and services to a third party. To mitigate risk and lower it to an acceptable level, it’s essential that organizations choose a reputable, experienced third party cloud services provider very carefully. Ask for customer references; check their financial viability. Don’t sign up with a service provider whose finances are tenuous and who might not be in business two or three years from now.
The cloud services provider must work closely and transparently with the corporation to build a cloud infrastructure that best suits the business’ budget, technology and business goals.
To ensure that the expectations of both parties are met, organizations should create a checklist of the items and issues that are of crucial importance to their business and incorporate them into Service Level Agreements (SLAs). Be as specific as possible. These should include but are not limited to:

• What types of equipment do they use?
• How old is the server hardware? Is the configuration powerful enough?
• How often is the data center equipment/infrastructure upgraded?
• How much bandwidth does the provider have?
• Does the service provider use open standards or is it a proprietary datacenter?
• How many customers will you be sharing data and resources with?
• Where is the cloud services provider’s datacenter physically located?
• What specific guarantees if any, will it provide for securing sensitive data?
• What level of guaranteed response time will it provide for service and support?
• What is the minimum acceptable latency/response time for its cloud services?
• Will it provide multiple access points to and from the cloud infrastructure?
• What specific provisions will apply to Service Level Agreements (SLAs)?
• How will financial remuneration for SLA violations be determined?
• What are the capacity ceilings for the service infrastructure?
• What provisions will there be for service failures and disruptions?
• How are upgrade and maintenance provisions defined?
• What are the costs over the term of the contract agreement?
• How much will the costs rise over the term of the contract?
• Does the cloud service provider use the Secure Sockets Layer (SSL) to transmit data?
• Does the cloud services provider encrypt data at rest to restrict access?
• How often does the cloud services provider perform audits?
• What mechanisms will it use to quickly shut down a hack and can it track a hacker?
• If your cloud services provider is located outside your country of origin, what are the privacy and security rules of that country and what impact will that have on your firm’s privacy and security issues?
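A sketch of how an IT team might track the questions above as a structured checklist before contract signing; the class and field names here are illustrative, not drawn from any standard or from the article:

```python
from dataclasses import dataclass

@dataclass
class SLAChecklistItem:
    """One due-diligence question to resolve before signing with a provider."""
    question: str
    answer: str = ""  # filled in as the provider supplies details

    @property
    def answered(self) -> bool:
        return bool(self.answer.strip())

checklist = [
    SLAChecklistItem("Where is the provider's datacenter physically located?"),
    SLAChecklistItem("What guaranteed response time applies to service and support?"),
    SLAChecklistItem("How will financial remuneration for SLA violations be determined?"),
]

# Record an answer, then list anything still open before the contract is signed.
checklist[0].answer = "Provider-owned facility; location disclosed under NDA"
open_items = [item.question for item in checklist if not item.answered]
print(f"{len(open_items)} unanswered item(s) remain")
```

Keeping the checklist in a reviewable form like this makes it easy for the liaison the article recommends to see, at each meeting with the provider, exactly which SLA questions are still unresolved.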
Finally, the corporation should appoint a liaison and that person should meet regularly with a representative from the cloud services provider to ensure that the company attains its immediate goals and that it is always aware and working on future technology and business goals. Outsourcing all or any part of your infrastructure to a public cloud does not mean forgetting and abandoning it.
Private Clouds: Advantages and Disadvantages
The biggest advantage of a private cloud infrastructure is that your organization keeps control of its corporate assets and can safeguard and preserve its privacy and security. Your organization is in command of its own destiny. That can be a double-edged sword.
Before committing to build a private cloud model, the organization must do a thorough assessment of its current infrastructure, its budget and the expertise and preparedness of its IT department. Is your firm ready to assume responsibility for such a large burden from both a technical and an ongoing operational standpoint? Only you can answer that. Remember that the private cloud should be highly reliable and highly available – at least 99.999% uptime with built-in redundancy and failover capabilities. Many organizations currently struggle to maintain 99.9% uptime and reliability, which is the equivalent of 8.76 hours of per server, per annum downtime. When your private cloud is down for any length of time, your end users (and anyone else who has access to the cloud) will be unable to access resources.
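The uptime percentages cited here map directly to hours of annual downtime; a minimal sketch of the arithmetic, using the standard 8,760-hour (non-leap) year:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

def annual_downtime_hours(availability_pct: float) -> float:
    """Expected per server, per annum downtime at a given availability level."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {annual_downtime_hours(pct):.3f} hours/year of downtime")
```

At 99.9% the formula yields the 8.76 hours cited above, while at five nines downtime shrinks to roughly five minutes a year — which is why redundancy and failover capabilities become mandatory at that level.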
Realistically, in order for an organization to successfully implement and maintain a private cloud, it needs the following:
• Robust equipment that can handle the workloads efficiently during peak usage times
• An experienced, trained IT staff that is familiar with all aspects of virtualization, virtualization management, grid, utility and chargeback computing models
• An adequate capital expenditure and operational expenditure budget
• The right set of private cloud product offerings and service agreements
• Appropriate third party virtualization and management tools to support the private cloud
• Specific SLA agreements with vendors, suppliers and business partners
• Operational level agreements (OLAs) to ensure that each person within the organization is responsible for specific routine tasks and for defined duties in the event of an outage
• A disaster recovery and backup strategy
• Strong security products and policies
• Efficient chargeback utilities, policies and procedures
Other potential private cloud pitfalls include: deciding which applications to virtualize; vendor lock-in and integration and interoperability issues. Businesses grapple with these same issues today in their existing environments. At present, however, the product choices from vendors and third party providers are more limited for virtualized private cloud offerings. Additionally, since the technology is still relatively new, it will be difficult from both a financial as well as technical standpoint to switch horses in midstream from one cloud provider to another if you encounter difficulties.
There is no doubt that virtualized public and private cloud infrastructures adoptions will grow significantly in the next 12 to 18 months. In order to capitalize on their benefits, lower your total cost of ownership (TCO), accelerate return on investment (ROI) and mitigate risk your organization should take its time and do it right.


Virtualization Deployments Soar, But Companies Prefer Terra Firma to Cloud for now

The ongoing buzz surrounding cloud computing – particularly public clouds – is far outpacing actual deployments by mainstream users. To date only 14% of companies have deployed or plan to deploy a private cloud infrastructure within the next two calendar quarters.
Instead, as businesses slowly recover from the ongoing economic downturn, their most immediate priorities are to upgrade legacy desktop and server hardware, refresh outmoded applications and expand their virtualization deployments. Those are the results of the latest ITIC 2010 Virtualization and High Availability survey, which polled C-level executives and IT managers at 400 organizations worldwide.
ITIC partnered with Stratus Technologies and Sunbelt Software to conduct the Web-based survey of multiple choice questions and essay comments. ITIC also conducted first person interviews with over two dozen end users to obtain anecdotal responses on the primary accelerators or impediments to virtualization, high availability and reliability, and cloud computing. The survey also queried customers on whether or not their current network infrastructure and mission critical applications were adequate to handle new technologies and the increasing demands of the business.
The survey showed that, for now at least, although many midsized and large enterprises are contemplating a move to the cloud – especially a private cloud infrastructure – the technology and business model are still not essential for most businesses. Some 48% of survey participants said they have no plans to migrate to a private cloud architecture within the next 12 months, while another 33% said their companies are studying the issue but have no firm plans to deploy.

The study also indicates that private cloud deployments are outpacing public cloud infrastructure deployments by a 2 to 1 margin. However, before businesses can begin to consider a private cloud deployment they must first upgrade the “building block” components of their existing environments, e.g., server and desktop hardware, WAN infrastructure, storage, security and applications. Only 11% of businesses described their server and desktop hardware as leading edge or state-of-the-art. And just 8% of respondents characterized their desktop and application environment as leading edge.

The largest proportion of the survey participants – 52% – described their desktop and server hardware as working well, while 48% said their applications were up-to-date. However, 34% acknowledged that some of their server hardware needed to be updated. A higher percentage of users – 41% – admitted that their mission critical software applications were due to be refreshed. And a small 3% minority said that a significant portion of both their hardware and mission critical applications were outmoded and adversely impacting the performance and reliability of their networks.

Based on the survey data and customer interviews, ITIC anticipates that from now until October, companies’ primary focus will be on infrastructure improvements.

Reliability and Uptime Lag

The biggest surprise in this survey, compared with the 2009 High Availability and Fault Tolerant survey that ITIC and Stratus conducted nearly one year ago, was the decline in the number of survey participants who said their organizations required 99.99% uptime and reliability. In this latest survey, the largest portion of respondents – 38%, or nearly 4 out of 10 businesses – said that 99.9% uptime, the equivalent of 8.76 hours of per server, per annum downtime, was the minimum acceptable amount for their mission critical line of business (LOB) applications. This is more than three times the 12% of respondents who said that 99.9% uptime was acceptable in the prior 2009 survey. Overall, 62%, or nearly two-thirds of survey participants, indicated their organizations are willing to live with higher levels of downtime than were considered acceptable in previous years.
Some 39% of survey respondents – almost 4 out of 10 – indicated that their organizations demand high availability, which ITIC defines as four nines of uptime or greater. Specifically, 27% said their organizations require 99.99% uptime, while another 6% need 99.999% uptime and a 3% minority require even higher levels of availability.
The customer interviews found that the ongoing economic downturn, aged/aging network infrastructures (server and desktop hardware and older applications), layoffs, hiring freezes and the new standard operating procedure (SOP) of “do more with less” have made 99.9% uptime more palatable than in previous years.
Those firms that do not keep track of the number and severity of their outages have no way of gauging the financial and data losses to the business. Even a cursory comparison indicates substantial cost disparities between 99% uptime and 99.9% uptime. The monetary costs, business impact and risks associated with downtime will vary by company, as will the duration and severity of individual outage incidents. However, a small or midsize business, for example, which estimates the hourly cost of downtime to be a very conservative $10,000 per hour, would potentially incur losses of $876,000 per year at a data center with 99% application availability (87.6 hours of downtime). By contrast, a company whose data center operations have 99.9% uptime would incur losses of $87,600, or one-tenth that of a firm with conventional 99% availability.
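The underlying arithmetic is simple enough to sketch in a few lines; the $10,000-per-hour figure is the deliberately conservative estimate used in the example above, not a benchmark, and each additional nine of availability cuts the annual exposure by a factor of ten:

```python
HOURS_PER_YEAR = 8_760  # hours in a non-leap year

def annual_downtime_cost(availability_pct: float, cost_per_hour: float = 10_000) -> float:
    """Estimated yearly downtime losses at a given availability level."""
    downtime_hours = HOURS_PER_YEAR * (1 - availability_pct / 100)
    return downtime_hours * cost_per_hour

print(f"99% uptime:   ${annual_downtime_cost(99.0):,.0f} per year")
print(f"99.9% uptime: ${annual_downtime_cost(99.9):,.0f} per year")
```

Running the same function with a company's own hourly-cost estimate is the quickest way to put a dollar figure on the gap between its current availability and the level it is targeting.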
Ironically, the need for rock-solid network reliability has never been greater. Web-based applications and new technologies like virtualization and Service Oriented Architecture (SOA), as well as emerging public or shared cloud computing models, are all designed to maximize productivity. But without the proper safeguards these new datacenter paradigms may raise the risk of downtime. The Association for Computer Operations Management/Data Center Institute (AFCOM) forecasts that one in four data centers will experience a serious business disruption over the next five years.
At the same time, customer interviews revealed that over half of all businesses – 56% – lack the budget for high availability technology. Another ongoing challenge is that 78% of survey participants acknowledged that their companies either lack the skills or simply do not attempt to quantify the monetary and business costs associated with hourly downtime. The reasons for this are well documented. Some organizations don’t routinely do this, and those that attempt to calculate costs and damages run into difficulties collecting data because the data resides with many individuals across the enterprise. Inter-departmental communication, cooperation and collaboration are sorely lacking at many firms. Only 22% of survey respondents were able to assign a specific cost to one hour of downtime, and most of them gave conservative estimates of $1,000 to $25,000 for a one hour network outage. Only 13% of the 22% of survey participants who were able to quantify the cost of downtime indicated that their hourly losses would top $175,000 or more.

Users Confident and Committed to Virtualization Technology
The news was more upbeat with respect to virtualization – especially server virtualization deployments. Organizations are both confident and comfortable with virtualization technology.
72% of respondents indicated the number of desktop and server-based applications demanding high availability has increased over the past two years. The survey also found that a 77% majority of participants run business critical applications on virtual machines. Not surprisingly, the survey data showed that virtualization usage will continue to expand over the next 12 months. A 79% majority – approximately eight out of 10 respondents – said the number of business critical applications running on virtual machines and virtual desktops will increase significantly over the next year. Server virtualization is very much a mainstream and accepted technology. The responses to this question indicate increased adoption as well as confidence. Nearly one-quarter of the respondents – 24% – say that more than 75% of their production servers are VMs. Overall, 44% of respondents say that over 50% of their servers are VMs. However, none of the survey participants indicate that 100% of their servers are virtualized. Additionally, only 6% of survey respondents …


ITIC 2009 Global Server Hardware & Server OS Reliability Survey Results

For the second year in a row, IBM AIX UNIX running on the Power or “P” series servers, scored the highest reliability ratings among 15 different server operating system platforms – including Linux, Mac OS X, UNIX and Windows.

Those are the results of the ITIC 2009 Global Server Hardware and Server OS Reliability Survey, which polled C-level executives and IT managers at 400 corporations from 20 countries worldwide. The results indicate that the IBM AIX operating system running on Big Blue’s Power servers (System p5s) is the clear winner, offering rock solid reliability. The IBM servers running AIX consistently score at least 99.99% uptime, or just 15 minutes of unplanned per server, per annum downtime.

Overall, the results showed improvements in reliability, patch management procedures and an across-the-board reduction in per server, per annum Tier 1, Tier 2 and the most severe Tier 3 outages.  Among the other survey highlights:

  • IBM leads all vendors for both server hardware and server OS reliability, as well as the fewest number of Tier 1, Tier 2 and Tier 3 unplanned server outages per year. IBM AIX running on the System p5s had less than one unplanned outage incident per server in a 12 month period. More impressively, the IBM servers experienced no Tier 3 outages. Tier 3 outages are the most severe and usually involve more than four hours – a half-day’s worth – of downtime and can also result in lost data.
  • HP UX also performed well, though HP servers notch approximately 25 minutes more downtime than IBM servers, depending on model and configuration – or just under 40 minutes of per server, per annum downtime.
  • IT managers spend approximately 11 minutes to apply patches to IBM servers running the AIX operating system, which is again the least amount of time spent patching any server or operating system. The open source Ubuntu distribution is a close second, with IT managers spending 12 minutes to apply patches, while IT managers in the Apple Mac OS X 10.x, Novell SuSE and customized Linux distribution environments each spend 15 to 19 minutes applying patches.
  • IBM also took top honors in another important category: IBM Power servers and AIX experience the lowest amount of the more severe Tier 2 and Tier 3 outages combined of any server hardware or server operating system. The combined total of Tier 2 and Tier 3 outages accounted for just 19% of all per server, per annum failures.
  • Microsoft Windows Server 2003 and Windows Server 2008 showed the biggest improvements of any of the vendors. The Windows Server 2003 and 2008 operating systems running on Intel-based platforms saw a 35% reduction in the amount of unplanned per server, per annum downtime, from 3.77 hours in 2008 to 2.42 hours in 2009. The number of annual Windows Server Tier 3 outages also decreased by 31% year over year, and the time spent applying patches similarly declined by 35% from last year to 32 minutes in 2009.
  • This year’s survey, for the first time, also incorporated reliability results for the Apple Mac and OS X 10.x OS platform. The survey respondents indicated that Apple products are extremely competitive in an enterprise setting. IT managers spend approximately 15 minutes per server to apply patches, and Apple Macs recorded just under 40 minutes of per server, per annum downtime.


Corporations Prefer Terra Firma to the Cloud — For Now

Concerns about cloud computing security, and about how fast cloud providers will respond should technical troubles arise, are making companies hesitant to embrace cloud computing — at least within the next 12 months. An 85% majority of the IT Performance Trends survey subjects say they will not implement a public or private cloud between June 2009 and June 2010. However, of that 85%, 31% say they are studying the issue but have made no decision yet, and another 7% are “Unsure.”

Security topped the list of concerns and guarantees that companies would demand from a cloud services provider, if their firms were to implement a cloud model. An overwhelming 83% of respondents said they would need specific guarantees to safeguard their sensitive mission critical data before committing to a cloud. Additionally, almost three-quarters or 73% of respondents would require guaranteed fast response time for technical service and support. Nearly two thirds (63%) of respondents want minimum acceptable latency/response times and a nearly equal number (62%) say they would need multiple access paths to and from the cloud infrastructure.

It was clear from the customer interviews and essay responses that IT managers, especially those companies with fewer than 1,000 end users, will keep their corporate data and applications firmly planted behind the corporate firewall until they have ironclad assurances regarding the security of their data and their ability to access it.

“The idea that I would trust my email, financial transactions, or other day to day business operations to cloud computing is just asking for trouble,” observed an IT manager at a midsized corporation with 500 employees in the Midwest. “I do not even want to imagine all my users being dead in the water because my link to the Internet was down,” he adds. Another manager at a retail firm with 250 employees expressed reservations about the ability of a cloud services vendor to deliver top notch service and support should the need arise.

“Downtime is the bane of an IT professional’s life,” says the network administrator at a retail firm with 250 employees. He noted that when an onsite and locally managed system fails, he and his IT team can take immediate action to replace parts, rebuild the operating system, restore data from tape backup or perform any other action required to restore services and applications. “Compare that to a failure in a cloud computing scenario, when all you can do is report the problem and hurry up and wait,” he says. “Most IT people are action oriented and they won’t respond well to being at the mercy of a cloud provider while listening to complaints and queries from users and management of ‘When will the system be back up?’ or ‘When can I get access to my data?'”

The director of IT at another midsized company with 400 users opined that he does not yet have confidence in the still-emerging cloud computing model. “We own our data, not the cloud provider, and we need to know it is movable if we need to leave the provider.”

Finally, the survey respondents indicated during first person customer interviews that they will continue to chart a conservative course that includes a very low tolerance for risk until the economy recovers and their companies can once again bolster IT staffs and provide more resources.

Analysis

Cloud computing is still in its nascent stages. It’s common for the hype among vendors, the press and the analyst community to outpace current realities in IT, especially among small and midsized businesses, which have smaller budgets and are generally more conservative and risk averse than their enterprise counterparts.

The survey results also showed that there was much more willingness on the part of larger enterprises to explore, test and deploy a cloud infrastructure. Among corporations with over 3,000 end users, a more convincing 57% said they will either deploy or are considering a public or private cloud implementation over the next 12 to 18 months. Even this group, though, is rightfully concerned about the uncertainties of entrusting their sensitive data to a public cloud whose provider may be located in a foreign country.

Therefore, it is imperative that cloud computing vendors provide customers and prospective customers with transparency and full accountability on crucial issues such as security; technical service and support; the equipment and capacity of their data centers; and an overview of the technology used (e.g., specific server equipment, virtualization and management). The vendors should also provide specific SLA levels and guarantees in the event those levels are not met.

Corporations should also perform due diligence. Get informed. Thoroughly investigate and compare the services and options of the various cloud providers. Know where and how your data will be stored, secured and managed. Ask for customer references. Consult with your in-house attorneys or obtain outside counsel to review proposed contracts. Don’t be afraid to insert out clauses and penalties in the event your cloud provider fails to meet SLAs. Also, at this early stage of development, don’t be afraid to ask for discounts and caps on price hikes for the duration of your contract.

