Laura’s Insights ITIC Blog

Happy 1st Birthday Windows 7; Now Can We Please Cancel Microsoft’s MidLife Crisis?

Windows 7 is now officially a year old. Since it was released on October 22, 2009, Microsoft has sold more than 240 million copies of the operating system — approximately seven copies per second. That makes it the fastest-selling operating system in Microsoft’s history, or any vendor’s. Some industry pundits estimate that Windows 7 sales will top 300 million within the next six to eight months.

Microsoft has plenty of other reasons to celebrate Windows 7’s first birthday. Windows 7 has also been one of the most stable, reliable and secure releases in Microsoft’s history.

A nearly three-quarters majority – 73 percent – of the 400+ respondents to the latest joint ITIC/Sunbelt Software poll gave Windows 7 an “excellent,” “very good” or “good” rating. …


Oracle & HP Appear to Have Made Up But They’re Gearing up for Battle

“When two elephants fight, it is the grass that gets trampled.”

— African proverb

The decision by Hewlett-Packard Co. and Oracle Corp. to settle the lawsuit over Oracle’s hiring of Mark Hurd as co-president, after weeks of public wrangling, is welcome news to everyone but the corporate attorneys.

But don’t expect the two vendors to just pick up and resume their former close partnership. It got very ugly, very fast. And the reverberations, from Hurd’s hiring to HP’s recent appointment of Leo Apotheker as the new CEO effective November 1, will be felt for a long time. HP’s decision to hire the German-born Apotheker, who is also the former CEO of SAP, is, to put it politely, a big “take that, Oracle!” Forget the surface smiles; behind the scenes Oracle and HP have their ears pinned back, teeth bared and swords sharpened as they gird for battle.

This was not the typical cross-competitive carping that vendors routinely spew to denigrate their rivals’ products and strategies. The issues between HP and Oracle are very personal and very deep. The verbal volleys Oracle CEO Larry Ellison lobbed at HP in recent weeks exposed the changing nature of this decades-old alliance. It is morphing from a close, mutually beneficial collaboration into a head-on collision in several key product areas. Ellison’s words did more than just wound HP: they also opened up deep fissures in the relationship that are as big as the San Andreas Fault. …

Oracle & HP Appear to Have Made Up But They’re Gearing up for Battle Read More »

SQL Server Most Secure Database; Oracle Least Secure Database Since 2002

Ask any 10 qualified people to guess which of the major database platforms is the most secure and chances are at least half would say Oracle. That is incorrect.

The correct answer is Microsoft’s SQL Server. In fact, the Oracle database has recorded the highest number of security vulnerabilities of any major database platform over the last eight years.

This is not a subjective statement. The data comes directly from the National Institute of Standards and Technology.

Since 2002, Microsoft’s SQL Server has compiled an enviable record. It is the most secure of any of the major database platforms. SQL Server has recorded the fewest number of reported vulnerabilities — just 49 from 2002 through June 2010 — of any database. These statistics were compiled independently by the National Institute of Standards and Technology (NIST), the government agency that monitors security vulnerabilities by technology, vendor, and product (see Exhibit 1). So far in 2010, through June, SQL Server has a perfect record — no security bugs have been recorded by NIST CVE.

And SQL Server was the most secure database by a wide margin: its closest competitor, MySQL (which was owned by Sun Microsystems until Sun’s January 2010 acquisition by Oracle), recorded 98 security flaws, twice as many as SQL Server.

By contrast, during the same eight-and-a-half year period spanning 2002 through June 2010, the NIST CVE recorded 321 security vulnerabilities associated with the Oracle database platform, the highest total of any major vendor. Oracle had more than six times as many reported security flaws as SQL Server during the same time span. NIST CVE statistics recorded 121 security-related issues for the IBM DB2 platform during the past eight-and-a-half years.

Solid security is an essential element for many mainstream line-of-business (LOB) applications, and a crucial cornerstone in the foundation of every organization’s network infrastructure. Databases are the information repositories for many organizations; they contain much of the sensitive corporate data and intellectual property. If database security is compromised, the entire business is potentially at risk.

SQL Server’s unmatched security record is no fluke. It is the direct result of significant Microsoft investment in its Trustworthy Computing Initiative, which the company launched in 2002. In January of that year, Microsoft took the step of halting all new code development for several months across its product lines to scrub the code base and make its products more secure.

The strategy is working. In the 21 months since January 2009, Microsoft has issued only eight SQL Server security-related alerts. To date in 2010 (January through June), no SQL Server vulnerabilities have been recorded by Microsoft or NIST. Microsoft is the only database vendor with a spotless security record for the first six months of 2010.

ITIC conducted an independent Web-based survey on SQL Server security that polled 400 companies worldwide during May and June 2010. The results of the ITIC 2010 SQL Server Security survey support the NIST CVE findings. Among the survey highlights:
• An 83% majority rated SQL Server security “excellent” or “very good” (see Exhibit 2, below).
• None of the 400 survey respondents gave SQL Server security a “poor” or “unsatisfactory” rating.
• A 97% majority of survey participants said they experienced no inherent security issues with SQL Server.
• Anecdotal data obtained during first-person customer interviews also revealed a very high level of satisfaction with the embedded security functions and capabilities of SQL Server 7, SQL Server 2000, SQL Server 2005, SQL Server 2008, and the newest SQL Server 2008 R2 release. In fact, database administrators, CIOs and CTOs interviewed by ITIC expressed approval of Microsoft’s ongoing initiatives to improve SQL Server’s overall security and functionality over the last decade, starting with SQL Server 2000.

Strong security is a must for every organization irrespective of size or vertical industry. Databases are among the most crucial applications in the entire network infrastructure. Information in databases is the organization’s intellectual property and life blood.

Databases are essentially a company’s electronic filing system. The information contained in the database directly influences and impacts every aspect of the organization’s daily operations, including relationships with customers, business partners, suppliers and its own internal end users. All of these users must have the ability to quickly, efficiently and securely locate and access data. The database platform must be secure. An insecure, porous database platform will almost certainly compromise business operations and, by association, any firm that does business with it. Any lapse in database security, including deliberate internal and external hacks, inadvertent misconfiguration, or user errors, can mean lost or damaged data, lost revenue, and damage to the company’s reputation, raising the potential for litigation and loss of business.

It’s also true that organizations bear at least 50 percent of the responsibility for keeping their databases and their entire network infrastructures secure. As the old proverb goes, “A chain is only as strong as its weakest link.” Even the strongest security can be undone or bypassed by user error, misconfiguration or weak computer security practices. No database or network is 100 percent hack-proof or impregnable. Organizations should consult with their vendors regarding any questions and concerns they may have about the security of ANY of their database platforms. They should also ensure they stay current with the latest patches and install the necessary updates. Above all, bolster the inherent security of your databases with the appropriate third-party security tools and applications, and make sure your organization strictly adheres to computer security best practices. At the end of the day, only you can defend your data.

Registered ITIC site users can Email me at: ldidio@itic-corp.com for a copy of the full report.


The Dog Days of Summer & High Tech Hijinks

In the mid-to-late 1980s, colleagues and friends were surprised when I transitioned from working as an on-camera investigative TV reporter to covering the then-fledgling high technology industry for specialized trade magazines.
After all, they reasoned, how could I be content covering semiconductors, memory boards, server hardware, software and computer networks after working as a mainstream journalist covering stories such as lurid political and law enforcement corruption scandals; drug trafficking; prostitution; the dumping of tainted substances on unsuspecting third world nations; and cover-ups by big business when their planes, trains and automobiles malfunctioned? How could I trade in “murder and mayhem” for the staid, sterile world of high technology?
They needn’t have worried.
Admittedly, mastering the technology was a challenge. For the first few weeks, every time I did a story on PALs and had to spell out the acronym, I wrote “Police Athletic League” instead of Programmable Array Logic. And then there was my first work-related trip to Las Vegas to cover the mammoth spectacle that was Comdex circa 1988. In the dark ages before wireless, laptops and decent broadband, it was nearly impossible to file stories from your hotel room because the trunk lines were overwhelmed. A colleague and I were forced to trek down to a bank of pay phones to transmit our news articles at 2:30 a.m. and were mistaken for hookers. The pay was arguably better than a journalist’s salary, but we passed. Incidents like this made me feel close to my cops-and-crimes, murder-and-mayhem investigative TV roots.
I felt at home covering technology right away. Within a month, I was chronicling tales of high tech companies sending their top executives off to rehab for drug and alcohol addiction. There was a rash of top executives leaving established powerhouses, taking top engineers and sales executives with them, which in turn precipitated a slew of theft-of-trade-secrets and patent infringement lawsuits. Things really got interesting when Robert Morris, Jr. launched his now infamous Internet Worm; and there were myriad other tales of sex scandals involving corporate executives, board of director fights and coups, price fixing, hostile takeovers, corporate espionage and fiscal chicanery that entailed everything from embezzlement and theft to cooking the books.
Reality TV and the tabloids have nothing on high technology industry hijinks.
Fast forward to what’s making headlines during these “Dog Days” of summer 2010. The ancient Greeks and Romans believed that the dog days of summer (named after the constellation Sirius, or Dog Star) lasted from late July to early September and that the hot weather foreshadowed evil doings. John Brady’s “Clavis Calendarium” of 1813 describes it as “an evil time when the seas boiled, wine turned sour, dogs grew mad, and all creatures became languid, causing to man burning fevers, hysterics, and phrensies.” The recent spate of high tech headlines seems to bear that out. Here’s a sampling:
• The Hewlett-Packard board of directors abruptly fired CEO Mark Hurd, after allegations of sexual harassment surfaced.
• Oracle CEO Larry Ellison publicly blasted the HP board for firing Mark Hurd.
• Oracle sued Google for alleged patent and copyright infringement involving the use of Java intellectual property in Google’s mobile Android operating system.
• Google Street View mapping prompted privacy lawsuits and raids in several countries, including South Korea.
• Google released version 6 of its Chrome web browser and vowed to issue a stable new release every six weeks.
The headlines provide an accurate assessment of both the current state and the direction of the high tech industry. Four words say it all: sex, money, power and posturing. Let’s examine some of the stories in more detail.
The HP board of directors’ decision to fire CEO Mark Hurd after five years of stewardship remains cloaked in mystery. Hurd may or may not have been guilty of fudging expense reports and engaging in conduct not up to HP’s standards with Jodie Fisher, a contract HP “adviser” and sometime actress. In addition to being an adviser, Fisher also received $5,000 to attend HP events acting as a “meet and greet” hostess. Fisher, who retained the services of celebrity lawyer Gloria Allred, may or may not have been a victim of harassment. We don’t know for sure because all of the principals in this tableau are mum. Rumors are rife that the “real reasons” HP’s board may have shown Hurd the door are that: 1) he may have been more involved than was previously thought in the 2006 HP board of directors “pretexting” scandal, in which HP board members illegally spied on other board members to learn the source of news leaks; and 2) Hurd was exceedingly unpopular with rank and file HP employees.
By all monetary measures, Hurd’s five-year stint at HP was a resounding success. And for that, Hurd will walk away with a $40 to $50 million severance package. No one knows how much Fisher received, because Hurd and Fisher settled whatever transpired between them privately. But it must be a pretty good sum, because Fisher issued a very upbeat and conciliatory statement saying she did not intend for Hurd to lose his job and wishes Hurd, his family and HP all the best. Thankfully, I read this on an empty stomach!
What’s wrong with this picture? Plenty.
The real victims here are HP’s rank and file employees, the American worker and sexual harassment victims – both men and women – who lack the clout to hire a Gloria Allred to rattle her saber for another 15 minutes of fame and a quick, inglorious settlement.
The average Joe and Jane worker have seen their ranks decimated with each new acquisition and round of layoffs. HP currently ranks number 9 on the Fortune 500 list. In the past several years it has acquired Compaq, EDS, 3Com and Palm. Those mergers and acquisitions helped HP become the first high tech company with annual revenues exceeding the $100 billion threshold. HP is also first in another category – albeit an unwelcome one: despite its stellar financial performance, over the last decade HP has cut more jobs (most of them here in the U.S.) than any other high tech firm. The tally of eliminated jobs stands at approximately 85,000.
So Mark Hurd gets $40 to $50 million and tens of thousands of HP’s American employees get shown the door.
Then there’s Ms. Fisher. I know nothing about the woman. One must presume that if Hurd was willing to settle with her, her claim had some merit. However, as soon as I heard she was represented by Allred, I cringed. Allred has turned into a modern-day Carrie Nation for the tabloid TV generation. In an age of instant and continual information via the tabloids and the Web, publicity is the chief currency – the more salacious and lurid, the bigger the settlement. I phoned Allred’s office to inquire how many pro bono and non-celebrity sexual harassment cases she handles. I haven’t heard back yet and I’m not too hopeful.
The Equal Employment Opportunity Commission (EEOC) received 12,696 complaints of sexual harassment in the workplace – 16% of them by men. The EEOC says it recovered $51.5 million in monetary benefits for those nearly 13,000 workers. That’s probably just about what Mark Hurd, Jodie Fisher and Gloria Allred pocketed among the three of them. Nice work if you can get it.
That brings me to another prominent headline of the past couple of weeks: Oracle chief Larry Ellison, in an interview with the New York Times, blasted the HP board for firing his longtime friend Mark Hurd. Ellison’s comments have all the credence of a professional athlete convicted of using steroids writing an editorial extolling the virtues of doping. Oracle, which completed its acquisition of Sun Microsystems earlier this year, is gearing up to axe between one-third and one-half of Sun’s workforce of over 25,000. No one is sure exactly how many Oracle employees will be pink slipped, but estimates range from 5,000 to as high as 10,000. Oracle disclosed in a recent government filing that it will write off $825,000 in restructuring charges.
The question is will Larry Ellison make room for Mark Hurd at Oracle? He might. Hurd has a proven record of cutting costs, cutting people and thus delivering value to shareholders.
The real measure of a company’s success should not be how many jobs it cuts, but how many jobs it creates for the American worker.
Oracle also made headlines and flexed its muscles last week with the announcement that it is suing Internet search engine giant Google for allegedly infringing on Java patents that Oracle now owns as part of the Sun acquisition and that are used in Google’s mobile Android operating system. This is all about Oracle making a preemptive strike to try to contain Google in what’s shaping up to be a battle of high tech titans. Google’s Android OS runs on many of the major mobile phone platforms, including those from Motorola and HTC Corp. The implications are enormous. Don’t expect this one to ever get to court. Neither firm wants to spend millions or expend precious corporate resources on a protracted legal battle, which would be detrimental to both sides. Expect them to settle. But we can also expect the acrimony between these two rivals to rise commensurately along with the stakes in the mobile market.
Google meanwhile engaged in some posturing of its own. The company released beta version 6 of its Google Chrome web browser and says it will issue a stable new release of the browser every six weeks. This move is clearly designed as a challenge to Microsoft Internet Explorer, Mozilla Firefox and Apple Safari. While I applaud Google’s initiative and desire to retain its competitive edge, releasing a new version of its browser every six weeks is overkill. No matter how fast Google or any vendor makes its browser, the actual speeds are still determined by the user’s broadband connection. And frankly, the constant application upgrades to everyday packages like Adobe, WordPress and the various browsers are already a nuisance; one can barely log on to an application without being hounded to upgrade to the latest version.
But these days, companies feel compelled to make an announcement just to keep their names in the headlines at all costs. There’s never a dull moment in the high tech industry, especially during the dog days of summer. I can’t wait to see what fall brings. If you have any ideas, Email me at: ldidio@itic-corp.com.


Cloud Computing: Pros and Cons

Cloud computing, like any emerging technology, has both advantages and disadvantages. Before beginning any infrastructure upgrade or migration, organizations are well advised to first perform a thorough inventory and review of their existing legacy infrastructure and make the necessary upgrades, revisions and modifications. Next, the organization should define its business goals for the next three to five years to determine whether, when and what type of cloud infrastructure to adopt. It should also construct an operational and capital expenditure budget and a timeframe that includes research, planning, testing, evaluation and final rollout.
Public Clouds: Advantages and disadvantages
The biggest allure of a public cloud infrastructure over traditional premises-based network infrastructures is the ability to offload the tedious and time consuming management chores to a third party. This in turn can help businesses:
• Shave precious capital expenditure monies, because they avoid expensive investments in new equipment including hardware, software and applications, as well as the attendant configuration planning and provisioning that accompanies any new technology rollout.
• Accelerate the deployment timetable. Having an experienced third-party cloud services provider do all the work speeds deployment and most likely means less time spent on trial and error.
• Construct a flexible, scalable cloud infrastructure that is tailored to their business needs. A company that has performed its due diligence and is working with an experienced cloud provider can architect a cloud infrastructure that will scale up or down according to the organization’s business and technical needs and budget.
The potential downside of a public cloud is that the business is essentially renting common space with other customers. As such, depending on the resources of the particular cloud model, there is potential for performance, latency and security issues, as well as questions about acceptable response times and service and support from the cloud provider.
Risk is another potential pitfall associated with outsourcing any of your firm’s resources and services to a third party. To mitigate risk and lower it to an acceptable level, it’s essential that organizations choose a reputable, experienced third party cloud services provider very carefully. Ask for customer references; check their financial viability. Don’t sign up with a service provider whose finances are tenuous and who might not be in business two or three years from now.
The cloud services provider must work closely and transparently with the corporation to build a cloud infrastructure that best suits the business’ budget, technology and business goals.
To ensure that the expectations of both parties are met, organizations should create a checklist of the items and issues that are of crucial importance to their business and incorporate them into Service Level Agreements (SLAs). Be as specific as possible. These should include, but are not limited to:

• What types of equipment do they use?
• How old is the server hardware? Is the configuration powerful enough?
• How often is the data center equipment/infrastructure upgraded?
• How much bandwidth does the provider have?
• Does the service provider use open standards or is it a proprietary datacenter?
• How many customers will you be sharing data and resources with?
• Where is the cloud services provider’s datacenter physically located?
• What specific guarantees if any, will it provide for securing sensitive data?
• What level of guaranteed response time will it provide for service and support?
• What is the minimum acceptable latency/response time for its cloud services?
• Will it provide multiple access points to and from the cloud infrastructure?
• What specific provisions will apply to Service Level Agreements (SLAs)?
• How will financial remuneration for SLA violations be determined? (See the sketch after this list.)
• What are the capacity ceilings for the service infrastructure?
• What provisions will there be for service failures and disruptions?
• How are upgrade and maintenance provisions defined?
• What are the costs over the term of the contract agreement?
• How much will the costs rise over the term of the contract?
• Does the cloud service provider use the Secure Sockets Layer (SSL) to transmit data?
• Does the cloud services provider encrypt data at rest to restrict and control access?
• How often does the cloud services provider perform audits?
• What mechanisms will it use to quickly shut down a hack and can it track a hacker?
• If your cloud services provider is located outside your country of origin, what are the privacy and security rules of that country and what impact will that have on your firm’s privacy and security issues?
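To make the SLA remuneration question above more concrete, here is a minimal, hypothetical sketch (in Python) of a tiered service-credit calculation. The 99.9% target, the credit tiers and the fee figures are invented for illustration only; an actual contract would define its own thresholds and remedies.

# Hypothetical tiered service-credit calculation for an SLA.
# The uptime thresholds and credit percentages below are invented examples;
# real contracts define their own tiers and remedies.

# (minimum measured uptime %, credit as a fraction of the monthly fee)
CREDIT_TIERS = [
    (99.9, 0.00),  # SLA met: no credit owed
    (99.0, 0.10),  # at least 99.0% but below 99.9%: 10% credit
    (95.0, 0.25),  # at least 95.0% but below 99.0%: 25% credit
    (0.0, 0.50),   # below 95.0%: 50% credit
]

def service_credit(measured_uptime_pct, monthly_fee):
    """Return the credit owed for one month, given the measured uptime."""
    for threshold, credit_fraction in CREDIT_TIERS:
        if measured_uptime_pct >= threshold:
            return monthly_fee * credit_fraction
    return monthly_fee * CREDIT_TIERS[-1][1]

print(service_credit(99.95, 5000))  # 0.0    -- SLA met, no credit
print(service_credit(99.4, 5000))   # 500.0  -- 10% credit
print(service_credit(97.2, 5000))   # 1250.0 -- 25% credit

The exact tiers matter less than getting them, and the measurement method behind them, written into the SLA before signing.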
Finally, the corporation should appoint a liaison, and that person should meet regularly with a representative from the cloud services provider to ensure that the company attains its immediate goals and remains aware of, and working toward, future technology and business goals. Outsourcing all or any part of your infrastructure to a public cloud does not mean forgetting and abandoning it.
Private Clouds: Advantages and Disadvantages
The biggest advantage of a private cloud infrastructure is that your organization keeps control of its corporate assets and can safeguard and preserve its privacy and security. Your organization is in command of its own destiny. That can be a double-edged sword.
Before committing to build a private cloud model, the organization must do a thorough assessment of its current infrastructure, its budget and the expertise and preparedness of its IT department. Is your firm ready to assume responsibility for such a large burden from both a technical and an ongoing operational standpoint? Only you can answer that. Remember that the private cloud should be highly reliable and highly available – at least 99.999% uptime, with built-in redundancy and failover capabilities. Many organizations currently struggle to maintain 99.9% uptime and reliability, which is the equivalent of 8.76 hours of downtime per server, per year. When your private cloud is down for any length of time, your end users (and anyone else who has access to the cloud) will be unable to access resources.
Realistically, in order for an organization to successfully implement and maintain a private cloud, it needs the following:
• Robust equipment that can handle the workloads efficiently during peak usage times
• An experienced, trained IT staff that is familiar with all aspects of virtualization, virtualization management, grid, utility and chargeback computing models
• An adequate capital expenditure and operational expenditure budget
• The right set of private cloud product offerings and service agreements
• Appropriate third party virtualization and management tools to support the private cloud
• Specific SLA agreements with vendors, suppliers and business partners
• Operational level agreements (OLAs) to ensure that each person within the organization is responsible for specific routine tasks, as well as for specific duties in the event of an outage
• A disaster recovery and backup strategy
• Strong security products and policies
• Efficient chargeback utilities, policies and procedures
Other potential private cloud pitfalls include deciding which applications to virtualize, vendor lock-in, and integration and interoperability issues. Businesses grapple with these same issues today in their existing environments. At present, however, the product choices from vendors and third party providers are more limited for virtualized private cloud offerings. Additionally, since the technology is still relatively new, it will be difficult, from both a financial and a technical standpoint, to switch horses in midstream from one cloud provider to another if you encounter difficulties.
There is no doubt that adoption of virtualized public and private cloud infrastructures will grow significantly in the next 12 to 18 months. In order to capitalize on their benefits, lower your total cost of ownership (TCO), accelerate return on investment (ROI) and mitigate risk, your organization should take its time and do it right.


Cloud Computing: De-Mystifying the Cloud

Every year or so the high technology industry gets a new buzzword or experiences a paradigm shift which is hyped as “the next big thing.”
For the last 12 months or so, cloud computing has had that distinction. Anyone reading all the vendor-generated cloud computing press releases and associated news articles and blogs would conclude that corporations are building and deploying both private and public clouds in record-breaking numbers. The reality is much more sobering. An independent ITIC Web-based survey that polled IT managers and C-level professionals at 700 organizations worldwide in January 2010 found that spending on cloud adoption was not a priority for the majority of survey participants during calendar 2010. In fact, only 6 percent of participants said that private cloud spending was a priority this year, and an even smaller 3 percent minority said that public cloud spending is a priority.
Those findings are buttressed by the latest joint ITIC/Sunbelt Software survey data (which is still live); it indicates that just under 20 percent of organizations have implemented a public or a private cloud. When asked why, nearly two-thirds – 65 percent – of the respondents said they felt no compelling business need. Translation: they feel safe inside the confines of their current datacenters here on terra firma.

While there is a great deal of interest in the cloud infrastructure model, the majority of midsized and enterprise organizations are not rushing to install and deploy private or public clouds in 2010.

However, that is not to say that organizations – especially mid-sized and large enterprises – are not considering cloud implementations. ITIC research indicates that many businesses are more focused on performing much needed upgrades to such essentials as disaster recovery, desktop and server hardware, operating systems, applications, bandwidth and storage before turning their attention to new technologies like cloud computing.
Despite the many articles written about public and private cloud infrastructures over the past 18 months, many businesses remain confused about cloud specifics such as characteristics, costs, operational requirements, integration and interoperability with their existing environment or how to even get started.
De-Mystifying the Cloud
But just what is cloud computing, exactly? Definitions vary. The simplest, most straightforward definition is that a cloud is a grid or utility style pay-as-you-go computing model that uses the Web to deliver applications and services in real-time.
Organizations can choose to deploy a private cloud infrastructure, wherein they host their services on-premises behind the safety of the corporate firewall. The advantage here is that the IT department always knows what’s going on with all aspects of the corporate data, from bandwidth and CPU utilization to all-important security issues. Alternatively, organizations can opt for a public cloud deployment in which a third party like Amazon Web Services (a division of Amazon.com) hosts the services at a remote location. This latter scenario saves businesses money and manpower hours by utilizing the host provider’s equipment and management. All that is needed is a Web browser and a high-speed Internet connection to connect to the host and access applications, services and data. However, the public cloud infrastructure is also a shared model in which corporate customers share bandwidth and space on the host’s servers.
Organizations that are extremely concerned about security and privacy issues and those that desire more control over their data can opt for a private cloud infrastructure in which the hosted services are delivered to the corporation’s end users from behind the safe confines of an internal corporate firewall. However, a private cloud is more than just a hosted services model that exists behind the confines of a firewall. Any discussion of private and/or public cloud infrastructure must also include virtualization. While most virtualized desktop, server, storage and network environments are not yet part of a cloud infrastructure, just about every private and public cloud will feature a virtualized environment.
Organizations contemplating a private cloud also need to ensure that it features very high (near fault tolerant) availability, with at least “five nines” – 99.999% – uptime or better. The private cloud should also be able to scale dynamically to accommodate the needs and demands of the users. And unlike most existing, traditional datacenters, the private cloud model should also incorporate a high degree of user-based resource provisioning. Ideally, the IT department should also be able to track resource usage in the private cloud by user, department or groups of users working on specific projects, for chargeback purposes.
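As an illustration of the user- and department-level tracking described above, the following minimal sketch (in Python) rolls hypothetical per-user metering records up into departmental chargeback totals. The departments, users, metrics and unit rates are all invented for illustration; a real private cloud would pull these records from its own metering and provisioning systems.

# Hypothetical chargeback roll-up: aggregate per-user usage records into
# per-department charges. All names and rates below are invented examples.
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "gb_storage_days": 0.01}  # illustrative unit prices

usage_records = [
    {"user": "alice", "department": "finance", "cpu_hours": 120, "gb_storage_days": 400},
    {"user": "bob", "department": "engineering", "cpu_hours": 900, "gb_storage_days": 2500},
    {"user": "carol", "department": "engineering", "cpu_hours": 300, "gb_storage_days": 800},
]

def chargeback_by_department(records):
    """Sum the metered cost of each usage record into its owning department."""
    totals = defaultdict(float)
    for record in records:
        cost = sum(record[metric] * rate for metric, rate in RATES.items())
        totals[record["department"]] += cost
    return dict(totals)

print(chargeback_by_department(usage_records))
# {'finance': 10.0, 'engineering': 93.0}

However the roll-up is implemented, the point is the same: without per-user metering, chargeback (and capacity planning) in a private cloud is guesswork.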
Private clouds will also make extensive use of business intelligence and business process automation to guarantee that resources are available to the users on demand.
Given the Spartan economic conditions of the last two years, all but the most cash-rich organizations (and there are very few of those) will almost certainly have to upgrade their network infrastructure in advance of migrating to a private cloud environment. Organizations considering outsourcing any of their datacenter needs to a public cloud will also have to perform due diligence to determine the bona fides of their potential cloud service providers.
There are three basic types of cloud computing although the first two are the most prevalent. They are:
• Software as a Service (SaaS), which uses the Web to deliver software applications to the customer. Examples include Salesforce.com, which offers one of the earliest, most popular and most widely deployed cloud-based CRM applications, and Google Apps, which is experiencing solid growth. Google Apps comes in three editions – Standard, Education and Premier (the first two are free). It provides consumers and corporations with customizable versions of the company’s applications like Google Mail, Google Docs and Calendar.
• Platform as a Service (PaaS) offerings; examples include the above-mentioned Amazon Web Services and Microsoft’s nascent Windows Azure Platform. The Microsoft Azure cloud platform offering contains all the elements of a traditional application stack, from the operating system up to the applications and the development framework. It includes the Windows Azure Platform AppFabric (formerly .NET Services for Azure) as well as the SQL Azure Database service. Customers that build applications for Azure will host them in the cloud. However, it is not a multi-tenant architecture meant to host your entire infrastructure. With Azure, businesses will rent resources that reside in Microsoft datacenters. The costs are based on a per-usage model. This gives customers the flexibility to rent fewer or more resources depending on their business needs.
• Infrastructure as a Service (IaaS) is exactly what its name implies: the entire infrastructure becomes a multi-tiered hosted cloud model and delivery mechanism.
Both public and private clouds should be flexible and agile: the resources should be available on demand and should be able to scale up or scale back as the businesses’ needs dictate.

Next: In Part 2 The Pros and Cons of the Cloud


Apple, Google Grapple for Top Spot in Mobile Web

Since January, the high technology industry has witnessed a dizzying spate of dueling, vendor product announcements.
So what else is new? It’s standard operating procedure for vendors to regularly issue hyperbolic proclamations about their latest/greatest offering, even (or especially) when the announcements are as devoid of content as cotton candy is of nutritional value. Maybe it’s just an outgrowth of the digital information age. We live and breathe instant information that circumnavigates the globe faster than you can say Magellan; the copy monster must be fed constantly. Or maybe it’s the protracted economic downturn which is making vendors hungrier than ever for consumer and corporate dollars.
Whatever the reason, there’s no doubt that high technology vendors – led by Google and Apple – are engaged in a near constant game of one-upmanship.
Apple indirectly started this trend in early January, when word began leaking out that Apple would finally announce the long-rumored iPad tablet in late January. The race was on among other tablet vendors to announce their products at the Consumer Electronics Show (CES) in Las Vegas in mid-January to beat Apple to the punch. A half-dozen vendors, including ASUSTeK Computer (ASUS), Dell, Hewlett-Packard, Lenovo, Taiwanese manufacturer Micro Star International (MSI) and Toshiba, all raced to showcase their forthcoming wares in advance of Apple. It made good marketing sense: all of these vendors knew that once Apple released the iPad, their chances of getting PR would be sorely diminished.
I have no problem with smaller vendors or even large vendors like Dell and HP, who rightfully reckon that they have to make their announcements in advance of a powerhouse like Apple to ensure that their products don’t get overlooked.
Apple vs. Google Battle of the Mobile Web Titans
But when the current industry giants and media darlings like Apple and Google start slugging it out online, in print and at various conferences, it’s overwhelming.
Apple and Google are just the latest in a long line of high technology rivalries. In the 1970s it was IBM vs. HP; in the 1980s, the rise of networking created several notable rivalries: IBM vs. Digital Equipment Corp. (DEC); IBM vs. Microsoft; Oracle vs. IBM; Novell vs. 3Com; Novell vs. Microsoft; Cabletron vs. Synoptics and Cisco vs. all the internetworking vendors. By the 1990s it was Microsoft vs. Netscape and Microsoft vs. pretty much everyone else.
The Apple vs. Google rivalry differs from earlier technology contests in that the relationship between the two firms began as a friendly one and to date, there has been no malice. Until August, 2009 Google CEO Eric Schmidt was on Apple’s board of directors. And while the competition between these two industry giants is noticeably devoid of the rancor that characterized past high tech rivalries, it’s safe to say that the two are respectfully wary of each other. Apple and Google are both determined not to let the other one get the upper hand, something they fear will happen if there is even the slightest pause in the endless stream of headlines.
Google and Apple started out in different markets – Google in the online search engine and advertising arena and Apple as a manufacturer of consumer hardware devices and software applications. Their respective successes – Apple’s with its Mac hardware and Google’s with its search engine of the same name have led them to this point: a head to head rivalry in the battle for supremacy of the mobile Web arena.
On paper, they appear to be two equally matched gladiators. Both companies have huge amounts of cash. Apple has $23 billion in the bank and now boasts the highest valuation of any high technology company, with a current market cap of $236.3 billion, surpassing Microsoft for the top spot. Google has $26.5 billion in cash and a valuation of $158.6 billion. Both firms have two of the strongest management and engineering teams in Silicon Valley. Apple has the iconic Steve Jobs, who since his return has revitalized the company. Google is helmed by co-founders and creative geniuses Larry Page and Sergey Brin, along with Eric Schmidt, the CEO who knows how to build computers and make the trains run on time.
Fueling this rivalry is Apple’s and Google’s stake in mobile devices and operating systems. In Apple’s case this means the wildly successful iPhone, iPod Touch and, most recently, the iPad and the Mac Mini. Google’s lineup consists of its Chrome OS and Android OS, which will power tablet devices like Dell’s newly announced Streak and Lenovo’s forthcoming U1 hybrid tablet/notebook, due out later this year. The rivalry between the two is quite literally getting down to the chip level. Intel, which has for so long been identified with Microsoft’s Windows-based PC platform, is now expanding its support for Android – a move company executives have described as its “port of choice” gambit. Apple is no slouch in this area, either: its Macs, from the Mac Mini to the MacBook Pro, ship with Intel inside. Last week Nvidia CEO Jen-Hsun Huang weighed in on the Apple/Google rivalry on Google’s side, predicting that tablet designs will converge around Google’s operating system.
But a stroll through any airport, mall, consumer home or office would give a person cause to dispute Huang’s claim: iPads and iPhones are everywhere. Apple recently announced that it has sold over two million iPads since the device first shipped in April. During a business trip from Boston to New Orleans last week I found that Apple iPads were as much in evidence as hot dogs at a ballpark.
Ironically, Microsoft, a longtime rival of both Apple and Google, is not mentioned nearly so often in the smart phone and tablet arenas. That’s because Microsoft’s Windows OS is still searching for a tablet to call its own. Longtime Microsoft partner HP abruptly switched course: after Microsoft CEO Steve Ballmer got on stage and demonstrated Windows 7 running on HP’s slate, HP bought Palm and earlier this week acquired the assets of Phoenix Technologies, which makes an operating system for tablets. That leaves Microsoft to promote its business-centric Windows Phone 7, which will run Xbox LIVE games, Zune music and the company’s Bing search engine. All is not lost for Microsoft: longtime “frenemy” Apple CEO Steve Jobs said recently that the new iPhone 4G will run Microsoft’s Bing, fueling speculation that Apple will drop support for Google’s search engine. Both Google and Apple are still competing with Microsoft in other markets like operating systems, games and application software, to name a few, but that’s another story.
There are other competitors in the smart phone and tablet markets but you’d hardly know it from the headlines. Research In Motion’s (RIM) Blackberry is still a market leader. But Apple and Google continue to dominate the coverage. I guess high technology just like sports revels in a classic rivalry. And this one promises to be a hard fought struggle.


Microsoft Azure Platform, BPOS Cloud Vision Must Address Licensing

Microsoft did a very credible job at its TechEd conference in New Orleans last week, laying out the technology roadmap and strategy for a smooth transition from premises-based networks/services to its emerging Azure cloud infrastructure and software + services model.

One of the biggest challenges facing Microsoft and its customers as it stands on the cusp of what Bob Muglia, president of Microsoft’s Server & Tools Business (STB) unit characterized as a “major transformation in the industry called cloud computing,” is how the Redmond, Wash. software giant will license its cloud offerings.

Licensing programs and plans—even those that involve seemingly straightforward and mature software, PC- and server-based product offerings—are challenging and complex in the best of circumstances. This is something Microsoft knows only too well from experience. Constructing an equitable, easy-to-understand licensing model for cloud-based services could prove to be one of the most daunting tasks on Microsoft’s Azure roadmap.

It is imperative that Microsoft proactively address the cloud licensing issues now, and Microsoft executives are well aware of this. During the Q&A portion of one cloud-related TechEd session, Robert Wahbe, corporate vice president, STB Marketing was asked, “What about licensing?” He took a sip from his water bottle and replied, “That’s a big question.”

That is an understatement.

Microsoft has continually grappled with simplifying and refining its licensing strategy since it made a major misstep with Licensing 6.0 in May 2001, when the initial offering was complex, convoluted and potentially very expensive. It immediately met with a huge, vocal outcry and backlash. The company was compelled to postpone the Licensing 6.0 launch while it re-tooled the program to make it more user-friendly from both a technical and a cost perspective.

Over the last nine years, Microsoft’s licensing program and strategy have become among the best in the high-technology industry. The program offers simplified terms and conditions (T&Cs); greater discounts for even the smallest micro SMBs; a variety of add-on tools (e.g. licensing compliance and assessment utilities); and access to freebies, such as online and onsite technical service and training, for customers who purchase the company’s Software Assurance (SA) maintenance and upgrade agreement along with their Volume Licensing deals.

Licensing from Premises to the Cloud
Microsoft’s cloud strategy is a multi-pronged approach that incorporates a wide array of offerings, including Windows Azure, SQL Azure and Microsoft Online Services (MOS). MOS consists of hosted versions of Microsoft’s most popular and widely deployed server applications, such as Exchange Server and SharePoint Server. Microsoft’s cloud strategy also encompasses consumer products like Windows Live, Xbox Live and MSN.

Microsoft is also delivering a hybrid cloud infrastructure that will enable organizations to combine premises-based with hosted cloud solutions. This will indisputably provide Microsoft customers with flexibility and choice as they transition from a fixed-premises computing model to a hosted cloud model. In addition, it will allow them to migrate to the cloud at their own pace as their budgets and business needs dictate. However, the very flexibility, breadth and depth of offerings that make Microsoft products so appealing to customers, ironically, are the very issues that increase the complexity and challenges of creating an easily accessible, straightforward licensing model.

Dueling Microsoft Clouds: Azure vs. BPOS
Complicating matters is that Microsoft has dueling cloud offerings: the Business Productivity Online Suite (BPOS) and the Windows Azure Platform. As a result, Microsoft must also develop, delineate and differentiate its strategy, pricing and provisions for Azure and BPOS. It’s unclear (at least to this analyst) when and how a customer will choose one or mix and match BPOS and Azure offerings. Both are currently works in progress.

BPOS is a licensing suite and a set of collaborative end-user services that run on Windows Server, Exchange Server, and SQL Server. Microsoft offers the BPOS Standard Suite, which incorporates Exchange Online, SharePoint Online, Office Live Meeting, and Office Communications (OCS) Online. The availability of the latter two offerings is a key differentiator that distinguishes Microsoft’s BPOS from rival offerings from Google. Microsoft also sells the BPOS Business Productivity Online Deskless Worker Suite. It consists of Exchange Online Deskless Worker, SharePoint Online Deskless Worker and Outlook Web Access Light. This BPOS package is targeted at SMBs, small branch offices or companies that want basic, entry-level messaging and document collaboration functions.

By contrast, Azure is a cloud platform offering that contains all the elements of a traditional application stack from the operating system up to the applications and the development framework. It includes the Windows Azure Platform AppFabric (formerly .NET Services for Azure), as well as the SQL Azure Database service.

While BPOS is aimed squarely at end users and IT managers, Azure targets third-party ISVs and internal corporate developers. Customers that build applications for Azure will host them in the cloud. However, it is not a multi-tenant architecture meant to host your entire infrastructure. With Azure, businesses will rent resources that reside in Microsoft datacenters. The costs are based on a per-usage model. This gives customers the flexibility to rent fewer or more resources, depending on their business needs.

Cloud Licensing Questions
Any cloud licensing or hybrid cloud licensing program that Microsoft develops must include all of the elements of its current fixed premises and virtualization models. This includes:

1. Volume Licensing: As the technology advances from fixed-premises software and hardware offerings to private and public clouds, Microsoft must find ways to translate the elements of its current Open, Select and Enterprise agreements to address the broad spectrum of users, from small and midsized businesses (SMBs) to the largest enterprises, with the associated discounts for volume purchases.
2. Term Length: The majority of volume license agreements are based on a three-year product lifecycle. During the protracted economic downturn, however, many companies could not afford to upgrade. A hosted cloud model, though, will be based on usage and consumption, so the terms should and most likely will vary.
3. Software Assurance: Organizations will still need upgrade and maintenance plans regardless of where their data resides and whether or not they have traditional subscription licensing or the newer consumption/usage model.
4. Service and Support: Provisions for after-market technical services, support and maintenance will be crucial for Microsoft, its users, resellers and OEM channel partners. ITIC survey data indicates that the breadth and depth of after-market technical service and support is among the top four items that make or break a purchasing deal.
5. Defined areas of responsibility and indemnification: This will require careful planning on Microsoft’s part. Existing premises-based licensing models differ according to whether or not the customer purchases their products directly from Microsoft, a reseller or an OEM hardware manufacturer. Organizations that adopt a hybrid premises/cloud offering and those that opt for an entirely hosted cloud offering will be looking more than ever before to Microsoft for guidance. Microsoft must be explicit as to what it will cover and what will be covered by OEM partners and/or host providers.

Complicating the cloud licensing models even further is the nature of the cloud itself. There is no singular cloud model. There may be multiple clouds, and they may be a mixture of public and private clouds that also link to fixed premises and mobile networks.

Among the cloud licensing questions that Microsoft must address and specifically answer in the coming months are:

• What specific pricing models and tiers will apply to SMBs, midsized companies and enterprises for hybrid and full cloud infrastructures?
• What specific guarantees if any, will it provide for securing sensitive data?
• What level of guaranteed response time will it provide for service and support?
• What is the minimum acceptable latency/response time for its cloud services?
• Will it provide multiple access points to and from the cloud infrastructure?
• What specific provisions will apply to Service Level Agreements (SLAs)?
• How will financial remuneration for SLA violations be determined?
• What are the capacity ceilings for the service infrastructure?
• What provisions will there be for service failures and disruptions?
• How are upgrade and maintenance provisions defined?

From the keynote speeches and throughout the STB Summit and TechEd conference, Microsoft’s Muglia and Wahbe both emphasized and promoted the idea that there is no singular cloud. Instead, Microsoft’s vision is a world of multiple private, public and hybrid clouds that are built to individual organizations’ specific needs.

That’s all well and good. But in order for this strategy to succeed, Microsoft will have to take the lead on both the technology and the licensing fronts. The BPOS and Azure product managers and marketers should actively engage with the Worldwide Licensing Program (WWLP) managers and construct a simplified, straightforward licensing model. We recognize that this is much easier said than done. But customers need and will demand transparency in licensing pricing, models and T&Cs before committing to the Microsoft cloud.


Virtualization Deployments Soar, But Companies Prefer Terra Firma to Cloud for now

The ongoing buzz surrounding cloud computing – particularly public clouds – is far outpacing actual deployments by mainstream users. To date only 14% of companies have deployed or plan to deploy a private cloud infrastructure within the next two calendar quarters.
Instead, as businesses slowly recover from the ongoing economic downturn, their most immediate priorities are to upgrade legacy desktop and server hardware and outmoded applications, and to expand their virtualization deployments. Those are the results of the latest ITIC 2010 Virtualization and High Availability survey, which polled C-level executives and IT managers at 400 organizations worldwide.
ITIC partnered with Stratus Technologies and Sunbelt Software to conduct the Web-based survey, which comprised multiple choice questions and essay comments. ITIC also conducted first-person interviews with over two dozen end users to obtain anecdotal responses on the primary accelerators or impediments to virtualization, high availability and reliability, and cloud computing. The survey also queried customers on whether or not their current network infrastructure and mission critical applications were adequate to handle new technologies and the increasing demands of the business.
The survey showed that, for now at least, although many midsized and large enterprises are contemplating a move to the cloud – especially a private cloud infrastructure – the technology and business model are still not essential for most businesses. Some 48% of survey participants said they have no plans to migrate to a private cloud architecture within the next 12 months, while another 33% said their companies are studying the issue but have no firm plans to deploy.

The study also indicates that private cloud deployments are outpacing public cloud deployments by a 2-to-1 margin. However, before businesses can begin to consider a private cloud deployment, they must first upgrade the “building block” components of their existing environments, e.g., server and desktop hardware, WAN infrastructure, storage, security and applications. Only 11% of businesses described their server and desktop hardware as leading edge or state-of-the-art. And just 8% of respondents characterized their desktop and application environment as leading edge.

The largest proportion of the survey participants – 52% – described their desktop and server hardware as working well, while 48% said their applications were up-to-date. However, 34% acknowledged that some of their server hardware needed to be updated. A higher percentage of users – 41% – admitted that their mission critical software applications were due to be refreshed. And a small 3% minority said that a significant portion of both their hardware and mission critical applications were outmoded and adversely impacting the performance and reliability of their networks.

Based on the survey data and customer interviews, ITIC anticipates that from now until October, companies’ primary focus will be on infrastructure improvements.

Reliability and Uptime Lag

The biggest surprise in this survey, compared with the 2009 High Availability and Fault Tolerant survey that ITIC and Stratus conducted nearly one year ago, was the decline in the number of survey participants who said their organizations required 99.99% uptime and reliability. In this latest survey, the largest portion of respondents – 38%, or nearly 4 out of 10 businesses – said that 99.9% uptime, the equivalent of 8.76 hours of downtime per server, per year, was the minimum acceptable amount for their mission critical line of business (LOB) applications. This is more than three times the 12% of respondents who said that 99.9% uptime was acceptable in the prior 2009 survey. Overall, 62%, or nearly two-thirds of survey participants, indicated their organizations are willing to live with higher levels of downtime than were considered acceptable in previous years.
Some 39% of survey respondents – almost 4 out of 10 – indicated that their organizations demand high availability, which ITIC defines as four nines of uptime or greater. Specifically, 27% said their organizations require 99.99% uptime; another 6% need 99.999% uptime; and a 3% minority require even higher levels of availability.
The customer interviews found that the ongoing economic downturn, aged and aging network infrastructures (server and desktop hardware and older applications), layoffs, hiring freezes and the new standard operating procedure (SOP) of “do more with less” have made 99.9% uptime more palatable than in previous years.
Those firms that do not keep track of the number and severity of their outages have no way of gauging the financial and data losses to the business. Even a cursory comparison indicates substantial cost disparities between 99% uptime and 99.99% uptime. The monetary costs, business impact and risks associated with downtime will vary by company, as will the duration and severity of individual outage incidents. However, a small or midsize business that estimates the hourly cost of downtime to be a very conservative $10,000 per hour would potentially incur losses of $876,000 per year at a data center with 99% application availability (87.6 hours of downtime). By contrast, a company whose data center operations deliver 99.99% uptime would incur losses of roughly $8,760, or one-hundredth that of a firm with conventional 99% availability.
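To make the downtime arithmetic above easy to reproduce, here is a minimal sketch (in Python) that converts an availability percentage into annual downtime hours and an estimated annual loss. The $10,000-per-hour figure is simply the conservative example used above; substitute your own organization's estimate.

# Rough downtime-cost estimator using a flat cost per hour of downtime.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability_pct):
    """Annual downtime, in hours, implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

def annual_downtime_cost(availability_pct, cost_per_hour=10000):
    """Estimated annual downtime cost at a flat hourly rate."""
    return annual_downtime_hours(availability_pct) * cost_per_hour

for pct in (99.0, 99.9, 99.99, 99.999):
    hours = annual_downtime_hours(pct)
    cost = annual_downtime_cost(pct)
    print(f"{pct}% uptime: {hours:.2f} hours of downtime, about ${cost:,.0f} per year")

# 99.0% uptime: 87.60 hours of downtime, about $876,000 per year
# 99.9% uptime: 8.76 hours of downtime, about $87,600 per year
# 99.99% uptime: 0.88 hours of downtime, about $8,760 per year
# 99.999% uptime: 0.09 hours of downtime, about $876 per year

The real work, of course, is arriving at an honest per-hour cost figure for your own business; the conversion itself is trivial.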
Ironically, the need for rock-solid network reliability has never been greater. Web-based applications, new technologies like virtualization and Service Oriented Architecture (SOA), and emerging public or shared cloud computing models are all designed to maximize productivity. But without the proper safeguards, these new datacenter paradigms may raise the risk of downtime. The Association for Computer Operations Management/Data Center Institute (AFCOM) forecasts that one in four data centers will experience a serious business disruption over the next five years.
At the same time, customer interviews revealed that over half of all businesses (56%) lack the budget for high availability technology. Another ongoing challenge is that 78% of survey participants acknowledged that their companies either lack the skills or simply do not attempt to quantify the monetary and business costs associated with hourly downtime. The reasons for this are well documented. Some organizations don’t routinely do this, and those that attempt to calculate costs and damages run into difficulties collecting data because the data resides with many individuals across the enterprise. Inter-departmental communication, cooperation and collaboration are sorely lacking at many firms. Only 22% of survey respondents were able to assign a specific cost to one hour of downtime, and most of them gave conservative estimates of $1,000 to $25,000 for a one-hour network outage. Only 13% of that 22% indicated that their hourly losses would reach $175,000 or more.

Users Confident and Committed to Virtualization Technology
The news was more upbeat with respect to virtualization – especially server virtualization deployments. Organizations are both confident and comfortable with virtualization technology.
72% of respondents indicated the number of desktop and server-based applications demanding high availability has increased over the past two years. The survey also found that a 77% majority of participants run business critical applications on virtual machines. Not surprisingly, the survey data showed that virtualization usage will continue to expand over the next 12 months. A 79% majority (approximately eight out of 10 respondents) said the number of business critical applications running on virtual machines and virtual desktops will increase significantly over the next year. Server virtualization is very much a mainstream and accepted technology, and the responses to this question indicate increased adoption as well as confidence. Nearly one-quarter of the respondents (24%) say that more than 75% of their production servers are VMs. Overall, 44% of respondents say that more than 50% of their servers are VMs. However, none of the survey participants indicate that 100% of their servers are virtualized. Additionally, only 6% of survey respondents …

Virtualization Deployments Soar, But Companies Prefer Terra Firma to Cloud for now Read More »

Networks Without Borders Raise Security, Management Issues

“Networks without Borders” are rapidly becoming the rule rather than the exception.
The demand for all-access, all the time, along with the rapid rise in remote, telecommuting, part-time and transient workers, has rendered network borders obsolete and made networks extremely porous. Today’s 21st century networks more closely resemble sieves than citadels.
Gone are the days when employees and data resided safely behind the secure confines of the firewall; when workers clocked in promptly at 9:00 a.m., sat stationary in front of their computers, never accessed the Internet, logged off at 6:00 p.m. and stayed offline until the next workday.
Today’s workers are extremely mobile, always connected, and demand 24×7 access to the corporate network, applications and data via a variety of device types, from desktops to smart phones, irrespective of location. ITIC survey data indicates that workers at 67% of all businesses worldwide travel, telecommute and log in remotely at least several days a month. At present, one out of eight employees uses their personal computers, notebooks and smart phones to access corporate data.
From an internal perspective, the ongoing economic downturn has resulted in layoffs, hiring freezes, budget cuts and less money and time available for IT training and certification. At the same time, the corporate enterprise network and applications have become more complex. IT departments face increasing pressure to provide more services with fewer resources. Another recent ITIC survey of 400 businesses found that almost 50% of all businesses have had budget cuts and 42% have had hiring freezes. An overwhelming 84% majority of IT departments just pick up the slack and work longer hours!
External pressures abound as well. Many businesses have business partners, suppliers and customers who similarly require access. Additionally, many organizations employ outside consultants and temporary and transient workers who need access to the corporate network from beyond the secure confines of the firewall.
This type of on-demand, dynamic access is distinctly at odds with traditional security models. The conventional model takes a moat-and-drawbridge approach: contain and lock down data behind the safety of the firewall. IT managers have been trained to limit access, rights and privileges, particularly with respect to transient workers, outside consultants and remote and telecommuting workers. And who can blame them? The more network access that is allowed, the greater the risk of litigation, non-compliance and compromising the integrity of the corporate network and data.
Providing secure, ubiquitous access to an array of mobile and home-based employees, business partners, suppliers, customers and consultants who need permanent or temporary access to the network is a tedious and time-consuming process. It necessitates constant vigilance on the part of the IT department to monitor and provision the correct access rights and privileges.
The conundrum for IT departments is to easily, quickly and cost effectively provision user account access while preserving security and maintaining licensing compliance. The emerging Virtual Desktop Infrastructure (VDI) technology, in which users remotely control a desktop running on a server, can address some of these issues, but VDI doesn’t solve all the problems.
An intriguing alternative to VDI is a nascent software application from MokaFive, which is designed specifically to plug the holes in the so-called “Porous Enterprise.” MokaFive, based in Redwood City, California, was founded in 2005 by a group of Stanford University engineers specifically to enable IT departments to swiftly provision network access without the cost and complexity of VDI solutions. MokaFive is not the only vendor exploring this market; its competitors include VMware (via the Thinstall acquisition), Microsoft (via the Kidaro acquisition), LANDesk and Provision Networks. However, the MokaFive offering is, to date, the only “pure play” offering that enables organizations to provision a secure desktop environment on the fly to individual users rather than just to an entire group.
The MokaFive Suite is actually a set of Desktop-as-a-Service facilities that are operating system, hardware and application agnostic. MokaFive’s desktop management features enable IT administrators to centrally create, deliver, secure and update a fully contained virtual environment, called a LivePC, for thousands of users. Contract workers can log on via Guest Access; there is no need for the IT department to specially provision them. The MokaFive Suite facilitates ubiquitous access to email, data and applications irrespective of location, device type (e.g., Windows and Macintosh) or the availability of a hard-wired network connection.
I discussed the product with several IT executives and administrators who immediately and enthusiastically grasped the concept.
“This is a very cool idea,” says Andrew Baker, a 20-year veteran VP of IT and security who has held those positions at a variety of firms including Bear Stearns, Warner Media Group and The Princeton Review. “The most tedious aspect of configuring a worker’s experience is the desktop,” he says. Typically the IT manager must physically configure the machine, set up the access rights, privileges and security policies, and deploy the correct applications. This is especially problematic and time consuming given the increasing number of mobile workers and transient workforces. The other issue is the constant need to re-provision the desktop configuration to keep it up to date, Baker says. The MokaFive Suite, he says, “saves precious time and it solves the issue of the disappearing network perimeter. I love the idea of being able to be secure, platform agnostic and being able to support multiple classes of workers from a central location.”
MokaFive’s LivePC images run locally, so end-users simply download their secure virtual desktop via a Web link, and run it on any computer (Macintosh or Windows). IT administrators apply updates and patches to a single golden image and MokaFive distributes the differentials to each LivePC. The entire process is completed in minutes by a single IT administrator. Once the MokaFive LivePC link is up and published, users are up and running regardless of whether it’s one person or 100 people. The traditional method of physically provisioning an asset can involve several IT managers and take anywhere from two days to a couple of weeks. It involves procurement, imaging, testing, certification and delivery of the device to remote workers. Baker estimates that MokaFive could cut administration and manpower time by 30% to 60% depending on the scope of the company’s network.
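The golden-image model described here (a single master image updated centrally, with only the differences pushed out to each LivePC) can be illustrated with a simple block-level diff. The following Python sketch is a generic illustration of the concept, assuming 1 MiB blocks and SHA-256 hashes; it is not MokaFive’s actual implementation.

import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB blocks (assumed granularity for this sketch)

def block_hashes(image):
    """Hash every fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]

def build_patch(old_image, new_image):
    """Return only the blocks that changed between the old and new golden image."""
    old_h, new_h = block_hashes(old_image), block_hashes(new_image)
    return {i: new_image[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            for i in range(len(new_h))
            if i >= len(old_h) or new_h[i] != old_h[i]}

def apply_patch(old_image, patch, new_size):
    """Rebuild the updated image on a client from its old copy plus the patch."""
    image = bytearray(old_image.ljust(new_size, b"\x00")[:new_size])
    for i, block in patch.items():
        image[i * BLOCK_SIZE:i * BLOCK_SIZE + len(block)] = block
    return bytes(image)

In this scheme the administrator would run build_patch once against the updated master, and each client would run apply_patch against its local copy, so only the changed blocks ever cross the wire.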
MokaFive also requires less of a monetary investment than rival VDI solutions and doesn’t require IT administrators to learn a new skill set, claims MokaFive VP of marketing, Purnima Padmanabhan.
“VDI does enable companies to ramp up and quickly provision and de-provision virtual machines (VMs); however, the IT department is still required to build out fixed server capacity for its transient workforce,” Padmanabhan says. Oftentimes, the additional capacity ends up going to waste. “The whole point of contractors is to dial in, dial up and dial down expenses, and that’s what MokaFive does,” she adds.
Steve Sommer, president of SLS Consulting in Westchester, New York, agrees. Sommer spent 25 years simultaneously holding the positions of CIO and CTO at Hughes, Hubbard & Reed, a NYC law firm with 1,200 end users (including 300 attorneys) in a dozen remote locations. Sommer observes that corporate politics frequently determine access policy at the expense of security. “A company’s knowledge workers – lawyers, doctors, software developers – who drive large portions of revenue will demand all-access, all the time and security be damned. In the past it was an either/or proposition,” Sommer says.
With the MokaFive desktop-as-a-service approach, all the data is encapsulated, encrypted and controlled. Organizations now have the option to manage the permanent workforce, as well as temporary contractors and consultants who use their own personal devices, quickly and easily. IT managers can provision a virtual machine (VM) on top of MokaFive or give the remote user or contract worker an HTML link which contains the MokaFive LivePC. The end user clicks on the link to get a completely encapsulated VM environment, which is controlled through policies using MokaFive. It can be completely encrypted with 256-bit AES encryption. The entire environment is managed, contained and kept updated with the latest passwords, connections, application versions and patches. When the user or contract worker leaves the company, the IT department issues a root kill signal and all the licenses are retrieved and called back, ensuring compliance.
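To give a sense of what 256-bit AES encryption of a desktop image means in practice, the short Python sketch below uses the widely available cryptography library in authenticated AES-GCM mode to seal and unseal an image blob. It illustrates the general technique only; it says nothing about how MokaFive actually implements its encryption.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_image(image_bytes, key):
    """Seal an image blob: returns nonce + ciphertext (with built-in auth tag)."""
    nonce = os.urandom(12)  # unique nonce per encryption
    return nonce + AESGCM(key).encrypt(nonce, image_bytes, None)

def decrypt_image(blob, key):
    """Reverse of encrypt_image; raises InvalidTag if the blob was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
sealed = encrypt_image(b"virtual desktop image bytes", key)
assert decrypt_image(sealed, key) == b"virtual desktop image bytes"

The authenticated mode matters here: a tampered image fails to decrypt at all rather than silently loading corrupted or malicious contents.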
“MokaFive is a boon for IT departments and end users alike; no more worrying about provisioning and versioning. I love the fact that it’s application, hardware and operating system agnostic,” Sommer says. “And it also has distinct time-saving benefits for the end user, or transient workforce. They can take their work with them wherever they are and they don’t have to worry about borrowing a notebook or PDA and ensuring that it’s properly configured with the correct version.”
MokaFive already has several dozen customers and prospects and is gaining traction in a number of vertical markets, including financial services, legal, healthcare, government and education. Given the burgeoning popularity and mainstream adoption of VDI, the MokaFive Suite represents a viable alternative for organizations that want a cost-effective and non-disruptive solution that lets IT departments deliver fast, efficient and secure network access. It’s definitely worth exploring, and MokaFive offers free trials for interested parties from its website.

Networks Without Borders Raise Security, Management Issues Read More »
