Misunderstanding the operational and cost implications of cloud deployments can lead to failed IT projects and incur substantial financial and opportunity costs.
The virtualized cloud is a tempting proposition for many businesses. Promises of lower cost and more flexible alignment of infrastructure with business needs lead many business decision makers to consider cloud migration a no-brainer: a perspective encouraged by cloud vendors, who see their tools as the solution to almost every problem.
But many businesses have found that the cloud, as it exists today, is not the best solution. That’s partially the fault of cloud vendors, and partially the fault of businesses themselves — they take cloud marketing at face value and fail to seriously consider whether their use cases are the right application of cloud technology.
The result is that many attempted cloud deployments fail, and IT deployment failures have both financial and opportunity costs. According to a recent study from McKinsey and Company, 17% of IT projects fail so spectacularly that they threaten the continued existence of the company. And, more to the point, according to The Whir, 63% of cloud deployments fail when they are first launched.
This isn’t the fault of cloud and virtualization technology: it really can deliver lower costs, massive scalability, and greater agility. But the cloud is not a panacea for all IT problems. It works brilliantly in certain limited situations, and when it’s misapplied, the costs can be serious. Sometimes — most of the time — you need a truck rather than a sports car.
One of the most serious mistakes businesses make is to choose virtualized public cloud solutions because of the promised cost benefits. Yes, the cloud can sometimes — although not always by any means — save money in the short term, but focusing on simple costing is an error. For one, the cloud is a complex environment, and unless you have in-house expertise or are spending millions on vendor support, that complexity becomes a major cause of failed IT projects. Promises of spinning up servers in a few seconds are all well and good, but tying those servers into a secure network and managing integrations with existing systems can be a huge time sink.
Relentless focus on cost optimization is a mistake. If all the technological options are equal, and one costs less than another, then choosing the least expensive makes sense. But that’s not the world we live in. We live in a world that offers a mix of different technologies with unique strengths and weaknesses. Choosing the right one means considering cost as part of a range of factors that include appropriateness to business operations, complexity, available expertise, potential performance, and availability — in short, choosing the technology that’s right for the business.
Sometimes virtualized clouds are the right solution — they have made possible business models that would have been impossible a decade ago. But for many cases, especially those that depend on long-term reliability, stability, predictable scaling, and performance maximization, they are a limiting solution.
It’s not my intention to bash the virtualized cloud, although in many cases I see containers and bare metal as a more appropriate solution. My intention is to encourage business owners and IT decision makers to consider the whole range of potential options: sometimes public clouds will be the right answer, but often there are better options — some more traditional than public clouds, such as dedicated servers and bare metal clouds, and some more modern, like containers.