In the beginning, public cloud was the only choice. If you had an existing environment (on-premises, colocated or with another web hosting company), there was no easy way to connect it to the public cloud. You could set up your own site-to-site VPN across the internet, but this came with its own challenges and limitations.
Indeed, this was all part of the cloud provider’s strategy — you had to go all in. Everything should be deployed into the provider’s public cloud and nowhere else. It made sense when cloud providers were mostly focusing on new projects and new applications, but it has proven a big challenge to migrate existing workloads or run systems in parallel.
So we started to see features released designed to connect into existing environments. These have ranged from storage appliances that cache files locally to official site-to-site VPN functionality and direct connect. These have all been advertised as helping you connect existing environments to take advantage of public cloud, and it’s true that they do help with this. However, I don’t think that’s the end goal.
There are many cases where it’s an advantage to run your own environment and hook into public cloud for burst capabilities and flexibility. A site-to-site VPN allows you to do this, as does direct connect. However, while these types of features help the cloud providers in the short term, they mean that cloud resources merely complement existing environments for very specific use cases like bursting.
Examining the product portfolios of the big cloud providers (Amazon, Google and Microsoft) suggests that this isn’t the true goal. It’s very clear that these providers are competing to run your entire workload, from email to file storage to compute to data processing — they don’t just want to act as capacity insurance. The investment in the underlying core components has allowed these supporting services to be built, and it all contributes to convincing you to move ever more workloads into the public cloud.
Strange bedfellows make sense
Partnerships such as Google teaming up with VMware seem strange on the surface. Using vCloud to tap into Google Cloud Services seems like Google’s recognition of the fact that private deployments are here to stay. In reality, it has to be viewed as part of a long game to make existing on-site users comfortable with public cloud resources, with the end goal of moving more and more workloads into the public cloud.
Getting existing deployments to accept some public cloud components is a clever way to get into an existing environment, especially in large enterprises that are already familiar with the likes of VMware. For managers and CIOs, everything appears within existing frameworks, with support, SLAs and systems they recognize. For developers and sysadmins, the familiarization strategy is the same, which is why so much effort is being put into supporting common tools. If you can use Kubernetes to manage your existing private environments, you’ll be comfortable using it to manage public cloud resources too.
Docker here, Docker there, Docker everywhere
Why do you think there’s so much support for Docker, and containers generally? It’s certainly an interesting technology, but why is every cloud provider putting in so much effort to rapidly develop specialized services to support deployment and management of containers? Because the format makes it incredibly easy to deploy anywhere — it makes your applications completely portable, so applications can be moved to the public cloud much more easily than if they were VMs. The fact that it makes it easy to move between cloud providers is just a side effect — the biggest hurdle is getting into the public cloud in the first place. Once it’s there, each vendor can battle it out to compete for the workloads.
The strategy is transition. These features are transitory, with all eyes focused on the prize of every workload deployed in the public cloud. There will always be hybrid deployments, especially in those cases where it makes more sense to run your own environment, but the goal for the vast majority is not just public cloud first, but only public cloud. This is what the cloud providers are ultimately trying to achieve. Don’t let their partner announcements and discussions around hybrid clouds fool you.
David Mytton is the founder of Server Density, a SaaS tool which helps you provision and monitor your infrastructure. Based in the U.K., David has been programming in Python and PHP for over 10 years, was one of the earliest production MongoDB users (founding the London MongoDB User Group) and was one of the founding members of the Open Rights Group.
“Public cloud providers’ end game shouldn’t surprise anyone” was originally published by Gigaom, © copyright 2015.