I started writing about digital transformation and cloud switching costs six years ago. Cloud services have matured a lot since then. IT customers have started looking at multi-cloud strategies to manage cloud costs. But are enterprise IT customers prepared for a multi-cloud world? Is multi-cloud really even a thing?
Our benchmarks for utility services are modern electrical and water utilities. Both electricity and water need a physical connection to a customer location. Large water mains and high voltage electrical lines bring water and power close to the customer. A “last mile” service spans a short distance between the high-volume utility and the customer’s property. For water and electrical utilities, the last mile is typically a local service run by a municipality or utility district.
For enterprise cloud services, an Internet service provider (ISP) acts as the last mile between the open internet and a customer location. The big difference between cloud and physical utilities is simple:
- Physical utilities deliver an identical, standardized supply at the on-premises (“on-prem”) demarcation point.
- Cloud services are identical only within a specific cloud service provider (CSP).
An electrical or water utility typically doesn’t supply any onsite infrastructure. It delivers a utility to a demarcation point at a customer’s location. There are many building codes and standards in place to ensure that customers can take a utility supply from an on-prem demarcation point and deliver it locally for specific applications.
A customer still must install and maintain plumbing and electrical infrastructure to meet specific application needs. Electricity needs step-down transformers, circuit breakers, power conditioners, fault detectors, wiring, and outlets. Water needs hot, cold, grey water, and waste water lines, with flow sensors, cut-off valves, filters, spigots, outflows, and drains.
So far, enterprise IT is no different from a physical utility. On-prem IT infrastructure lets local applications and end-users access remote Internet and cloud services.
However, CSP services are not interchangeable, at least not yet. Today, customers porting in-house applications to “the cloud” must aim at a specific CSP. Customers must configure CSP-specific compute and storage instances with CSP-specific networking options. Developers must write separate code for each CSP: Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud, Microsoft Azure, and so on.
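This lock-in can be sketched in code. The adapter functions below are invented stubs, not real SDK calls: the point is that the same logical operation needs a separately written, tested, and maintained adapter for every CSP a customer targets.

```python
# Hypothetical sketch: the same logical operation ("store an object")
# needs a separate adapter per CSP because each provider's SDK and
# service model differs. Adapter bodies are illustrative stubs.

def store_object_aws(bucket: str, key: str, data: bytes) -> str:
    # In real code: an S3 client, AWS credentials, region configuration.
    return f"aws://{bucket}/{key}"

def store_object_gcp(bucket: str, key: str, data: bytes) -> str:
    # In real code: a Cloud Storage client and a GCP service account.
    return f"gcp://{bucket}/{key}"

def store_object_azure(container: str, blob: str, data: bytes) -> str:
    # In real code: a Blob Storage client and a connection string.
    return f"azure://{container}/{blob}"

# Each additional CSP means one more adapter to write and maintain.
ADAPTERS = {
    "aws": store_object_aws,
    "gcp": store_object_gcp,
    "azure": store_object_azure,
}

def store_object(csp: str, bucket: str, key: str, data: bytes) -> str:
    return ADAPTERS[csp](bucket, key, data)
```

The dispatch table hides the differences from callers, but someone still has to build and validate every adapter — which is exactly the porting cost discussed next.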
To enable a multi-cloud strategy, a customer must port each application to multiple clouds. Depending on the application, porting to multiple clouds’ native infrastructure-as-a-service (IaaS) may be so expensive that it negates any cost savings a customer might expect from pricing arbitrage between CSPs. It may be cheaper to stick with one port of an application to one CSP, even if that CSP is more expensive than alternatives.
That’s where virtualization helps. Virtualization is a near-term, quick-and-easy “lift and shift” approach to migrating applications to the cloud. Developers port the operating system (OS) an application runs on to a hypervisor, or they port the application to an OS that already runs on a hypervisor.
Hypervisors require few changes to application code. However, hypervisor image management and network configuration can be complex and hard to manage across multiple CSPs. Image management includes selecting, validating, and updating the guest OS required to run an application in a virtual image. And the guest OS still needs to know about the hardware underneath the hypervisor, which brings its own complexities.
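To see why image management gets hard, consider that the “same” validated guest OS image has a different identifier on every CSP, and often in every region. The identifiers below are invented placeholders, but the shape of the problem is real: the image catalog grows multiplicatively with clouds times regions.

```python
# Hypothetical sketch: one validated guest image per (csp, region) pair.
# Every entry must be selected, validated, and patched independently.
# All identifiers are invented placeholders.

GUEST_IMAGES = {
    ("aws", "us-east-1"):    "ami-0abc1234",
    ("aws", "eu-west-1"):    "ami-0def5678",
    ("gcp", "us-central1"):  "projects/demo/global/images/ubuntu-2204-v1",
    ("azure", "eastus"):     "demo-gallery/images/ubuntu-2204-v1",
}

def image_for(csp: str, region: str) -> str:
    """Look up the validated guest image for one cloud and region."""
    try:
        return GUEST_IMAGES[(csp, region)]
    except KeyError:
        raise LookupError(f"no validated guest image for {csp}/{region}")
```

Adding one more CSP, or one more region, adds rows to this table — and every row is a separate validation and patching obligation.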
Bare-metal hypervisors, such as Microsoft Hyper-V, VMware ESXi, or the open-source KVM, are easier to manage and scale in public cloud environments than hosted hypervisors. If enterprise IT customers are not willing to spend a lot of time and money re-validating applications, bare-metal hypervisors will get them close to a utility model of computing, but not quite there.
Containers offer a way out of complex image management and hardware dependence. Containers provide the same isolation as an OS running in a hypervisor, but the virtualization happens at the OS level, with the runtime environment and OS features bundled together. When a developer writes code to run in a container, it will run in that container no matter what hardware the container runs on, and regardless of updates to the container runtime underneath it. Open source Docker is the most popular container format, but there are many others, including Linux-native LXC, Canonical’s LXD, and CoreOS’s rkt (pronounced “rocket”).
Applications typically need to be redesigned (“refactored”), rewritten, and then revalidated to take full advantage of containers (“containerization”). Containerization is therefore a major software development investment.
Containerized applications will run on any CSP’s compute instances that support that type of container. There may be small orchestration (scheduling) differences between clouds, but multi-cloud orchestration is a much easier problem to solve than verifying that a virtualized application will run reliably on multiple clouds.
Containerization can bring enterprise IT customers very close to a utility model of multi-cloud computing. It is probably worth refactoring an application if the application will be deployed for at least another decade or two.
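The decade-or-two rule of thumb can be expressed as simple payback arithmetic. The figures below are invented for illustration: refactoring pays off only when multi-cloud savings accumulated over the application's remaining deployment lifetime exceed the one-time refactoring cost.

```python
# Hypothetical payback check for a containerization ("refactoring") project.

def refactor_pays_off(refactor_cost: float,
                      annual_savings: float,
                      deployment_years: int) -> bool:
    """True if lifetime multi-cloud savings exceed the refactoring cost."""
    return annual_savings * deployment_years > refactor_cost

# Invented figures: a $2M refactor against $150k/yr of multi-cloud savings.
print(refactor_pays_off(2_000_000, 150_000, 10))  # $1.5M < $2M -> False
print(refactor_pays_off(2_000_000, 150_000, 20))  # $3.0M > $2M -> True
```

Under these assumed numbers, a ten-year horizon falls short but a twenty-year horizon clears the bar — which is exactly the “decade or two” intuition.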