Keeping Your Workloads Portable across Clouds

From InformationWeek. Author: Gordon Haff.

A hybrid cloud offers the promise of uniting resources across multiple cloud providers, including those running different virtualization technology stacks. Creating a resource pool that cuts across silos increases efficiency and flexibility relative to constraining a workload to some subset of the overall capacity.

Furthermore, a hybrid cloud adds a layer of abstraction to the underlying technology platforms. Doing so can simplify management operations by masking the operational differences between different clouds or virtualization technologies.



A cloud needs to embody several characteristics in order to fulfill the above promise.


1. It needs to be able to confirm the suitability of the various possible targets. In short, it needs to check for portability. Can the workload be started on the target virtualization and cloud environment(s)?


2. It needs to be able to move, copy, and share workloads between cloud environments. This involves defining (or importing) a workload and preparing the workload to run within each target environment.


3. Finally, it needs to provide common and portable operations across cloud providers so that workloads can be deployed and managed in a consistent way regardless of where they are running, which should be transparent from the perspective of the user.


Red Hat's CloudForms Infrastructure-as-a-Service (IaaS) product, now in beta, enables portability and interoperability in several ways.


For one thing, CloudForms operates independently of the underlying technology stack. It doesn't depend on features embedded in any underlying technology platforms, such as the hypervisor. This allows it to span, for example, virtualization software from multiple vendors and public cloud providers running a variety of technology stacks.


To communicate with clouds that use different application programming interfaces (APIs) and different ways of specifying resources, CloudForms leverages Deltacloud, an incubator project within the Apache Software Foundation.


Deltacloud abstracts the models and methods used by different public and private cloud providers to a few fundamental approaches. In essence, it understands how a given cloud performs a given function such as authentication and what resources a "small" VM instance, for example, includes on a given cloud. Defined hardware profiles which specify the amount of CPU power, storage, and so forth for an instance can be mapped as closely as possible to the options offered by a given cloud.
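The profile-mapping idea can be sketched in a few lines of code. This is an illustrative example, not Deltacloud's actual implementation: the function, profile names, and resource figures below are invented for the sketch. It picks the smallest provider-offered profile that still satisfies a requested amount of CPU and memory.

```python
def closest_profile(requested, offered):
    """Pick the smallest offered profile that satisfies the request.

    requested: dict with 'cpu' (cores) and 'memory' (MB)
    offered:   dict mapping profile name -> {'cpu': ..., 'memory': ...}
    """
    # Keep only profiles large enough for the request.
    candidates = [
        (spec["cpu"], spec["memory"], name)
        for name, spec in offered.items()
        if spec["cpu"] >= requested["cpu"] and spec["memory"] >= requested["memory"]
    ]
    if not candidates:
        raise ValueError("no offered profile satisfies the request")
    # Smallest adequate profile: sort by CPU first, then memory.
    return min(candidates)[2]

# Hypothetical provider catalog resembling "small/medium/large" instance sizes.
provider_profiles = {
    "small":  {"cpu": 1, "memory": 1024},
    "medium": {"cpu": 2, "memory": 4096},
    "large":  {"cpu": 4, "memory": 8192},
}

print(closest_profile({"cpu": 2, "memory": 2048}, provider_profiles))  # medium
```

A real broker has to handle providers whose sizing axes don't line up (e.g. one cloud fixes the CPU-to-memory ratio), which is why the article says profiles are mapped "as closely as possible" rather than exactly.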


Deltacloud is written in Ruby, but all communications from clients are handled through a REST interface, a widely used style of lightweight client/server interaction. Deltacloud can then interface with existing cloud APIs through a modular chunk of code called a driver. Thus, different clients can support different computer languages and different drivers can support different target clouds without affecting the Deltacloud core. Deltacloud can also support clouds directly with native code.
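The driver idea is a classic plug-in pattern: the core codes against one interface, and each target cloud supplies its own implementation. Deltacloud itself is written in Ruby; the following is only an illustrative Python sketch, and the class and method names (including the mock drivers) are invented for the example rather than taken from Deltacloud's real API.

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Common interface the core exposes, regardless of target cloud."""

    @abstractmethod
    def list_instances(self):
        ...

    @abstractmethod
    def create_instance(self, image_id, profile):
        ...

class MockEC2Driver(CloudDriver):
    """Stand-in for a driver that would translate calls to one cloud's API."""
    def list_instances(self):
        return ["i-0001"]
    def create_instance(self, image_id, profile):
        return f"i-{image_id}-{profile}"

class MockOpenStackDriver(CloudDriver):
    """Stand-in for a driver targeting a different cloud."""
    def list_instances(self):
        return ["vm-42"]
    def create_instance(self, image_id, profile):
        return f"vm-{image_id}-{profile}"

def start_workload(driver: CloudDriver, image_id: str, profile: str) -> str:
    """Core logic stays the same whichever driver is plugged in."""
    return driver.create_instance(image_id, profile)

print(start_workload(MockEC2Driver(), "fedora", "small"))
print(start_workload(MockOpenStackDriver(), "fedora", "small"))
```

Because clients talk to the core over REST, adding a new target cloud means writing one new driver; nothing on the client side changes.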


Given how central concepts like abstraction and workload mobility are to cloud computing, we believe that interoperability between clouds will only become more important. In fact, it's not too strong to say that clouds that don't embody characteristics like portability and interoperability are not going to deliver on the promise of cloud computing that has the industry so abuzz.