Cloud Computing: Just When You Thought It Was Safe

July 1, 2011 · By David
Grazed from Wall Street & Technology. Author: Larry Tabb.

Amazon crashes and clouds burn. On April 21, Amazon.com had problems with one or more of its shared-services data centers — in other words, its cloud was grounded. And this wasn’t the first time.

While we don’t know the full extent of the damage caused by Amazon’s cloud failure, for the firms that rely on the Amazon infrastructure it wasn’t pretty: applications, systems and websites went down, leading to frustrated users, angry customers and missed sales. That said, shared services (that is, clouds) are the future of computing. For years, enterprise software has been in decline. Firms have been thinking about how to transition from proprietary to open; from single-tenant to multitenant; and from individually deployed to the cloud, where not only the data is virtualized but the processing is as well. This was the idea behind grid and utility computing, and now the cloud.

The problem is, while shared infrastructure is the future, systems still go down. That is why most Wall Street firms are not putting mission-critical services in shared clouds; rather, they are leveraging private cloud infrastructures for their too-important-to-fail applications. But this will change.

When I first started writing about grid technology, compute-intensive applications were beginning to share processing power within a single data center. Then, as processors improved, our grids became data-constrained: the ability to compute was limited by our ability to move data into and out of the database. From compute grids we moved to data grids, where the database became virtual and we leveraged distributed memory, committing to the database only when transactions needed to be permanently recorded.
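A minimal sketch of that write-behind idea, in Python, with hypothetical names (DataGridCache is not any particular grid product's API): reads and writes are served from memory, and the database is touched only when the work must be made permanent.

```python
import sqlite3

class DataGridCache:
    """Toy write-behind cache: memory first, database only on commit."""

    def __init__(self, db_path=":memory:"):
        self._memory = {}      # the in-memory "data grid" tier (one process here)
        self._dirty = set()    # keys changed since the last durable commit
        self._db = sqlite3.connect(db_path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS positions (key TEXT PRIMARY KEY, value REAL)"
        )

    def put(self, key, value):
        # Fast path: update in-memory state only, no database I/O.
        self._memory[key] = value
        self._dirty.add(key)

    def get(self, key):
        # Serve reads from memory; fall back to the database on a miss.
        if key in self._memory:
            return self._memory[key]
        row = self._db.execute(
            "SELECT value FROM positions WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None

    def commit(self):
        # Only now do we pay the cost of persistence, once per dirty key.
        for key in self._dirty:
            self._db.execute(
                "INSERT OR REPLACE INTO positions (key, value) VALUES (?, ?)",
                (key, self._memory[key]),
            )
        self._db.commit()
        self._dirty.clear()

cache = DataGridCache()
cache.put("IBM", 1500.0)   # stays in memory
cache.put("IBM", 1250.0)   # still no database I/O
cache.commit()             # one durable write for the final state
print(cache.get("IBM"))    # 1250.0
```

In a real grid the memory tier would be replicated across many nodes; the point of the sketch is only that durability is deferred and batched rather than paid on every update.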

Now we are at the next level, where compute and data virtualization are beginning to allow applications themselves to be virtualized: various applications, objects, services and data can be automatically sized, moved and decommissioned as needed without affecting the user experience. Currently this works much better for stateless computing, such as web servers, data distribution and non-transactional tasks. Once you start processing transactions, especially financial transactions, the need to know where a transaction stands at any given point in the process becomes critical; paying twice for the same thing or executing a trade multiple times can quickly hurt service levels, erode brand quality and damage the financials.
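One common defense against "paying twice for the same thing" when a request is retried or re-routed during a failover is an idempotency key. The Python sketch below is a generic illustration under that assumption; PaymentService and its method names are hypothetical, not drawn from the article or any particular system.

```python
import uuid

class PaymentService:
    """Toy idempotent processor: a client-supplied key guarantees that a
    retried request (e.g., after a node failover) is executed only once."""

    def __init__(self):
        self._processed = {}   # idempotency key -> result of the first run

    def pay(self, idempotency_key, account, amount):
        if idempotency_key in self._processed:
            # Duplicate delivery: return the original result, charge nothing.
            return self._processed[idempotency_key]
        # First (and only) real execution of this payment.
        result = {"account": account, "amount": amount, "status": "charged"}
        self._processed[idempotency_key] = result
        return result

service = PaymentService()
key = str(uuid.uuid4())                    # generated once by the caller
first = service.pay(key, "ACCT-1", 100.0)
retry = service.pay(key, "ACCT-1", 100.0)  # network retry after a timeout
assert first is retry                      # the customer is charged exactly once
```

In production the processed-key store would itself have to be durable and shared across nodes, which is exactly the kind of state that makes transactional workloads harder to push into a shared cloud than stateless ones.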

But things are getting better. We are seeing more and more transactional services being offered in a cloud. Even mission-critical (private) cloud solutions are available, including execution and order management systems.

The scalability, ease of provisioning, simplicity of deployment, facility of maintenance and consistency of a single code base afforded by the cloud, as well as the reduced costs of a services-based model, make the cloud a very compelling model. The challenge is to keep it running. While it might be OK for Amazon.com services to be unavailable for a day, a bulge-bracket bank doesn’t want that type of variability with its processing infrastructure. So for now, while it’s "to the cloud" for non-transactional computational infrastructure, it’s still "to the private cloud" for transactions more dear.