The appeal of cloud bursting is easy to see, but the level of complexity involved negates its benefits – at least for now. Most IT departments face occasional spikes in computing demand. Sometimes these are predictable (a sports betting site knows what to expect when the cup final is on, for example), and sometimes not.
Buying enough equipment to handle those demand peaks internally isn’t always cost-effective, and one oft-touted solution to this is cloud bursting. In this scenario, applications can automatically offload work to a public cloud service provider when their on-premises resources get stretched.
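The offloading decision described above can be sketched as a simple dispatcher that routes work to the cloud once local utilisation passes a threshold. This is an illustrative assumption of how such logic might look, not any real product’s API – the threshold value and both submit functions are hypothetical.

```python
# Minimal cloud-bursting dispatcher sketch. BURST_THRESHOLD and the two
# submit functions are illustrative placeholders, not a real API.

BURST_THRESHOLD = 0.8  # burst once local utilisation passes 80%

def submit_locally(job):
    # Placeholder: in practice this would queue the job on-premises
    return f"local:{job}"

def submit_to_cloud(job):
    # Placeholder: in practice this would call a cloud provider's API
    return f"cloud:{job}"

def dispatch(job, local_utilisation):
    """Run the job on-premises unless local resources are stretched."""
    if local_utilisation < BURST_THRESHOLD:
        return submit_locally(job)
    return submit_to_cloud(job)
```

The real difficulty, as the rest of this article argues, is not this routing logic but getting the data the bursted work depends on into the cloud in time.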
The appeal of this approach is not difficult to see, as it means enterprises are not footing the bill for surplus on-premises computing power during periods of normal operation.
This emerged as a key marketing tactic a few years ago for suppliers intent on selling the concept of hybrid cloud to the enterprise.
A leading IT firm backs this view, saying the time it takes to send a query to the cloud and back again can rule out the use of cloud bursting for applications built to expect local network response times.
To give remote applications fast access to data, IT teams may need to replicate entire datasets to the cloud. Then, if the data is used for real-time operations, they face the challenge of constantly updating the remote copy to keep it concurrent with what they hold locally.
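The constant-update problem above is usually tackled with incremental replication: ship only the records that have changed since the last sync, rather than the whole dataset. A toy sketch, assuming each record carries a monotonically increasing version number (real replication must also handle deletes, conflicts and network failures):

```python
# Toy incremental sync: copy to the remote replica only the records
# whose version is newer than the last synced version. The dict-of-
# (version, value) layout is an illustrative assumption.

def sync_changes(local, remote, last_synced_version):
    """Copy records changed since the last sync; return the new high-water mark."""
    newest = last_synced_version
    for key, (version, value) in local.items():
        if version > last_synced_version:
            remote[key] = (version, value)
            newest = max(newest, version)
    return newest  # remember this for the next incremental pass
```

Even with incremental shipping, the replica is only ever eventually consistent: between syncs, the bursted application may read stale data, which is exactly the problem the next paragraph describes.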
The problems continue to mount when attempting to cloud burst applications at the data layer. If the application is a relational database that relies on atomicity, consistency, isolation and durability (Acid) in transactions, cloud bursting gets even hairier. The remote application’s transactions must stay in sync with the on-premises one while avoiding any inconsistencies in transaction processing.
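One standard technique for keeping two transactional stores in step is two-phase commit, in which every participant first votes on a transaction and the change is applied only if all vote yes. The article does not name this approach; it is offered here as one illustrative way the on-premises and cloud copies could be kept consistent. A toy sketch (real systems also need persistent logs, locking and timeout/recovery handling):

```python
# Toy two-phase commit: apply a set of changes to every participant,
# or to none of them. Participants here are plain in-memory objects,
# an illustrative assumption rather than a real database driver.

class Participant:
    def __init__(self):
        self.data = {}
        self._staged = None

    def prepare(self, changes):
        # Phase 1: stage the changes and vote to commit
        self._staged = dict(changes)
        return True

    def commit(self):
        # Phase 2: make the staged changes durable
        self.data.update(self._staged)
        self._staged = None

    def abort(self):
        self._staged = None

def two_phase_commit(participants, changes):
    """Commit changes everywhere, or nowhere."""
    if all(p.prepare(changes) for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False
```

The cost of this coordination is precisely the wide-area round trip the article warns about: every transaction now waits on the slowest participant, which is why bursting Acid workloads is so hairy.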
None of this makes cloud bursting applications impossible, but it does make them far more complex unless the functionality has been designed in at the application layer.