Wednesday, July 22, 2009

4 1/2 Ways to Deal With Data During Cloudbursts




Cloudbursting is an approach to handling spikes in demand that overwhelm enterprise computing resources by acquiring additional resources from a cloud services provider. It’s a little like having unexpected houseguests and not enough beds for them to sleep in; some of them will have to be put up in a hotel. While such “peaking through the clouds” promises to maximize agility while minimizing cost, there's the nagging question of what exactly to do about the data such distributed applications require or generate. There are several strategies for dealing with cloudbursts, each of which has different implications for cost, performance, and architecture. One of them may fit both your application's unique requirements and your enterprise's overall business model.

1) Independent Clusters: In this strategy, there are minimal communication and data-sharing requirements between the application instances running in the enterprise and cloud data centers. Global load balancers direct requests to either location, but the application instances running in the cloud do not need to communicate (much) with the ones in the enterprise data center. Since these load balancers are probably already in place, there is no significant marginal infrastructure cost to enable cloudbursting, just a requirement to keep contextual information such as resource allocation current. Applications that involve data coming to and from users that doesn't need to be saved between sessions, such as generating downloadable videos from uploaded photos, may not require much of a connection between the enterprise and the cloud.
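To make the routing logic concrete, here is a minimal sketch of the independent-clusters pattern: a global load balancer keeps sending requests in-house until the enterprise site is saturated, then spills the overflow to the cloud. The capacity figure and names are illustrative assumptions, not any particular vendor's API.

```python
# Spill-over routing sketch: enterprise first, cloud on overflow.
# ENTERPRISE_CAPACITY is an assumed figure for illustration only.

ENTERPRISE_CAPACITY = 1000   # concurrent sessions the enterprise site can absorb
enterprise_load = 0

def route_request() -> str:
    """Return the site that should handle the next incoming request."""
    global enterprise_load
    if enterprise_load < ENTERPRISE_CAPACITY:
        enterprise_load += 1
        return "enterprise"      # normal operation: stay in-house
    return "cloud"               # cloudburst: overflow to the provider

if __name__ == "__main__":
    # Simulate a demand spike of 1,500 simultaneous sessions.
    placements = [route_request() for _ in range(1500)]
    print(placements.count("enterprise"), "in-house,",
          placements.count("cloud"), "burst to cloud")
```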


This architecture provides the best economics, but doesn't cover all situations, since there may be data in the enterprise data center that needs to be accessed by the cloud-resident application, or new data may be acquired or produced as the cloud-based instances run, which must then be consolidated with what’s in the enterprise data center.

2) Remote Access to Consolidated Data: The easiest approach to accessing and updating enterprise data may be for application instances running in the cloud to access a single-instance data store. The viability of this approach depends on the pattern and intensity of reads and writes from the cloud data center to the enterprise, and on the bandwidth, latency, and protocol support of the data-networking or storage-networking approach used to connect the cloud app to the enterprise-based data, whether it be block-oriented, network-attached, content-addressed, or simply a database server.
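A quick back-of-the-envelope check shows why the read/write pattern matters so much here. This sketch assumes a synchronous, one-request-at-a-time access pattern and illustrative round-trip times (0.5 ms inside the data center, 40 ms from cloud to enterprise); your numbers will differ.

```python
# Latency bounds on serial, synchronous access to a remote data store.
# Both RTT figures are illustrative assumptions.

LAN_RTT_S = 0.0005   # round trip within the enterprise data center (assumed)
WAN_RTT_S = 0.040    # round trip from cloud app to enterprise store (assumed)

def max_sync_ops_per_second(rtt_s: float) -> float:
    """Upper bound on serial, latency-bound operations per second."""
    return 1.0 / rtt_s

print(f"local:  {max_sync_ops_per_second(LAN_RTT_S):,.0f} ops/s")
print(f"remote: {max_sync_ops_per_second(WAN_RTT_S):,.0f} ops/s")
# An I/O-bound app that issued 2,000 serial reads/s in-house would be
# capped at 25 reads/s over the WAN; batching, caching, or moving the
# data (strategy 3) would be needed instead.
```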


3) On-Demand Data Placement: Placing cloud data centers on a global network backbone can improve performance and reduce latency, but if I/O intensity and/or network latency are too high for remote access, then any needed data that isn't already in the cloud must be placed there at the beginning of the cloudburst, and any changes must be consolidated in the enterprise store at the end of the cloudburst. The question is: "How much data needs to get where, and how quickly?"

A large data set may be required, either because all the data is needed for computation (such as with seismic or protein-folding analysis), or because the pattern of reads is unpredictable and the data therefore needs to be present "just in case." If so, even with fast file transfer techniques, either there will be delays to begin cloudbursting (from trying to push a lot of data through a small pipe or by using physical disk delivery); or a large-bandwidth pipe must be pre-positioned to quickly migrate the data, impacting cost; or the industry will need to move to more of an on-demand, pay-per-use approach for network capacity. Although progress is being made in this last model, the traditional challenge has been the high capital cost of high-bandwidth, last-mile access, which may involve digging trenches, laying fiber, deploying optical transport equipment, and paying for rights of way and rights of entry; these costs can run into the millions of dollars.
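The "how much, how fast" question reduces to simple arithmetic. This worked example stages an assumed 10 TB data set into the cloud at a few line rates; both the data set size and the rates are illustrative assumptions.

```python
# Time to stage a data set into the cloud at various line rates.
# The 10 TB size and the line rates are illustrative assumptions.

DATASET_TB = 10
DATASET_BITS = DATASET_TB * 1e12 * 8

for label, bps in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
    hours = DATASET_BITS / bps / 3600
    print(f"{label:>8}: {hours:8.1f} hours")

# 100 Mbps:    222.2 hours  -> roughly nine days; the spike is over first
#   1 Gbps:     22.2 hours
#  10 Gbps:      2.2 hours  -> but pre-positioning that pipe is costly
```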


4) Pre-positioned Data Placement: Pre-positioning the data in the cloud to support application/server cloudbursting can be effective from a performance perspective, but it adds cost, since a full secondary storage environment and a metro- or wide-area network must be deployed. This shifts the breakeven point for cloudbursting: the simple rule of thumb that bursting pays off whenever the utility premium is lower than the peak-to-average demand ratio no longer holds once these fixed costs are added.
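The shift in the breakeven point can be sketched numerically. In this toy comparison, all figures (demand levels, the utility premium, the burst duration, and the amortized storage/network fixed cost) are assumptions chosen to illustrate the mechanics, not published price points.

```python
# How pre-positioned storage shifts the cloudbursting breakeven.
# All numbers below are illustrative assumptions.

avg_demand = 100          # baseline server-equivalents (assumed)
peak_demand = 400         # spike height (assumed)
owned_cost_per_unit = 1.0 # normalized cost of an owned, always-on unit
utility_premium = 2.5     # cloud unit-hour costs 2.5x an owned unit (assumed)
burst_fraction = 0.05     # fraction of the period spent at peak (assumed)

# Option A: own enough capacity for the peak.
own_peak = peak_demand * owned_cost_per_unit

# Option B: own the average, burst the excess, pre-position the data.
burst_units = (peak_demand - avg_demand) * burst_fraction * utility_premium
storage_fixed = 60.0      # secondary storage + WAN, amortized (assumed)
burst_total = avg_demand * owned_cost_per_unit + burst_units + storage_fixed

print(f"own the peak:                 {own_peak:.1f}")
print(f"burst w/ pre-positioned data: {burst_total:.1f}")
# Without the 60-unit fixed cost, bursting wins easily (100 + 37.5 vs 400).
# The fixed storage and network costs eat into, and can erase, that margin,
# which is why the simple premium-vs-peak-ratio rule no longer suffices.
```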


4.5) BC/DR Plus Cloudbursting: If, however, the cloud location doubles as the data mirroring or replication site for Business Continuity/Disaster Recovery, then support for cloudbursting can come along for free. This may imply a requirement for bi-directional primary/secondary volumes: for example, data written at the enterprise site is replicated to the cloud, while data written in the cloud is replicated to the enterprise. And the primary/secondary volume designation must be fungible, or some sort of distributed data management, and possibly a distributed record-locking strategy, must be implemented. Technology to do this is still evolving.
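As a toy illustration of why the primary/secondary designation must be fungible, consider the sketch below. Real replication systems are far more involved; the class, method names, and two-site structure are assumptions made purely for exposition.

```python
# Toy replicated volume: writes must land on the current primary and fan
# out to all copies, and the primary role can be flipped when a burst
# moves the writers. Everything here is an expository assumption.

class ReplicatedVolume:
    def __init__(self, sites=("enterprise", "cloud"), primary="enterprise"):
        self.copies = {site: {} for site in sites}
        self.primary = primary

    def write(self, site: str, key: str, value: str) -> None:
        # If two sites could update the same record independently, their
        # copies would diverge; that is the case that demands distributed
        # data management or distributed record locking.
        if site != self.primary:
            raise RuntimeError(f"{site} is secondary; redirect to {self.primary}")
        for copy in self.copies.values():
            copy[key] = value

    def promote(self, site: str) -> None:
        """Flip the primary, e.g. when a cloudburst moves writers to the cloud."""
        self.primary = site

vol = ReplicatedVolume()
vol.write("enterprise", "order-42", "pending")   # normal operation
vol.promote("cloud")                             # burst begins
vol.write("cloud", "order-42", "shipped")        # cloud instances now write
```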

Such an approach also changes the dynamics and business models associated with cloud environments for the enterprise. For example, a number of cloud service providers have been building mega-data centers. However, if cloud data centers must be within BC/DR distance of enterprise data centers, then a greater number of smaller cloud centers may be preferable to fewer larger ones. This in turn impacts statistical multiplexing effects such as peak smoothing and utilization maximization, limiting advantages such as preferential access to cheap power and cooling. If the distance constraint is not met, longer distances must be traversed, potentially impacting application performance for highly transactional environments. Additional variations and combinations of the above are possible as well. For example, a master image of the application may be frozen and replicated to the cloud as data via a virtualization layer/container.

In any event, understanding storage options is key for two reasons. First, the business case for cloudbursting may change. Saving a couple of bucks on virtual server hours looks less attractive if wide-area storage networking and costly enterprise arrays are required. On the other hand, an optimal architecture can kill multiple birds with one stone (agility, business continuity, and cost minimization) while meeting transaction throughput and latency requirements through distributed, on-net processing. Second, different scenarios require different network interconnects between the enterprise and the cloud. The Internet alone may be fine if clusters are independent, but for most scenarios, other mechanisms such as VPNs or optical transport are likely to be required.

Joe Weinman is Strategy and Business Development VP for AT&T Business Solutions.



