Wednesday, September 10, 2008

Thoughts on Internal Cloud Computing

An article predicting that Cloud Computing will eventually spread into the enterprise, and that more and more companies will build, in their own data centers, exactly the same system environment and services that Amazon and others currently offer as external Cloud Computing services.  The content is very interesting: it mirrors what happened when the Internet first appeared and the concept and technology of the Intranet then emerged and spread through the enterprise.
The architecture this article calls an "Internal Cloud" is described as having the following three potential uses.
  • Creating a single-service utility:
    An application service within the enterprise becomes a Cloud in its own right.
  • Power-managing servers:
    An environment in which the company's general-purpose compute resources can be offered dynamically to internal users, like a Utility.
  • Using utility computing management/automation to govern virtualized environments:
    Turning the environment into a Cloud Computing one in a Utility-like fashion, including application management and provisioning to internal users.


    Creating a Generic (Internal) Cloud Architecture

    I've been taken aback lately by the tacit assumption that cloud-like (IaaS and PaaS) services have to be provided by folks like Amazon, Terremark and others. It's as if these providers do some black magic that enterprises can't touch or replicate.

    However, history's taught the IT industry that what starts in the external domain eventually makes its way into the enterprise, and vice-versa. Consider Google beginning with internet search, and later offering an enterprise search appliance. Then, there's the reverse: An application, say a CRM system, leaves the enterprise to be hosted externally as SaaS, such as SalesForce.com. But even in this case, the first example then recurs -- as SalesForce.com begins providing internal Salesforce.com appliances back to its large enterprise customers!

    I am simply trying to challenge the belief that cloud-like architectures have to remain external to the enterprise. They don't. I believe it's inevitable that they will soon find their way into the enterprise, and become a revolutionary paradigm of how *internal* IT infrastructure is operated and managed.

    With each IT management conversation I have, the concept I recently put forward becomes clearer and more inevitable: that an "internal cloud" (call it a cloud architecture or utility computing) will penetrate enterprise datacenters.

    Limitations of "external" cloud computing architectures

    Already, a number of authorities have pretty clearly outlined the pros and cons of using external service providers as "cloud" providers. For reference, there is the excellent "10 reasons enterprises aren't ready to trust the cloud" by Stacey Higginbotham of GigaOM, as well as a piece by Mike Walker of MSDN regarding "Challenges of moving to the cloud". So it stands to reason that innovation will work around these limitations, borrowing from the positive aspects of external service providers, omitting the negatives, and offering the result to IT Ops.

    Is an "internal" cloud architecture possible and repeatable?

    So here is my main thesis: that there are software IT management products available today (and more to come) that will operate *existing* infrastructure in a manner identical to the operation of IaaS and PaaS. Let me say that again -- you don't have to outsource to an "external" cloud provider as long as you already own legacy infrastructure that can be re-purposed for this new architecture.

    This statement -- and the associated enabling software technologies -- is beginning to spell the final commoditization of compute hardware. (BTW, I find it amazing that some vendors continue to tout that their hardware is optimized for cloud computing. That is a real oxymoron.)

    As time passes, cloud-computing infrastructures (ok, Utility Computing architectures if you must), coupled with the trend toward architecture standardization, will continue to push the importance of specialized HW out of the picture.
    Hardware margins will continue to be squeezed. (BTW, you can read about the "cheap revolution" in Forbes, featuring our CEO Bill Coleman).

    As the VINF blog also observed, regarding cloud-based architectures:
    You can build your own cloud, and be choosy about what you give to others. Building your own cloud makes a lot of sense. It's not always cheap, but it's the kind of thing you can scale up (or down) with a bit of up-front investment. In this article I'll look at some of the practical, more infrastructure-focused ways in which you can do so.

    Your "cloud platform" is essentially an internal shared services system where you can actually and practically implement a "platform" team that operates and capacity plans for the cloud platform; they manage its availability and maintenance day-day and expansion/contraction.
    Even back in February, Mike Nygard observed reasons and benefits for this trend:
    Why should a company build its own cloud, instead of going to one of the providers?

    On the positive side, an IT manager running a cloud can finally do real chargebacks to the business units that drive demand. Some do today, but on a larger-grained level... whole servers. With a private cloud, the IT manager could charge by the compute-hour, or by the megabit of bandwidth. He could charge for storage by the gigabyte, and with tiered rates for different availability/continuity guarantees. Even better, he could allow the business units to do the kind of self-service that I can do today with a credit card and The Planet. (OK, The Planet isn't a cloud provider, but I bet they're thinking about it. Plus, I like them.)
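
    As a rough illustration of the per-unit chargeback idea in the quote above, here is a minimal Python sketch. The rates, tier names, and function are hypothetical, invented for this example rather than taken from the article.

        # Hypothetical per-unit chargeback, as described above; all rates are invented.
        STORAGE_TIERS = {
            "standard": 0.10,    # $/GB-month, best-effort availability
            "replicated": 0.25,  # $/GB-month, higher availability/continuity guarantee
        }

        def monthly_chargeback(compute_hours, bandwidth_mbit, storage_gb_by_tier,
                               rate_per_compute_hour=0.08, rate_per_mbit=0.02):
            """Return an itemized monthly bill for one business unit."""
            bill = {
                "compute": compute_hours * rate_per_compute_hour,
                "bandwidth": bandwidth_mbit * rate_per_mbit,
            }
            for tier, gigabytes in storage_gb_by_tier.items():
                bill["storage (%s)" % tier] = gigabytes * STORAGE_TIERS[tier]
            bill["total"] = sum(bill.values())
            return bill

        if __name__ == "__main__":
            print(monthly_chargeback(
                compute_hours=1200,
                bandwidth_mbit=5000,
                storage_gb_by_tier={"standard": 800, "replicated": 200},
            ))

    The point is simply that once infrastructure is operated as a shared pool, billing can be as fine-grained as the metering allows, instead of charging business units for whole servers.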
    We are seeing the beginning of an inflection point in the way IT is managed, brought on by (1) the interest (though not yet adoption) of cloud architectures, (2) the increasing willingness to accept shared IT assets (thanks to VMware and others), and (3) the budding availability of software that allows "cloud-like" operation of existing infrastructure, but in a whole new way.

    How might these "internal clouds" first be used?

    Let's be real: there are precious few green-field opportunities where enterprises will simply decide to change their entire IT architecture and operations into this "internal cloud" -- i.e. implement a Utility Computing model out-of-the-gate. But there are some interesting starting points that are beginning to emerge:
    • Creating a single-service utility: by this I mean that an entire service tier (such as a web farm, application server farm, etc.) moves to being managed in a "cloud" infrastructure, where resources ebb and flow as needed by user demand (a rough sketch of such a policy follows this list).
    • Power-managing servers: using utility computing IT management automation to control power states of machines that are temporarily idle, but NOT actually dynamically provisioning software onto servers. Firms are getting used to the idea of using policy-governed control to save on IT power consumption as they get comfortable with utility-computing principles. They can then selectively activate the dynamic provisioning features as they see fit.
    • Using utility computing management/automation to govern virtualized environments: it's clear that once firms virtualize/consolidate, they later realize that there are more objects to manage (virtual sprawl), rather than fewer; plus, they've created "virtual silos", distinct from the non-virtualized infrastructure they own. Firms will migrate toward an automated management approach to virtualization where -- on the fly -- applications are virtualized, hosts are created, apps are deployed/scaled, failed hosts are automatically re-created, and so on. Essentially a services cloud.
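    As a rough sketch of what policy-governed operation of an existing pool might look like (covering the first two starting points above), consider the following Python example. Everything here (the Server class, the capacity figure, the apply_policy function) is hypothetical and invented for illustration, not taken from any particular product.

        from dataclasses import dataclass

        @dataclass
        class Server:
            name: str
            active: bool = False          # powered on and serving the tier

        REQUESTS_PER_SERVER = 500         # assumed capacity of a single host

        def apply_policy(pool, current_demand):
            """Decide how many hosts the service tier needs and adjust the pool."""
            needed = max(1, -(-current_demand // REQUESTS_PER_SERVER))  # ceiling division
            active = [s for s in pool if s.active]
            idle = [s for s in pool if not s.active]

            # Scale up: provision the service onto idle hosts (single-service utility).
            for server in idle[: max(0, needed - len(active))]:
                server.active = True
                print("provisioning service onto", server.name)

            # Scale down: power-manage hosts that current demand no longer justifies.
            for server in active[needed:]:
                server.active = False
                print("powering down", server.name)

        if __name__ == "__main__":
            pool = [Server("host-%d" % i) for i in range(6)]
            apply_policy(pool, current_demand=1800)  # demand rises: bring hosts online
            apply_policy(pool, current_demand=400)   # demand falls: power hosts back down

    A real utility-computing product would add monitoring, image/application provisioning, and failure handling, but the control loop (measure demand, compare it against policy, grow or shrink the active pool) is the essence of all three starting points.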
    It is inevitable that the simplicity, economics, and scalability of externally-provided "clouds" will make their way into the enterprise. The question isn't if, but when.