Tuesday, March 31, 2009

OnLive: The End Of Games Platforms?

A cloud computing platform dedicated to online gaming, called OnLive, has been announced.
In this architecture, 3D games that previously required a graphics card capable of high-speed rendering instead run on OnLive's servers: all image processing is done server-side, so games can be enjoyed even on an inexpensive client PC.

OnLive: The End Of Games Platforms?

Written by Jim Rossignol on March 24, 2009 at 12:19 pm.


At the last GDC the industry big brains were sat around telling us how games would one day be remotely rendered on big computing clusters and then streamed to our TVs. The big unveil at this year's GDC has proved them correct. Maybe. OnLive is a service on which you use superfast broadband (1.5Mbps minimum) to play games on a remote server. You just plug it in to any "entry level" PC or Mac, or hook it up to your TV, and play. It doesn't matter if you don't have the latest 3D card, because the remote server does the rendering and streams the result to you. That's the theory anyway, and it's a theory a bunch of big-name publishers have signed up to. Watch the OnLive spokesman Steve Perlman make his big claims after the jump.


 

__________________

Microsoft: No on-premise Azure hosting for business users

Microsoft has made clear that hosting of its Azure platform will take place only in Microsoft's own data centers; Azure will not be offered as a private cloud running in enterprise data centers.

Enterprises can achieve equivalent functionality by licensing Hyper-V and Windows Server Datacenter Edition, according to the company.

Microsoft: No on-premise Azure hosting for business users

Will Microsoft allow enterprises to do on-premise hosting based on its cloud computing Azure platform? The latest — and perhaps final — answer is no.

Since Microsoft first rolled out its Azure cloud platform last fall, I've seen conflicting reports about whether or not the Redmondians will provide business users with some way to do private/on-premise cloud computing via Azure (i.e., host the Azure operating system and/or Azure services themselves in their own datacenters, instead of in Microsoft's Quincy, Wash., and/or San Antonio, Texas, ones).

But based on a related e-mail exchange I had recently with Julius Sinkevicius, Director of Product Management for Windows Server, I believe Microsoft has no intention of allowing users to create private, Azure-based clouds. I was asking Sinkevicius for some clarification around Microsoft's recent announcement with Cisco, via which Cisco will offer Windows Server and Hyper-V to Cisco customers who purchase its recently unveiled Unified Computing System blade servers.

Here are a couple of the relevant Q's and A's between Sinkevicius and me.

MJF: Did Cisco ask Microsoft about licensing Azure? Will Microsoft license all of the components of Azure to any other company?

Sinkevicius: No, Microsoft is not offering Windows Azure for on premise deployment. Windows Azure runs only in Microsoft datacenters. Enterprise customers who wish to deploy a highly scalable and flexible OS in their datacenter should leverage Hyper-V and license Windows Server Datacenter Edition, which has unlimited virtualization rights, and System Center for management.

MJF: What does Microsoft see as the difference between Red Dog (Windows Azure) and the OS stack that Cisco announced?

Sinkevicius: Windows Azure is Microsoft's runtime designed specifically for the Microsoft datacenter. Windows Azure is designed for new applications and allows ISVs and Enterprises to get geo-scale without geo-cost.  The OS stack that Cisco announced is for customers who wish to deploy on-premise servers, and thus leverages Windows Server Datacenter and System Center.

The source of the on-premise Azure hosting confusion appears to be this: All apps developed for Azure will be able to run on Windows Server, according to the Softies. However, the inverse is not yet true: existing Windows Server apps may ultimately be able to run on Azure, but for now only some can do so, and only with a fairly substantial amount of tweaking.

Microsoft's cloud pitch to enterprises who are skittish about putting their data in the Microsoft basket isn't "We'll let you host your own data using our cloud platform." Instead, it's more like: "You can take some/all of your data out of our datacenters and run it on-premise if/when you want — and you can do the reverse and put some/all of your data in our cloud if you so desire."

Will Microsoft's data-portability promise be enough to get nervous enterprise users to give Microsoft's Azure platform a chance?

Wednesday, March 18, 2009

What Does PCI Compliance in the Cloud Really Mean?

Mosso/Rackspace explains what requirements a system should satisfy for PCI compliance (the credit card industry's standard) in a cloud computing environment. The diagram shows a system built so that the cardholder data defined by PCI is never stored in the cloud.

What Does PCI Compliance in the Cloud Really Mean?

Mosso/Rackspace recently announced they have "PCI enabled" a Cloud Sites customer that needed to accept online credit card payments in return for goods (i.e. a merchant).

However, the website hosted on Mosso's Cloud doesn't actually receive, store, process, or transmit any data that falls under the requirements of PCI.

Or to put it another way, it's 'compliance' through not actually needing to be compliant…

This didn't deter them from putting a "PCI How To" document together which starts as follows (emphasis mine):

Building a PCI Compliant e-Commerce Solution Using Cloud Sites

Cloud Sites is designed to provide an elastic web hosting environment.  This capability can allow an e-commerce merchant to properly handle the high volume shopping season without carrying extra infrastructure throughout the remainder of the year.  Cloud Sites is not currently designed for the storage or archival of credit card information.  In order to build a PCI compliant e-commerce solution, Cloud Sites needs to be paired up with a payment gateway partner.

They then include the following helpful graphic, which I modified to emphasise where the PCI data is NOT received, stored, processed or transmitted.  Everything to the left of the red line is the Mosso Cloud and everything to the right is the Payment Gateway provider.  The middle bit marked 'API' is that of the Payment Gateway as called by the merchant.

No PCI data at Mosso

As they go on to state:

The communication from the Card Processing System to the Web Front End can never contain cardholder data.  Cardholder data includes: primary account number, expiration date, name as it appears on the card, CVV, CVV2 and magnetic stripe.

Yes Cloud Ladies and Gentlemen, this is an implementation of an age-old Internet architecture that involves redirecting customers wishing to pay for the contents of their online basket to an approved and compliant online payment gateway.
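For anyone who hasn't built one of these, here is a minimal sketch of that redirect pattern in Python (Flask). The gateway URL, merchant ID and parameter names are hypothetical stand-ins; a real integration follows whatever your gateway's hosted-payment-page API actually specifies.

    from urllib.parse import urlencode

    from flask import Flask, redirect, request

    app = Flask(__name__)

    # Hypothetical hosted-payment-page endpoint; a real integration uses
    # whatever URL and field names your gateway's API specifies.
    GATEWAY_URL = "https://pay.example-gateway.com/checkout"

    @app.route("/checkout", methods=["POST"])
    def checkout():
        # The merchant site only ever sees the order ID and amount.
        # Card number, expiry and CVV are entered on the gateway's own
        # pages, so cardholder data never touches this server - which is
        # what keeps the web front end out of PCI scope.
        params = urlencode({
            "merchant_id": "MERCHANT-123",  # hypothetical
            "order_id": request.form["order_id"],
            "amount": request.form["amount"],
            "return_url": "https://shop.example.com/payment-complete",
        })
        return redirect(GATEWAY_URL + "?" + params)

    if __name__ == "__main__":
        app.run()

The key property is visible in the handler: the merchant server composes a redirect from an order ID and an amount only, so cardholder data is entered on the gateway's pages and never touches the merchant's infrastructure.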

This approach follows the advice that RackSpace gives with regard to their dedicated hosting business (non-Cloud):

If you deal with credit cards and are required to meet the PCI DSS, my advice is to find a way to limit the scope of your compliance as much as possible. Rackspace recently concluded a two-year effort to receive our PCI Service Provider Report on Compliance (ROC) as a Compliant Level 1 Service Provider from Visa USA.

Just to be really clear, the PCI certification referred to above is of their dedicated hosting business - not their Cloud (aka Mosso business).  Different technologies and different architectures.

So, is there any PCI angle to this in reality?

The document talks to the PCI requirement as follows (emphasis mine):

By designing your e-commerce site in this manner, PCI compliance is reduced to a Type A SAQ (Self Assessment Questionnaire) for merchants processing less than 6,000,000 annual transactions.  The current version of the Type A SAQ can be obtained at: https://www.pcisecuritystandards.org/saq/instructions_dss.shtml. To achieve compliance when all cardholder information is handled by a partner, you only need to address two of the twelve sections of the complete PCI-DSS (Payment Card Industry – Data Security Standard) and only a subset of the controls in each of those sections.  The two sections are (9) Restrict physical access to cardholder data and (12) Maintain a policy that addresses information security.

The section 9 requirements are designed to protect any cardholder information stored at your office locations. If possible, configure the relationship with your payment partner so that it is impossible for you or your employees to obtain complete cardholder information.  When logging into the partner portal you should see at most the last 4 digits of a card number.

The section 12 requirements are designed to ensure you're working with PCI compliant partners to handle the cardholder information for you and that you have a process in place to ensure those partners remain compliant.  VISA publishes a list of compliant service providers on a monthly basis at: http://usa.visa.com/merchants/risk_management/cisp_service_providers.html

If you've followed along this far, you'll realise that Mosso Cloud Sites is still 'out of scope' from PCI requirements as they pertain to the payment process itself, as that is handed off to a 3rd party gateway (the 3rd party must be PCI compliant though).  Section 9 is relevant to the office of the merchant - not the web front end hosting provider (Cloud or not) and section 12 is about your choice of payment gateway, again, nothing to do with Mosso.

Mosso is only relevant when it comes to the PCI requirement that the merchant perimeter is subject to vulnerability scans. In other words, because the merchant has outsourced hosting of an Internet accessible web front-end to Mosso, the merchant website must pass an initial, then four quarterly vulnerability scans to meet the PCI scanning requirement.  But Mosso isn't responsible for running those scans.  Their contribution was to 'partner' with two Approved Scanning Vendors who do the work.

And that brings up two PCI scanning related issues regardless of whether you host on the Cloud or at a traditional hosting provider:

  • vulnerability scans must take place after major network changes
  • some vulnerability checks rely on banner grabbing to determine software version numbers, and some providers (like Mosso) backport security fixes, resulting in failed checks because version numbers are not incremented (the sketch below shows the mechanism).  This is an age-old problem and a limitation of the scanning technology, not the provider.  The Approved Scanning Vendor will need to liaise with the provider/merchant to create manual exceptions.
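To make the banner-grabbing limitation concrete, here is a minimal Python sketch; the host and port are hypothetical, and a real Approved Scanning Vendor's checks are of course far more involved.

    import socket

    def grab_banner(host, port, timeout=5.0):
        """Connect and read whatever the service announces first."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            return sock.recv(1024).decode(errors="replace").strip()

    if __name__ == "__main__":
        # Hypothetical FTP host; services like FTP and SMTP announce a
        # version string as soon as you connect.
        banner = grab_banner("ftp.example.com", 21)
        # A scanner that compares "vsftpd 2.0.5" against a CVE's "fixed
        # in 2.0.6" flags the host even when the provider backported the
        # fix without bumping the version - hence the manual exceptions.
        print(banner)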

So what role does Mosso really play when it comes to PCI compliance today?  They permit the Approved Scanning Vendor to perform scans and confirm software fixes are in place when vulnerability checks generate false positives.

The Takeaway

The fact that Mosso is seeking ways to help their customers off-load as much of the PCI compliance burden as possible to other 3rd parties is fine - it makes business sense for them and their merchant customers.  The problem is their positioning of the effort as a "landmark breakthrough", as if they were somehow pioneers, which leads to generalisations rooted in misunderstandings.

Next time you hear someone say 'Cloud Provider X is PCI compliant', ask the golden PCI question: is their Cloud receiving, processing, storing or transmitting Credit Card data (as defined by the PCI DSS)?  If they say 'No', you'll know what that really means…marketecture.


Cloud Security / Sat, 14 Mar 2009 21:19:40 GMT


Friday, March 13, 2009

Cloud Relationship Model

A diagram that neatly organizes how SaaS, PaaS, and IaaS are positioned within the cloud computing landscape.  Of the various attempts at organizing this space so far, I think this one is the most refined and the easiest to understand.

This article was originally a guest post I did recently for Stewart Townsend over at Sun Startup Essentials describing the cloud relationship model I had developed as an artefact when discussing cloud computing.

I wanted a simple model which I could share with people and use as a discussion point, whilst still capturing the major areas of cloud computing which I considered most pertinent.  I developed this model about six months ago and have since found it useful when talking with people about cloud computing.

Here's the model, and I'll go through its major elements below.


Major Cloud Communities

In the cloud there are three major participants:

  1. the Cloud Providers; building out Clouds, for instance Google, Amazon, etc. Effectively technology providers.
  2. the Cloud Adopters / Developers; those developing services over the Cloud and some becoming the first generation of Cloud ISVs.  I have included Cloud "Service" developers and Cloud ISV developers together. This group are effectively service enablers.
  3. Cloud "End" Users; those using Cloud provisioned services, often without knowing that they are cloud provisioned, the most obvious example of which are the multitude of Facebook users who have no idea there favorite FB app. is running on AWS. These are the service consumers.

I think it's important to talk about these communities because I keep hearing lots about the Cloud Providers, and even more about the issues and 'needs' of the Cloud adopters / developers, but very little in terms of Cloud "End" Users.  In a computing eco-system such as this, where "services" are supported by and traverse technology providers, service enablers and service consumers, an end-to-end understanding of how the cloud affects these reliant communities is required. Obvious issues come to mind, such as SLAs for end users and businesses which rely upon high availability and high uptime from their cloud providers; other "ilities" and systemic qualities, such as security, come to mind too, and that's before looking at any detailed breakdown of functional services.

The point here is that the cloud adopters / developers, and interestingly the cloud "watchers" (i.e. the press, media, bloggers and experts), would do well to remember the needs and requirements of genuine end users; for myself, it'd certainly be invigorating to hear more on this topic area.

Billing / Engagement Models

Simon Wardley, a much more eloquent public speaker than myself, does a wonderful pitch which includes a look at the different "as a Service" types, which he boils down to being a load of "*aaS" (very amusing, and informative; try and catch Simon presenting if you can).

I wholeheartedly agree that there is a large amount of befuddlement when it comes to the differing "*aaS" types and sub-types, and new ones are springing up relatively frequently; however, I also think it's important not to ignore the differences between them.

For me, and many others, the differing "*aaS" variants are best identified as billing and engagement models, a framing I think was first popularised by the "Partly Cloudy - Blue-Sky Thinking About Cloud Computing" white paper from the 451 Group.  That white paper also postulates the five major Cloud Computing provider models, into which the majority of minor "*aaS" variants fall.  They are:

  1. Managed Service Provision (MSP); not only are you hiring your service from the cloud, you've someone to run and maintain it too.
  2. Software as a Service (SaaS); pretty much ubiquitous as a term and usually typified by Salesforce.com, who are the SaaS poster child.
  3. Platform as a Service (PaaS); the application platform most commonly associated with Amazon Web Services.
  4. Infrastructure as a Service (IaaS);
  5. Hosting 2.0

One of the best breakdowns and visual analysis of this space is the model in Peter Laird's "Understanding the Cloud Computing/SaaS/PaaS markets: a Map of the Players in the Industry" article which is well worth a read.

Major Architectural Layers

Also included in the diagram are the major architectural layers that are included in each of the above billing / engagement models offered by the Cloud providers. They are:

  1. Operations; and this really is operations supporting functional business processes, rather than supporting the technology itself.
  2. Service layer; made up of application code, bespoke code, high-level ISV offerings.
  3. Platform layer; made up of standard platform software i.e. app. servers, DB servers, web servers, etc., and an example implementation would be a LAMP stack.
  4. Infrastructure layer; made up of (i) infrastructure software (i.e. virtualisation and OS software), (ii) the hardware platform and server infrastructure, and (iii) the storage platform.
  5. Network layer; made up of routers, firewalls, gateways, and other network technology.

This rather oversimplifies the architecture: it's important to note that each of the cloud billing / engagement models uses capabilities from each of the above architectural layers (for instance, there can be a lot of service simply in managing a network). However, these describe the major architectural components which support the service being procured, not simply ancillary functions; effectively, they are what the cloud provider's customers are principally paying for.

Delta of Effort / Delta of Opportunity

This is much more than the 'gap' between the cloud providers and the cloud users, wherein the cloud adopters / developers sit. The gap between the cloud providers and the end cloud users can be called the delta of effort, but also the delta of opportunity.

It is the delta of effort in terms of the skills, abilities, experience and technology that the cloud adopter needs to deliver a functional service to their own "End Users".  This will potentially be a major area of cost to the cloud adopters. But it's also the delta of opportunity, in terms of 'room' to innovate.

The more capability procured from the cloud provider (i.e. higher up the stack as a whole), the less you have to do (and procure) yourself.  However, the less procured from the cloud provider, the more opportunity you have to engineer a differentiating technology stack yourself.  This itself has its disadvantages, because the cloud adopters / developers could potentially not realise the true and best value of their cloud provider's infrastructure.

I suspect that there is an optimum level, around the Platform Layer, which abstracts enough complexity away (i.e. you don't have to procure servers, networks, implementation or technology operations staff), but also leaves enough room to innovate and produce software engineered value.  Arguably the only current successful cloud provider, based upon market share, perception, revenue and customer take up, is Amazon Web Services (AWS) who provide a PaaS offering.

Summary

Hope you enjoyed the article. In summary, if developing cloud services, or even building out a cloud infrastructure, I would recommend that you focus on your users, and if you're a cloud provider, your users' users; remembering that only a certain percentage of those users will be customers (I won't get into discussing Chris Anderson's 5% recommended conversion rate for the long tail, although I would recommend understanding what some of those calculations might be).

If you're looking to develop services over the cloud, think carefully about where your and your team's skills lie, and where you would most want them focusing their efforts (installing and tuning operating systems and application platforms, or writing business-value-focused applications and services), before choosing at which level to engage with your cloud provider(s).

I haven't mentioned enterprise adoption of cloud based services, and that's because I'd like to post that in the near future in a different article.

Hope you enjoyed the article and all the best,

Wayne Horkan

Wednesday, March 11, 2009

Cloud Computing's Three-Horse Race

Cloud computing classified into IaaS, PaaS, and SaaS, with the major representative vendors listed for each.

Cloud Computing has hit the main stage, solidly capturing the minds of both the technology and business communities. But while three distinct deployment models have emerged, it's far from certain which of them will go on to prosper. The three models are:

1. Renting raw hardware: compute processing, data storage and networking bandwidth.
2. Leveraging an integrated application development engine.
3. Ordering an application.

So in order to get a better sense of the prospects of each approach, let's take a closer look at key companies promoting them and the market forces shaping them.

Cloud Hardware Rental (Infrastructure as a Service)
Consuming compute and storage services is more involved than the term "rental" may indicate, but this first step transforms businesses, focusing them on their specific value, which isn't necessarily IT infrastructure.

The most well-known purveyor in this category is Amazon, but there are many other players as well, among them GoGrid, Layered Technologies, Joyent and Mosso/Rackspace.

The rental model provides one-to-one replacement of current enterprise infrastructure, except off-site and easily accessible. By choosing CPU type, memory amount and disk configurations of the lowest common denominator, customers of cloud hardware rental know exactly what they receive, can control the infrastructure at a fine-grain level and can easily compare pricing.

But it takes a certain skill set to load, operate and maintain applications in the cloud. Companies like RightScale and Elastra aim to streamline this process. And, unfortunately, Internet outages still occur, as evidenced by the Sprint-Cogent feud in late 2008, so everything may not always be at your fingertips.

Application Development Engines (Platform as a Service)
Sophisticated software offerings now easily tie together underlying hardware, touting "single-click" deployment and scaling of new application instances and machines. These engines also integrate common application components such as a data store, user authentication framework and cache. Some exist only in the cloud; others can be deployed privately.

In this category, Google offers AppEngine, Microsoft offers Azure, and Amazon has a suite of web services on top of basic server rental. Other software platforms that help speed up application development and deployment include 3Tera, Enomaly, and Eucalyptus, although all differ slightly in approach.

Compared with conventional application development, this strategy can slash development time, offer hundreds of readily available tools and services, and quickly scale. But developers still need to learn the platform, and such an approach may not support all programming languages, leading to concern over application portability.

Order an Application (Software as a Service)
Almost all companies use office supplies, but how many companies have their own paper and pulp processing facilities? The same question can be asked about installing, maintaining and operating enterprise applications. For most companies, ordering a cloud-based, software-as-a-service application lets them get started in minutes, and provides plenty of readily accessible customization.

Salesforce.com is the poster child of this category. But other sizable applications come by way of Workday and Aravo, both of which recently touted their largest customer wins, Flextronics International and General Electric, respectively.

But for companies of a certain size and scale, the licensing model might not fit perfectly, and the quest for specific valued features, reports or integration may only be achievable through in-house deployment. Security has also been a big question around cloud and software-as-a-service applications, but as Alistair Croll points out, those questions deserve reexamination.

Placing Bets
So where are we headed next? Undoubtedly a combination of all three deployment models. Some companies, like data warehousing specialist Vertica, are taking the debate off the table by offering customers software, appliances or cloud services on top of Amazon's infrastructure.

But I believe that for companies providing new cloud offerings, the software-as-a-service or "order an application" approach will prove the most rewarding.

VMware Initiatives Will Help Customers Embrace Cloud Computing - VMware

VMware's announcement of its VDC-OS and vCloud strategies.
A video recording of the VMworld presentation is included at the end.

Virtual Datacenter and VMware vCloud Initiatives Help Customers Build Private Clouds to Deliver IT and Applications – Both Existing and New Scale-Out Apps as a Service Spanning Internal and External Clouds

CANNES, France, February 24, 2009 — Today at VMworld Europe 2009, Paul Maritz, president and chief executive officer of VMware, Inc. (NYSE: VMW), outlined a comprehensive strategy and technology roadmap that will help enable companies to achieve the benefits of cloud computing internally, and bridge to external clouds through a private cloud.  This strategy is aimed at a more modern approach to delivering IT as a service, achieving the maximum efficiency and flexibility for businesses.  Building on announcements from VMworld Las Vegas 2008, today at the second-annual VMworld Europe 2009 in Cannes, Maritz discussed and demonstrated three key enabling components for building a private cloud: the complete virtualization of the datacenter through a Virtual Datacenter Operating System (VDC-OS), the extensions of the VDC-OS and the management layer to enable service providers to deliver external clouds and federate with internal clouds, and the evolving technologies for desktop virtualization to tie all elements of IT as a service together.

"VMware's focus is on enabling our customers to run their datacenters as internal clouds and operate in a far more flexible and cost-efficient way," said Paul Maritz, president and chief executive officer, VMware. "Our customers want the plumbing to disappear – in the datacenter, on the desktop and in the cloud – so they can focus their staff time and IT budget on delivering business value. They want cloud-like services so they can act as hosting providers to their internal customers. Our Virtual Datacenter Operating System Initiative will accelerate customers down the virtualization path so that they can run their IT as an internal cloud service.  The VMware vSphere generation of products, which are currently in development, will be a new class of software that delivers on this strategy. And, as customers become cloud-enabled, they will have the flexibility to securely and efficiently expand their internal clouds to tap the resources offered by external service providers through our VMware vCloud Initiative. I am excited to share the progress we've made on our initiatives at VMworld Europe."

The Private Cloud
A private cloud is a secure computing environment that allows computing capacity from both internal and external clouds to interoperate and be delivered much like a utility.  A private cloud brings unprecedented levels of flexibility, control, efficiency, resiliency, and manageability to datacenters and allows any application – legacy, server-based, desktop or those built on new application frameworks – to be delivered as a service. 

The private cloud brings the benefits of cloud computing under the control of corporate IT, such as:

  • Improved efficiencies through maximum resource utilization of all server, storage and network resources
  • Better resiliency through capacity or fail-over capabilities that are dynamic or on-demand
  • Improved accountability by leveraging a usage-based, pay-as-you-go service model
  • Better quality through standardized auditable and automatically ensured service levels
  • More flexibility through a future-proof platform that supports existing and future applications that require no re-writes or modification to run in the cloud

"VMware is helping make the promise of a self-service datacenter a reality through the VDC-OS and VMware vCloud Initiatives," said Mark Bowker, analyst, Enterprise Strategy Group. "VMware, as a leader in the industry, has the ability to provide the building blocks, create standards, and shape the future of cloud computing. VMware is in a unique position, with its depth and breadth of mature solutions, to help customers build their first iterations of central compute clusters and lead the federation between internal and external computing resources. VMware has demonstrated success with a rich ecosystem of partners that are now anxious to work with the company to build and deliver cloud computing solutions." 
 
Customers Moving To Private Clouds Today
VMware's enterprise customers are anxious to achieve the benefits of cloud computing, and are taking the first step by evolving their datacenters to internal clouds.  bmi, the second-largest airline at London Heathrow, operates services in the UK, Europe, the Middle East, Central Asia, and Africa.  The airline has aggressive targets to move approximately 90 percent of its infrastructure onto a VMware-based private cloud delivered by Attenda, one of EMEA's premier service providers.

"Moving to a cloud computing model will play a key role in our ongoing drive towards reducing the TCO of our IT infrastructure and increasing our operational agility," said Peter Federico, group IT director for bmi.  "Through cloud computing provided by Attenda, VMware's technology gives us the ability to scale our computing capacity on demand to meet spikes in activity during peak hours or when we're running promotions.  We now use the cloud to support our key websites and also a key ground operations system.  Clearly, cloud computing is core to our day-to-day operations."

VDC-OS:  The Foundation for the Cloud
The move to a private cloud is catalyzed by the increasing power and attractive economics of industry-standard x86 hardware, the maturing of virtualization technologies, increasing choice in new application architectures, and the availability of vast new clouds of cheap and readily accessible computing power. The first step is to evolve the datacenter from components of complex infrastructure to a more dynamic, manageable internal cloud. Internal clouds have, at their foundation, a new substrate layer that pools all internal compute capacity – servers, storage, and networking capacity into an internal cloud.  VMware announced its focus on creating this new layer as its VDC-OS Initiative and the company is expected to ship the first instantiation of it in 2009.

The VMware vCloud Initiative Enables Federation between External and Internal Clouds to Create a Highly Elastic Private Cloud
The VMware vCloud Initiative, first announced at VMworld Las Vegas 2008, enables federation between external and internal clouds to provide the elasticity for the private cloud.  VMware vCloud™ technologies equip service providers to become cloud computing providers and offer a range of IT services that companies can tap for increased flexibility, efficiency, resiliency, and manageability of their private cloud.  VMware is working in concert with major service providers to achieve this goal, including industry leaders, such as Savvis, SunGard, Melbourne IT and Terremark. These service providers are either offering or plan to offer vCloud services with the security, service levels, and application compatibility required for enterprises to confidently incorporate them into their private clouds. 

As a proof point of its progress, tomorrow at VMworld, VMware will demonstrate an integration of the VMware Infrastructure Client with external cloud resources at VMware vCloud™ service providers.  This new capability will enable the deployment and management of workloads with VMware vCloud service providers with just a few mouse clicks, in the same management interface as customers use to manage their internal clouds.

Support for New and Existing Applications in the Private Cloud
Private Clouds have the unique capability to run both existing applications and new scale-out applications without rewriting or rearchitecting the applications. The vCloud Initiative allows enterprises to seamlessly take the same existing applications that they are currently running in VMware environments and run them in internal or external clouds with the high availability, manageability, and security that customers have grown to rely on from VMware.  The VMware vCloud Initiative also enables new application frameworks to leverage internal and external clouds and inherit the same availability, manageability, and security benefits. 

VMware vCloud API to Enable Interoperability Across Clouds
A core enabler of the VMware vCloud Initiative's broad application and service provider interoperability is the VMware vCloud API, which allows programmatic access to private cloud resources and supports the delivery of services and applications that leverage and extend private clouds.  The VMware vCloud API is in private release and under co-development with partners.  At VMworld Europe 2009, software companies such as Engine Yard and IT Structures will demonstrate new services built on top of the VMware vCloud API which further enable scalable, elastic, portable infrastructure for Web 2.0 and enterprise application stacks.
 
"As a leading Ruby on Rails platform for the cloud, we're excited to be working with VMware to support its vCloud API," said Tom Mornini, CTO Engine Yard. "The vCloud API is exciting because it will allow our enterprise customers to choose between internal and external cloud resources more easily, and is backed by VMware, a trusted vendor."

VMware is committed to open interoperability between cloud services, and is working with many industry partners to advance standards for cloud computing.  As one of the original authors of the Open Virtualization Format (OVF) standard now released from the Distributed Management Task Force (DMTF), VMware will build upon that work by submitting a draft of its VMware vCloud API to enable consistent mobility, provisioning, management, and service assurance of applications running in internal and external clouds.

To listen to a replay of the VMworld Europe 2009 General Session keynotes, please visit:
http://www.vmworldeurope.com/agenda/keynotes/

Appistry Opens the Cloud to (Almost) All Apps

Appistry has announced a new product, CloudIQ Manager.  It applies the company's application-level virtualization technology and is billed as a tool that can move any application into a cloud computing environment.

Virtualization solutions targeting applications in this way have been appearing more and more often lately.
 


Enterprise adoption is the Holy Grail for cloud computing software vendors, and Appistry is prepping to play the role of Sir Galahad. The St. Louis-based company today released its new CloudIQ Manager product, which offers the ability to port nearly any enterprise application to the cloud, and makes it easy to move applications between in-house and public clouds.

As more companies move toward operating in the cloud, Appistry sees the lack of support for existing applications as a huge gap in the market, particularly for enterprise customers that have hundreds of Java, .NET, C/C++ or legacy applications they need to move to the cloud. "They haven't found an application management approach that allows them to easily get there," says Sam Charrington, Appistry's VP of product management and marketing. Most current cloud offerings are designed for Web applications or single-application deployments; that means they're optimal for web startups and SaaS providers, but not as useful for larger, enterprise customers.

CloudIQ Manager aims to solve this enterprise-readiness problem by letting customers create service definition templates that tell the software how to manage each existing application's lifecycle (i.e., how it's installed, started, stopped, etc.) — all from a single interface. Templates and configurations for deploying common application types (Apache or Tomcat, for example) will be available in a user-driven library and will save new users the legwork required to establish best practices.
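The article doesn't show Appistry's actual template format, so the following Python sketch is purely illustrative of what a lifecycle definition along these lines might look like; every field name and command here is invented.

    import subprocess

    # Invented field names - the article doesn't show Appistry's format.
    TOMCAT_TEMPLATE = {
        "name": "tomcat",
        "install": ["apt-get", "install", "-y", "tomcat6"],
        "start": ["service", "tomcat6", "start"],
        "stop": ["service", "tomcat6", "stop"],
        "health": ["curl", "-sf", "http://localhost:8080/"],
    }

    def run_lifecycle_step(template, step):
        """Execute one lifecycle action (install, start, stop, health)."""
        subprocess.run(template[step], check=True)

    # A manager process deploying the app to a fresh node might call:
    #   run_lifecycle_step(TOMCAT_TEMPLATE, "install")
    #   run_lifecycle_step(TOMCAT_TEMPLATE, "start")

The appeal of the approach is that the manager, not the application, owns this recipe, so the same template can be replayed on an in-house node or a public-cloud instance.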

Compared with VMware's widely publicized hybrid cloud tools, the difference is in the focus: VMware focuses on managing and deploying virtual machines, whereas Appistry focuses on the application. While the entire application stack is locked up in the virtual machine in VMware's model, Charrington says CloudIQ Manager users can update VMs with impunity because the resources are completely abstracted from the application. Elastra recently announced its hybrid cloud plans, too.

CloudIQ Manager offers limited intercloud portability as well, via drag-and-drop functionality within the console. Users will be able to move their apps between and among both in-house clouds and Appistry's public cloud partners, which currently include Amazon, GoGrid and Skytap.

However, there are still many interoperability issues between the public cloud offerings, and this product only addresses "practical portability" of applications. Charrington notes that the cloud-to-cloud migration feature is designed initially to ensure applications get migrated and deployed successfully in test-dev environments; over time, he says, it will evolve to address production-level concerns like intercloud load-balancing.

Appistry recently announced 200 percent year-over-year growth, which Charrington attributes in large part to its existing products' cloud computing capabilities. Given the positive reports I've heard from other cloud vendors thus far in 2009, I won't be too surprised if Appistry's CloudIQ Platform suite — the new moniker for version 4.0 of its flagship software — has an almost immediate impact on the company's bottom line.

Tuesday, March 10, 2009

AppZero's Virtual Application Appliances cloud computing simplified

A new vendor in application virtualization technology.

The architecture places a virtual OS (zeOS) underneath the server application; packaged together with the application, it lets the application move between operating systems without modification.  Windows, Linux, and Solaris are currently supported.  AppZero also emphasizes that applications can move freely between different cloud computing environments.

A demo was given at DEMO'09 and appears to have been well received.

Beyond application portability, what makes this groundbreaking is that various infrastructure applications and middleware not currently supported in cloud computing environments (i.e.: DB2, Appsphere, WebLogic, Oracle Apps, etc.) can easily be moved to Amazon Web Services or GoGrid.


AppZero, formerly Trigence, demonstrated at Demo 09 a different approach to allowing organizations to take advantage of the computing resources made available by hosting suppliers. Rather than taking the approach that is typically proposed by the suppliers of virtual machine technology, AppZero's approach is to encapsulate applications rather than complete desktop or server environments. Application virtualization software suppliers, such as AppZero, suggest that this approach means the images projected into the Cloud are smaller, easier to manage and compatible with a larger number of suppliers of cloud computing infrastructure.

I've written about this technology before and believe it could solve some of the problems organizations have with Cloud Computing. (See Trigence Virtual Application Appliances - point, click, you're done)

Here's what AppZero has to say about their technology:

AppZero (formerly Trigence) is launching today at the DEMO 09 Conference a set of tools for creating Virtual Application Appliances (VAAs).  This new approach to provisioning and deploying applications on physical or virtual servers running anywhere is designed for the cloud environment and for movement of server applications - datacenter to cloud, hosting environment, or cloud to cloud. VAAs package a server application with all of its dependencies, but no operating system component (zero OS).  AppZero's first public demonstration of its VAA technology will show a live production application provisioned in seconds on an Amazon EC2 cloud, and moved in less than one minute to a GoGrid cloud computing environment.

Designed for instant server-based application provisioning and deployment, VAAs enable an application to run wherever the business requires without the burdensome licensing issues that inclusion of an operating system (OS) introduces - VAAs contain zero OS. AppZero VAAs work with mission-critical applications across all tiers:  web servers, application servers and database servers. Enterprise middleware from Microsoft, Oracle, IBM and Open Source servers like MySQL as well as in-house developed applications can all be easily transformed into VAAs without changing a single line of code.

Cloud providers, integrators, ISVs and IT professionals find AppZero's wizard-based tools simple to use for creating VAAs and provisioning them on servers at the click of a mouse.  This instant provisioning allows scalable resources to be used on a pay-per-use basis, without cloud lock-in.

AppZero software creates, maintains, and administers VAAs.  The key enabler of AppZero's VAA toolset is isolation and encapsulation technology created at Trigence, an early developer of multi-platform datacenter virtualization technology.  Under its new name, AppZero, the company is focused on extending the proven concept of virtual appliances to server applications.

Snapshot analysis

I've often pointed out that virtual machine technology is a powerful tool to use to create a consolidated environment. I've also pointed out that many times it is not the proper tool to use when raw performance, scalability, reliability or availability is the goal (see When is virtual machine technology the wrong choice?). Even when consolidation is the goal, operating system virtualization/partitioning might be a better choice if all of the workloads are hosted by the same operating environment.

Application virtualization technology, such as that being offered by AppZero and its competitors,  may be a better choice when the goal is projecting an application that is hosted internally into the clouds for scalability or performance reasons.

AppZero has some clever ways to make Cloud Computing a viable choice without also requiring that organizations become experts in Cloud Computing. That being said, the company faces competition that has different, but equally powerful approaches to application virtualization. AppZero has to make decision-makers aware of their technology and why it is a better approach than that being offered by these and other competitors.

If your organization is considering deploying Cloud Computing, it would be worth the time to visit AppZero's website and view the videos about the use of their technology.

How will the recession affect Silicon Valley? | Tom Foremski: IMHO | ZDNet.com

Part of an interview with Cassatt's CEO.

He stresses that because green IT is not a disruptive technology, government promotion of it is essential; cloud computing, on the other hand, he declares to be a genuinely new disruptive technology, one capable of creating a new wave in IT.

He also states that a disruptive technology is premised on delivering ten times the value of the existing technology it displaces.


I recently spoke with Silicon Valley veteran Bill Coleman about Silicon Valley and its future. Mr Coleman worked at VisiCorp on the first spreadsheet, he headed development of Sun's Solaris, he co-founded BEA Systems which sold to Oracle for $3.5bn, and he now runs Cassatt, a cloud computing tech startup.

I'm sitting with Bill Coleman, and one of his colleagues, in a small conference room in downtown SF. We're talking and also trying to eat our sandwiches.

He is telling a story about being at JFK airport late last year, and getting a call from a reporter at the New York Times, asking if the recession would be better or worse for Silicon Valley compared with 2001.

"I said it will be worse than the previous downturn. The last time about 800 companies went out of business that had no business model. Secondly, the large companies had sold too much capacity and it took a year or two for things to catch up. This time the recession is not about Silicon Valley it is about something else."

He said it would take a long time for capital spending to come back and that this would hurt Silicon Valley.

"I said you'll see accelerating layoffs in the first quarter and that's exactly what we've seen."

What will this mean for Silicon Valley innovation?

Mr Coleman says he's an optimist but he does have some concerns.

"I have a basic theory that Silicon Valley reinvents itself by inventing a new platform layer every 10 years." He says that Silicon Valley was lucky to develop Information Technology (IT), a technology that is becoming cloud computing, a new platform. Information Technology is also vital in driving the development of two additional disruptive technologies: nanotech and biotech. And fortunately, Silicon Valley leads in all three industries.

Silicon Valley also leads in green technology, a large and growing market. But green technology is different — it isn't a disruptive technology. He says that a disruptive technology has to have a characteristic of the Peter Drucker rule in that it provides ten times the value of what it's displacing.

"In the green market, what we're displacing is cheaper per unit than what is displacing it. It won't be driven by a tsunami of adoption." Mr Coleman says the government should come up with incentives for companies to adopt green technologies otherwise progress will be slow.

"People already have the capacity for doing what they do today. So this means that they can put off using green technologies for a long time. There are lots of benefits for humanity, but the economics of the green market won't drive a rapid adoption unless there are incentives."

He points out that a government program focused on incentives to adopt green technologies would provide a more effective stimulus than the current stimulus package, with its focus on building physical infrastructure. Building a bridge, or a road, is a one-time event, and it won't provide long term stimulus. "It slows down capital formation," he says, potentially slowing an economic recovery. If the government helped to expand green technology it would create higher quality jobs and provide other long term economic benefits.

Cloud computing doesn't need government incentives because it is a disruptive technology, says Mr Coleman, especially the next stage, beyond what he terms "cloud 1.0." As the cloud computing platform becomes more sophisticated, he predicts that there will be an acceleration in the use of the cloud driven by a "quadruple conversion." Video, audio, and IT data all become IP based, and productivity applications become integrated with social networks.

"As we move forward from Web 2.0 to Web 3.0, all your productivity tools become integrated with your social networking, which becomes your business networking. Your mobile life and your online life will become the same. So now the client moves into the cloud and that's when we'll see a dramatic change in the cost structure of computing and of the capabilities you can have."

He says that a platform will be successful if it has three characteristics. It has to be able to commoditize a market. Secondly, it has to obey the 10x better/cheaper rule.

"When I was at VisiCorp, heading software development, each of our developers had a PC. This was faster and dramatically cheaper than using DEC VAXs [minicomputers]. Thirdly, a platform must allow you to add value with custom additions. The reason Netscape wasn't a platform was that no one could program to it, nobody could add value. (By the way, that's also true for virtualization…) Unless you have all three characteristics, you won't have a disruptive chain that can accelerate a startup from zero to sixty, and turn it into a major player."

I mentioned that a characteristic of a disruptive platform is that it disrupts. If you are in the path of a disruptive technology you can often see what's up ahead; you can see the train wreck coming, but you can't get off the rails in time, you can't downsize or restructure in time, and you hit it.

"Yes. I call it the DEC spiral. DEC tried to deny the importance of the PC and then when they realized what was happening they couldn't layoff people fast enough to match falling prices. There was little left and they had to sell [in 1998 to Compaq, the top PC company at the time.]" said Mr Coleman.

How Amazon builds the world's most scalable storage

An overview of S3, the storage service Amazon provides, and how it copes with the many kinds of failure it faces.

The cloud storage market is accelerating fast - despite naysayers and alarmists - and Amazon's S3 is leading the charge. Storing over 40 billion files for 400,000 customers, Amazon is the one to beat. How do they do it for pennies per GB a month? Read on.

I attended FAST '09, the best storage conference around, where Alyssa Henry, S3's GM, gave a keynote. Amazon doesn't talk much about how their technology works, so even the little Alyssa added was welcome.

Aggressive goals
Amazon is a multi-billion dollar business running one of the world's largest websites, and its engineers understand the problem. Their goals reflect both technical and market requirements:

  • Durable
  • 99.99% availability
  • Support an unlimited number of web scale apps
  • Use scale as an advantage - linear scalability
  • Vendors won't engineer for the 1% - only the 80% - DIY
  • Secure
  • Fast
  • Straightforward APIs
  • Few concepts to learn
  • AWS handles partitioning - not customers
  • Cost-effective

One key: Amazon writes the software and builds massive scale on commodity boxes. Reliability at low cost achieved through engineering, experience and scale.

With many components come many failures
10,000+ node clusters mean failures happen frequently - even unlikely events happen (some back-of-envelope numbers follow the list below).

  • Disk drives fail
  • Power and cooling failure
  • Corrupted packets
  • Techs pull live fiber
  • Bits rot
  • Natural disasters
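A little arithmetic makes the point; the rates below are illustrative industry-style assumptions, not Amazon's figures.

    # Illustrative assumptions, not Amazon's figures.
    nodes = 10_000
    disks_per_node = 4
    annual_disk_failure_rate = 0.03  # ~3% AFR, a common industry estimate

    failures_per_year = nodes * disks_per_node * annual_disk_failure_rate
    print(failures_per_year / 365)   # ~3.3 dead disks every single day

    # Even a one-in-a-million-per-day event per node happens somewhere
    # in the cluster roughly every hundred days:
    p = 1e-6
    print(1 - (1 - p) ** nodes)      # ~0.01 probability per day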

Amazon deals with failure with a few basic techniques:

-Redundancy
Increases durability, availability, cost, and complexity. Example: plan for the catastrophic loss of an entire data center; store data in multiple data centers.

Expensive, but once it's paid for, costly small-scale features like RAID aren't needed.

-Retry
Just like disk drives, it's quicker for Amazon to retry than it is for customers. Leverage redundancy - retry from different copies.

-Idempotency
This is cool. An idempotent action's result doesn't change even if the action is repeated - so there's no harm in doing it twice if the response is too slow.

For example, reading a customer record can be repeated without changing the result. And so that the retries themselves don't pile up and overload the system, there's surge protection (below).
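A minimal sketch of what idempotency buys, combined with the retry-from-a-different-copy idea above; read_record here is a hypothetical stand-in for a network read.

    import random

    REPLICAS = ["dc-a", "dc-b", "dc-c"]  # hypothetical data centers

    def read_record(replica, key):
        """Stand-in for a network read; fails randomly for the demo."""
        if random.random() < 0.3:
            raise TimeoutError(replica + " too slow")
        return key + "@" + replica

    def get(key):
        # Reads are idempotent, so repeating one against a *different*
        # copy is always safe: redundancy is what makes retry cheap.
        last_error = None
        for replica in random.sample(REPLICAS, len(REPLICAS)):
            try:
                return read_record(replica, key)
            except TimeoutError as err:
                last_error = err
        raise last_error  # all copies slow - back off and retry later

    print(get("customer:42"))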

-Surge protection
Rate limiting is a bad idea - build the infrastructure to handle uncertainty. Don't burden already stressed components with retries. Don't let a few customers bring down the system.

Surge management techniques include exponential back off (like CSMA/CD) and caching TTL (time to live) extensions.
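A sketch of the textbook form of exponential back-off with jitter; the parameter values are arbitrary.

    import random
    import time

    def backoff_delays(base=0.1, cap=30.0, attempts=6):
        """Yield randomized, exponentially growing sleep intervals."""
        for attempt in range(attempts):
            # "Full jitter": pick uniformly below the exponential
            # ceiling so thousands of clients don't retry in lock-step.
            yield random.uniform(0, min(cap, base * 2 ** attempt))

    for delay in backoff_delays():
        print("sleeping %.3fs before next retry" % delay)
        time.sleep(delay)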

-Eventual consistency
Amazon sacrifices some consistency for availability. And sacrifices some availability for durability.

-Routine failure
Everything fails - so every failure handling code path must work. Avoid unused/rarely used code paths since they are likely to be buggy.

Amazon routinely fails disks, servers, data centers. For data center maintenance they just turn the data center off to exercise the recovery system.

-Diversity
Monocultures are risky. For software there is version diversity: they engineer systems so different versions are compatible.

Likewise with hardware. One lot of drives from a vendor all failed. A shipment of faulty power cords. Correlated failures happen.

Diversity of workloads: interleave customer workloads for load balancing.

-Integrity checking
Identify corruption inbound, outbound, at rest. Store checksums and compare at read - plus scan all the data at rest as a background task.
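A toy sketch of the checksum-on-write, verify-on-read pattern with a background scrub; S3's actual implementation is of course not public.

    import hashlib

    store = {}  # key -> (data, checksum); toy stand-in for disks

    def put(key, data):
        store[key] = (data, hashlib.sha256(data).hexdigest())

    def get(key):
        data, expected = store[key]
        if hashlib.sha256(data).hexdigest() != expected:
            # A real system would repair from a redundant copy here.
            raise IOError("corruption detected for " + key)
        return data

    def scrub():
        """Background scan of all data at rest to catch bit rot."""
        for key in list(store):
            get(key)

    put("photo.jpg", b"\xff\xd8\xff...")
    scrub()
    print(get("photo.jpg"))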

-Strong instrumentation
Internal, external. Real time, historical. Per host, aggregate. When things go wrong, you need history to see why.

-Get people out of the loop
Human processes fail. Humans are slow. If a human screws up an Amazon system, it's not the human's fault; it's the system's.

Final thought
Storage is a lasting relationship that requires trust.

The Storage Bits take
Amazon is the world leader in scale out system engineering. Google may have led the way, but the necessity to count money and ship products set a higher bar for Amazon.

Amazon Web Services will dwarf their products business within a decade. I'd like to see them open the kimono more in the future.

Comments welcome, of course. There's a longer version of this on StorageMojo. And there's the Amazon CompSci paper Dynamo: Amazon's Highly Available Key-value Store. Not S3 specific, but close.

Cloud Computing Jobs: A Leading Indicator

The number of cloud-computing-related jobs is growing rapidly,
as if entirely unconnected to today's sharply deteriorating economy.
This should be read as one sign that cloud computing is not merely a trend but is becoming an established business.


Cloud Computing Jobs: A Leading Indicator

Just did a trend search on job site Indeed.com for "cloud computing". Whoa.

[Graph: trend of "cloud computing" job postings on Indeed.com]

Job postings are often a leading indicator for expected business activity, and this graph speaks for itself. Cloud computing is clearly more than hype when so many companies are hiring for cloud-related positions.  It's also interesting to note some of the companies that show up when you run the search. You get a little bit of insight into the plans of companies such as Dell, Yahoo, Intuit and VMWare.

You can also subscribe to the cloud computing job search RSS feed.

Geva Perry's Blog / Wed, 04 Mar 2009 08:08:42 GMT


Additional EC2 Support for Windows - Second Zone in the US and Two Zones in Europe

AWS announced that EC2 now supports SQL Server on Windows, and across multiple Availability Zones.
AWS is also ramping up its European operations: in addition to AMIs with European language support, various enhancements are being rolled out to coincide with CeBIT.

On top of the existing .NET support, it is striking that SQL Server support has arrived ahead of Microsoft's own Azure.


Additional EC2 Support for Windows - Second Zone in the US and Two Zones in Europe

We've been working to make it possible for you to run Windows or SQL Server in additional locations and to build highly available applications.

You now have the ability to launch EC2 running Windows or SQL Server in the EU-West region, in two separate Availability Zones. You can also launch EC2 running Windows or SQL Server in a second Availability Zone in the US-East region. With the addition of the new European region and the additional US zone, you now have the tools needed to build Windows-based applications that are resilient against the failure of an Availability Zone.
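As a concrete illustration, here is what a zone-specific Windows launch looks like with the present-day boto3 SDK (anachronistic for this 2009 post, which predates it); the AMI ID is a placeholder.

    import boto3

    # Connect to the new EU-West region.
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",   # placeholder Windows AMI ID
        InstanceType="m1.small",  # an instance type of the era
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": "eu-west-1a"},
    )
    print(response["Instances"][0]["InstanceId"])

    # Zone resilience = run a second copy of the application with
    # Placement={"AvailabilityZone": "eu-west-1b"} and fail over to it.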

 

The AWS Management Console has been updated with full support for the EU-West region. After selecting the new region from the handy dropdown (shown at right), you can launch EC2 instances, create, attach and destroy EBS volumes, manage Elastic IP addresses, and more.

 

We've created new Windows AMIs with the French, German, Italian, and Spanish language packages installed. The Console even provides a new Language menu in the quick start list. Once launched, you simply set the locale in the Windows Control Panel. You can find step by step directions for launching AMIs in various languages here.

The popular ElasticFox tool now lets you tag running instances, EBS volumes, and EBS snapshots. The Image and Instance views have been assigned to distinct tabs and you can now specify a binary (non-text) file as instance data at launch time.

While I'm talking about all things European, I should mention two other items that may be of interest to you. First, Amazon CTO Werner Vogels will deliver a keynote at the Cebit conference in Germany later this week. Second, we have an opening in Luxembourg for an AWS Sales Representative.

Report: Privacy issues plague cloud computing

When choosing a cloud computing vendor, it is important to scrutinize the contract's data protection provisions and accurately assess the risks.
 

Before turning to cloud computing applications to conduct business, enterprise executives should think twice about the potential for exposure of corporate secrets or legal liabilities, according to a new World Privacy Forum report. 

The report details the privacy and confidentiality risks that can arise from "sharing or storage by users of their  information on remote servers owned or operated by others and accessed through the internet or other connections." 

Enterprise executives must weigh the risks and benefits of cloud computing and analyze both the provider being used and the information being put in the cloud, Robert Gellman, an independent privacy consultant and author of the report, told SCMagazineUS.com Monday. Some of the most important issues for companies to consider before engaging in cloud computing are a provider's terms of service, as well as the location of, and data restrictions on, information put in the cloud.

But whether such considerations are taken into account or not, cloud computing will become ubiquitous as employees begin demanding that enterprises use it for productivity reasons, said Peter Evans, director, security strategy and technology integration for IBM ISS.

"Enterprises have to realize the new normal is lots of content, people always on, a lot of information being used for a myriad of reasons -- you can't get past change and innovation," Evans said.

Yet the technology has evolved faster than privacy laws, which don't address the unique cloud computing privacy challenges.

"We're using older laws to protect newer business concepts," Evans told SCMagazineUS.com Monday.

This is one reason it's critical for business leaders to read the provider's privacy information and the terms of service. In some cases, providers have the right to read -- and make public -- information that is put in the cloud. Because companies might be storing documents that should not be made public, there are lots of concerns about what can happen to the information, Gellman said.

Also, information stored in the cloud is much more accessible to a private litigant or the government. The reason? Traditionally, if an enterprise has information in its possession that a government wants, the government must come directly to the owner of the information to get it. But if it's in the hands of a third party, the information potentially could be released without the owner's knowledge. In that scenario, the owner of the information wouldn't be able to object to the disclosure, or even know that the information had been released.

The location of the cloud provider is also an important consideration, Gellman said. If, for example, the cloud provider is located in the European Union, the data could be permanently subject to EU laws. Within the U.S., this same issue applies to different states where privacy laws vary.

"A company needs to be very cautious to allow employees to make ad-hoc decisions to use cloud computing." Gellman said. "Just because you have two different branches [in two different states], you shouldn't just put stuff into the cloud without thinking about it."

Putting certain information in the cloud (such as personal information on customers, for example) could result in a violation of a privacy law, Gellman said. For some information, it may not be a big deal, but for other information, a business may be vulnerable.

It may be difficult to determine whether a cloud provider is meeting the security standards needed to protect certain data. If the cloud applications are provided for free, it might be more difficult to get information on the security of the services. If the cloud applications are paid for, an enterprise might be able to negotiate the terms of the agreement to make sure the data will be properly protected.

"I think the IT department needs to talk to the lawyers first and figure out where the vulnerabilities are," Gellman said.

The report did not analyze particular cloud computing providers or give any a "stamp of approval," Gellman explained. This was due both to the enormity of the task and to the fact that it is ultimately a company's job to make that determination itself.

Encrypting data that is put in the cloud might solve many of the data privacy issues. But on the down side, it might make it harder to access the data, Gellman said.
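
As a hedged illustration of that trade-off (my sketch, not something from the report), client-side encryption can be as little as a few lines. Here the Fernet recipe from Python's cryptography package stands in for whatever scheme a company actually chooses, and the file names are placeholders.

from cryptography.fernet import Fernet

# Generate a symmetric key and keep it out of the cloud entirely.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt locally; only the ciphertext ever reaches the provider.
with open('customer_records.csv', 'rb') as src:
    ciphertext = f.encrypt(src.read())
with open('customer_records.enc', 'wb') as dst:
    dst.write(ciphertext)

# The downside Gellman notes: every read now requires downloading the
# blob and calling f.decrypt(ciphertext) with the locally held key.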

"It all depends on who you are, what kind of data you have and what the cons of putting that data in the cloud [are]," Gellman said.

Companies should work with providers that understand the privacy issues of cloud computing, advised Evans. Users should ask providers what sort of security, privacy, and data protection assurances they can provide. Are they PCI certified, for example? Are they encrypting, isolating and separating data? Do they have intrusion prevention mechanisms and authentication controls in place? Are they using logging systems?

"Do the same due diligence on the cloud provider as you do on your own business to make sure you're secure," Evans said.

Appistry and GoGrid Announce Cloud Computing Solution

Appistry and GoGrid have jointly announced a strategy for integrating private and public clouds.
Under the agreement, GoGrid will support the development environment, including the API, of Appistry, a provider of private cloud solutions.

Cloud Computing Infrastructure provider GoGrid and Cloud Application Platform provider Appistry announce the release of Appistry EAF Community Edition within the GoGrid cloudcenter.

GoGrid, the Cloud Computing division of ServePath, LLC, and Appistry today released new tools for developers, architects and administrators designed to ease the pain associated with developing, deploying and managing applications in the Cloud. Appistry's Cloud application platform, named Appistry EAF, helps businesses and enterprises efficiently manage and scale their applications within the GoGrid infrastructure. With this joint solution, larger companies are able to take full advantage of the Cloud's unique value proposition of elastic scalability, solid reliability, automated management and CapEx economies.

Appistry EAF Community Edition 3.9 is now available for Red Hat Enterprise Linux 5.1 users. Additional EAF-enabled GoGrid images will be rolling out in the near future. Appistry EAF Community Edition allows developers, system architects and administrators to take advantage of Appistry's Cloud application platform for free on up to five GoGrid Cloud Server instances. Appistry EAF functionality and benefits include:

* Transparent and instant linear scalability
* Application-level fault tolerance
* Broad support for Cloud-enabling software components
* Adaptive, software-based load balancing
* Fully-distributed, fault tolerant memory cache for objects and data
* Fine-grained, hierarchical security model
* Efficiencies in CapEx and administrator time
* Ease of use

More information on Appistry EAF can be found at: www.appistry.com/products/eaf/index.html

"The GoGrid partnership is part of Appistry's strategy to address the complex challenges enterprises face developing, deploying and managing applications in both public and private Clouds," said Sam Charrington, Appistry vice president of product management and marketing. "End-users demand a platform which sits above the infrastructure and allows enterprises to more easily realize its full promise -- elastic scalability, solid reliability and automated management."

The combination of GoGrid's robust and flexible Cloud Computing infrastructure and Appistry's Cloud application platform enables enterprises to capitalize on the inherent advantages of both technologies. GoGrid leads the Cloud infrastructure space with a full assortment of infrastructure capabilities available in the Cloud, including industry standard and best-practice implementations of Windows Server 2003 and 2008, Microsoft SQL Server, Red Hat Enterprise Linux and CentOS instances, among others, as well as free hardware-based F5 load balancing and hybrid hosting capabilities with Cloud Connect, which is particularly efficient for complex Microsoft SQL Server databases.

"The GoGrid and Appistry partnership clearly demonstrates our commitment to helping businesses optimize their infrastructure to gain the advantages of Cloud Computing," said GoGrid CEO, John Keagy, adding "Companies would be foolish to not optimize their business and technology strategies using the power of Appistry EAF and GoGrid's Cloud infrastructure."

About GoGrid

GoGrid is the leading Cloud Computing and hosted Internet provider, delivering true "Control in the Cloud™" in the form of cloudcenters. GoGrid enables system administrators, developers, IT professionals and SaaS (Software as a Service) vendors to create, deploy, and control load-balanced cloud servers and complex hosted virtual server networks with full root access and administrative server control. GoGrid server instances maintain industry standard specifications, with no requirement to learn and adapt to proprietary standards. Bringing up servers and server networks takes minutes via a unique web control panel or GoGrid's award-winning API. GoGrid delivers portal-controlled servers for Windows Server 2003, Windows Server 2008, SQL Server, ASP.NET, multiple Linux operating systems (Red Hat Enterprise and CentOS) and supports application environments like Ruby on Rails. Free F5 hardware load balancing and other features are included to give users the control of a familiar datacenter environment with the flexibility and immediate scalability of the cloud, a "cloudcenter." GoGrid won the coveted 2008 LinuxWorld Expo's Best of Show award. www.gogrid.com

About ServePath

ServePath, a Microsoft Gold Certified Partner, is the leading managed and dedicated hosted server provider, delivering custom solutions and managed services to businesses that require powerful Internet hosting platforms for their production environments. Thousands of companies worldwide look to ServePath for its reliability, customization, and speed. ServePath has a Keynote-rated A+ network and guarantees uptime with a 10,000% guaranteed™ Service Level Agreement. The employee-owned company has been in business for nine years, operates its own San Francisco data center, and is SAS 70 Type II certified. www.servepath.com

About Appistry

Appistry simplifies cloud computing for the enterprise, opening the door to more agile and scalable IT environments. Appistry's application platform delivers solutions for the complex challenges of building, deploying and managing a wide variety of applications and services for both public and private clouds. Appistry's products are designed specifically for cloud environments, delivering transparent scalability, application-level fault tolerance, and automated management to new and existing applications. Appistry customers include FedEx, GeoEye, Lockheed Martin and Northrop Grumman. For more information about Appistry, please visit www.appistry.com.

The raging dispute about federated provisioning's pros and cons

In short: identity management is set to become increasingly important in the SaaS business.

Security: Identity Management Alert, by Dave Kearns, Network World, 02/18/2009

Federated provisioning is the topic, and the raging dispute about its pros and cons is today's subject. It started with the comment by Daniel Wakeman (CIO, Educational Testing Service), who said "It's a 'huge shortcoming' that SaaS [Software-as-a-Service] vendors do not embrace 'federated identity management' standards allowing centralized identification and validation of users via a single sign-on process…"

Quest's Jackson Shaw jumped on this remark: "Wakeman has hit the nail on the head. SaaS will only complicate security, audit and compliance if it doesn't effectively address identity management. As he points out, supporting federated identity management would go a long way to addressing those issues..."

Enterprise Architect (for The Hartford Financial Services) James McGovern, the Burton Group's Mark Diodati and Ian Glazer (also of the Burton Group) jumped in to comment. Ian's post drew fire from Oracle's Nishant Kaushik and the battle was on!

You see, Ian started life in identity management with Access360 – one of the original provisioning vendors (swallowed up by IBM in 2002). His point: "…there really ought not to be a concept of federated provisioning. Provisioning an application in the data center must be the same as provisioning an application in the cloud." That's a concept I can get behind.

Nishant's credentials are no less impressive, though: he came to Oracle when it acquired Thor Technologies, where Kaushik had been Product Architect and Lead Technologist for Xellerate, Thor's provisioning product. He brings up a "just-in-time" provisioning situation in which a federation server (using SAML) and a provisioning server (using SPML) would need to interoperate, and there simply are no standards for that today. Neither SPML nor SAML, on its own, could handle the transaction.
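
To make the gap concrete, here is a hedged sketch (mine, not the columnists') of a minimal SPML 2.0 add request built in Python; the target ID and attribute names are invented placeholders. A federation server speaking SAML has no standard way to trigger a request like this at a SaaS provider.

import xml.etree.ElementTree as ET

SPML_NS = 'urn:oasis:names:tc:SPML:2:0'   # SPML 2.0 core namespace
ET.register_namespace('spml', SPML_NS)

# A minimal addRequest: create an account on a (placeholder) target.
add_request = ET.Element('{%s}addRequest' % SPML_NS, {'targetID': 'crm-app'})
data = ET.SubElement(add_request, '{%s}data' % SPML_NS)
ET.SubElement(data, 'username').text = 'jdoe'
ET.SubElement(data, 'email').text = 'jdoe@example.com'

print(ET.tostring(add_request).decode())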

But, as Ian bluntly puts it, "The point was made that SaaS apps lack a standards-based provisioning interface, an SPML interface. The fact is the vast majority of applications, SaaS or not, lack a standards-based provisioning interface and this makes dealing with them very much the same."

So it appears to be a case of violent disagreement. Yet we're still not much closer to automated provisioning of clients, customers, vendors and partners, are we? And then there's de-provisioning – the removal of access for users. Glazer says "You don't want that fired sales guy walking away with your customer list any more than you want him walking out the door with your pricing information. To that end, there should be no reason why de-provisioning from an application like Salesforce.com is any harder than de-provisioning from LDAP." But, evidently, it is. Is there a way to solve that problem? Tune in next time…
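
For a sense of why Glazer's comparison bites, here is a hedged sketch (not from the column) of LDAP de-provisioning using the Python ldap3 package; the server, credentials, and DNs are placeholders. It is a single standard delete operation, with no SaaS-side equivalent to match it.

from ldap3 import Server, Connection

server = Server('ldap://directory.example.com')   # placeholder host
conn = Connection(server, user='cn=admin,dc=example,dc=com',
                  password='secret', auto_bind=True)

# Removing the departed salesperson's entry revokes directory access.
conn.delete('uid=jdoe,ou=people,dc=example,dc=com')
conn.unbind()

# Most SaaS applications offer no standards-based counterpart; each one
# needs its own proprietary API client to achieve the same effect.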

Feb 25, 2009 09:00 ET

Appirio Expands Operations in Japan, Helps Lead Japan Post Cloud Computing Implementation

Experienced Appirio Team Already Engaged With Largest International Force.com Implementation

TOKYO and SAN MATEO, CA--(Marketwire - February 25, 2009) - Appirio, a products and professional services company accelerating the adoption of on-demand in the enterprise, announced its first international expansion into Japan. As part of this expansion, Appirio announced that it will be working closely with salesforce.com to lead the Force.com platform implementation at Japan Post Network Co., Ltd. Japan Post Network Co., Ltd. offers postal, banking and life insurance services and is the largest employer in Japan, serving over 50 million individual customers and employing over 100,000 people. It is also the largest Force.com cloud computing platform implementation in the world.

"Japan was both a strategic and natural fit for Appirio as we expand internationally," said Chris Barbin, CEO of Appirio. "We have a very experienced management and technical consulting team already on the ground and are working closely in the region with two early innovators in cloud computing -- Japan Post and salesforce.com.SaaS is seeing a lot of growth in Japan, perhaps partly due to the region's historical reliance on custom-built applications which has reduced the entrenchment of traditional off-the-shelf software. Cloud computing platforms like Force.com are a great opportunity to update and extend those applications into new areas."

According to a September 2008 user survey published by industry analyst firm Gartner, Inc., 60 percent of respondents in Asia Pacific said they plan to increase their investment in SaaS subscriptions, integration and consulting in the next two years. More than 50 percent of Asia Pacific respondents in the same survey said they were currently transitioning from a current on-premise solution to a SaaS solution.*

"We see huge potential for cloud computing and the Force.com platform in our organization, and Appirio's proven success with enterprises and insight into the platform's capabilities will help us realize that potential," said Akira Iwasaki, CIO of Japan Post Network Co., Ltd. "Appirio has already completed a strategic analysis of our existing applications infrastructure and they are helping us plan our ongoing migration to the Force.com platform. That's just the beginning."

"Appirio has been a strong partner of salesforce.com over the last few years, working on some of our largest and most sophisticated enterprise implementations throughout the rest of the world," said Carl Schachter, President and COO, Asia Pacific and Japan, salesforce.com. "This, along with Appirio's knowledge, regional experience and willingness to invest in building out its Japanese team were among the reasons we selected them as a key partner in Japan."

Appirio's Japan office is being led by Jason Park, managing director of Japanese operations at Appirio. Prior to this, Mr. Park led Appirio's West Coast U.S. sales efforts, growing revenue in that region by 100 percent in 2008. Mr. Park has extensive experience in Japan and the greater Asia Pacific region. Prior to joining Appirio, he was a sales and consulting executive at Borland Japan and served in a number of management positions at IBM (Asia Pacific and Japan).

About Appirio

Appirio (www.appirio.com) provides products and services that help enterprises accelerate their adoption of on-demand. Appirio has a proven track record of delivering business value to customers by implementing mission-critical Software-as-a-Service (SaaS) solutions based on platforms such as Salesforce and Google Apps, and developing innovative applications that connect and extend today's leading on-demand platforms. Appirio was founded in 2006, is the fastest growing partner of salesforce.com and Google, and is backed by Sequoia Capital and GGV Capital.

* Gartner, Inc., "User Survey Analysis: Software as a Service, Enterprise Application Markets, Worldwide, 2008," by Sharon Mertz et al., October 2008.