Saturday, June 20, 2009

Your Thoughts: How mature are cloud computing services?

A classification of cloud computing vendors.

Enterprise IT infrastructure & operations professionals have many cloud computing technologies to choose from today, and new solutions seem to appear all the time. What are all these technologies? How do you categorize them? Which are mature and which need a lot of work?

Forrester is kicking off a TechRadar on the topic and wants your input. A Forrester TechRadar aims to provide clarity about the types of technologies in a given category, plot their maturity today and the pace at which each is improving, and gauge the level of business value each type of technology will bring to enterprise IT.

Forrester defines cloud computing as: a standardized IT capability (services, software, or infrastructure) delivered via the Internet in a pay-per-use and self-service way. As a starting point, we have excluded Software as a Service (as Liz Herbert did a great TechRadar on SaaS already) and have carved up the rest of the cloud services into the technology categories below. Do we have them right? Are we missing any? If you have experience with any of the products in these categories (or others we didn't mention), we want to hear your thoughts about them. How ready do you think these services are for enterprise consumption? Are they maturing quickly, or is this a wait-and-see area?

Drop us a comment below or contact me directly at jstaten@forrester.com or on Twitter at Staten7. And thanks for your contributions to Forrester research.

Cloud computing technologies to be included in this report are:

Technology category, subcategory, and examples (not exhaustive):

1. Infrastructure-as-a-Service platforms: Amazon Web Services EC2, The Rackspace Cloud, GoGrid
2. Software Platform-as-a-Service: Windows Azure, Google App Engine, Force.com
3. Cloud Infrastructure Services (infrastructure IT services delivered from the cloud)
  3a. Storage-as-a-Service: Nirvanix, Amazon S3
  3b. Disaster Recovery-as-a-Service: SunGard Virtual Server Replication
  3c. Backup-as-a-Service: Iron Mountain LiveVault, i365 EVault, IBM Business Continuity and Resiliency Services
4. Cloud Application Services (application services delivered from the cloud)
  4a. Database-as-a-Service: Google BigTable, Amazon SimpleDB, Microsoft SQL Data Services
  4b. Cloud billing services: Google Payment, Amazon DevPay, Zuora Z-Commerce
  4c. Integration-as-a-Service: Amazon Simple Queue Service, Boomi, Cast Iron, Informatica, Linxter, Online MQ, OpSource Connect, Pervasive
  4d. Business Process Management-as-a-Service: Appian Anywhere, Intensil, Skemma
5. Cloud Management Software: Appistry, CloudSwitch, Elastra, RightScale
6. Cloud Labs: Citrix C3 Lab, Electric Cloud, Skytap, Surgient Cloud
7. Desktop-as-a-Service: Desktone, MokaFive, Simtone

HP Takes Aim at Cisco with Alcatel-Lucent Alliance

HP announced a strengthened partnership with Alcatel-Lucent, aimed at countering Cisco in the intensifying competition over the data center business.

HP's 10-year alliance with Alcatel-Lucent is the latest move by the tech giant in its growing competition with Cisco Systems in the data center space. Cisco's move into the data center put a strain on its partnership with HP, which has been looking to build up its own networking capabilities through its ProCurve business. The Alcatel-Lucent alliance will add to what HP can do in networking. It also will help HP build up its cloud computing capabilities, which will let HP keep in stride with such vendors as IBM and Cisco in that space.

Hewlett-Packard's 10-year alliance with telecommunications equipment maker Alcatel-Lucent gives the technology giant another weapon in its data center competition against Cisco Systems.

It's also another step for HP as it tries to build up its cloud computing capabilities.

HP and Alcatel-Lucent announced their alliance June 18, a move that HP officials say enables customers of both companies to take advantage of the ongoing convergence of IT and telecommunications. HP officials expect that the partnership—once the definitive agreement has been executed—could generate billions of dollars in revenues over those 10 years.

Though neither company offered significant details of the deal, both said they will jointly market their products to help businesses move their telecommunications networks onto more converged infrastructures.

HP and Alcatel-Lucent also will offer services around the joint offerings, and Alcatel-Lucent's products in such areas as IP telephony, unified communications, mobility, security and contact centers will be integrated with HP's technology offerings. Those integrated products will be offered via resellers or as services.

"We expect customers will be able to create new business opportunities and greater efficiencies from this alliance," Mark Hurd, HP chairman and CEO, said in a statement. "By combining our deep expertise in IT and communications, HP and Alcatel-Lucent will help customers transform their technology needs into a competitive edge."

One analyst said the alliance was a smart move by HP, which is finding itself in a heated competition with Cisco in the data center, and also needs to catch up to Cisco and IBM in the area of cloud computing.

Cisco, long known for its networking capabilities, made a significant step into the data center when it unveiled its UCS (Unified Computing System), an all-in-one offering that includes Cisco-branded blade servers and networking technology as well as capabilities from such partners as VMware, EMC and Intel.

The move also alienated other partners, in particular HP, which shot back soon afterward with its own all-in-one data center offering, the BladeSystem Matrix.

"Cisco really took the gloves off with HP with the UCS, and that was intended," said James Staten, an analyst with Forrester Research.

The growth of converged data centers and cloud computing is putting a greater premium on networking, and while HP has been investing in its ProCurve networking products for a while, it can't match up with what Cisco has to offer, Staten said.

Teaming up with Alcatel-Lucent expands those networking capabilities, Staten said.

"This [alliance] is much more a hedge against Cisco than a cloud play," he said, though it also will benefit HP in that arena.

HP's networking capabilities lie primarily within the data center, with switches that enable communications such as those between racks of servers or between PCs and remote offices. What it didn't have before the Alcatel-Lucent partnership were core switches that interconnect and aggregate networks. Cisco has always had both, and such switches are increasingly important as data centers evolve and cloud computing becomes more popular.

It also means that HP can lessen its reliance on Cisco as a partner, Staten said. Cisco is too big of a player in the networking world for HP to end its partnership following the UCS move, but HP wanted to stem the growth of that partnership, and now can do that to some extent, Staten said.

On the cloud computing side, HP needs to find a way to catch up to what Cisco and IBM can offer. IBM is making a strong play in that arena. In February, IBM created a cloud computing division, and it further expanded on its cloud strategy June 16.

One advantage IBM has in this area is its Global Services unit, which offers not only services to customers but also hosting capabilities. It's not a huge stretch to go from hosting to cloud computing, Staten said.

Right now, HP only has the services. HP needs to build up its offerings, and the Alcatel-Lucent alliance can help on the networking side. Enterprises are increasingly going to be looking for ways to have not only their own private cloud environments but also the ability to use public clouds, and to move services between the two. That is going to take significant networking capabilities, he said.

Skytap Collaborates With HP to Provide Enterprise Testing Capabilities in the Cloud

Skytap, a provider of application development and test environments in the cloud computing market, announced a partnership with HP under which it will offer its services in combination with HP's Business Technology Optimization (BTO) solutions.

SEATTLE, WA — 06/17/09 — Skytap, Inc., a leading provider of cloud-based IT labs, today announced a collaboration with HP to provide cloud testing capabilities that leverage HP's Business Technology Optimization (BTO) solutions. Skytap also announced integration with HP Quality Center and HP LoadRunner software to enable existing customers to more easily leverage the cloud for enterprise software testing.

"More enterprises are turning to cloud computing to reduce business costs and provide greater flexibility and scalability of services throughout the enterprise," said Scott Roza, CEO of Skytap. "Skytap provides a cost-effective, agile solution for testing the functionality and performance of applications using a cloud-based IT lab. Our collaboration with HP provides customers with a best of breed solution for testing that leverages HP's BTO offerings and Skytap's cloud platform."

Skytap's integration with HP LoadRunner supports the dynamic scale out of load generators in the cloud to cost-effectively simulate hundreds or thousands of virtual users, and allows customers to test the performance and scalability of applications and systems before they are deployed. Skytap also announced integration with HP Quality Center to ship in Q3 2009, which enables software teams to set up, manage, and tear down Skytap test lab environments directly from the HP Quality Center interface. This dramatically improves the efficiency of the software testing process and enables software teams to remove lab provisioning bottlenecks by deploying test environments dynamically in the cloud when in-house resources are unavailable.

Through the collaboration, customers and partners can utilize Skytap's cloud-based lab for pre-production testing in combination with the HP Cloud Assure offering, which is an HP Software-as-a-Service (SaaS) offering designed to help businesses safely and effectively adopt cloud-based services. Customers can now utilize HP Application Security Center and HP Performance Center on SaaS to test applications provisioned in Skytap's cloud-based labs for performance and security before deploying to production. They can also use HP Business Availability Center on SaaS to monitor the applications in production.

"Customers can reduce costs by gaining faster time to value with our SaaS for BTO offerings. These are designed to ensure the quality, performance and availability of cloud services," said Tim Van Ash, director of products for HP Software-as-a-Service, Software & Solutions, HP. "The combination of Skytap and HP gives customers access to a cloud-based IT lab, which enables them to further accelerate their testing cycles and significantly reduce costs."

"MindTree applauds these integrations as they allow us to provide accelerated validation capabilities in the cloud environment," said Subodh Parulekar, co-head - independent testing at MindTree, a Skytap and HP partner. "This significantly reduces complexity for our customers in rolling out reliable, scalable, and secure business applications."



Friday, June 19, 2009

IDC: System Management going SaaS

An IDC report describes the recent growth of a business model in which the systems and operations management of enterprises' on-site data centers and IT infrastructure is delivered via SaaS.

Several companies already have a track record in this area, and the report predicts the market is likely to grow.

"Worldwide System Management SaaS 2009 Vendor Analysis: Economic Crisis Creates Opportunities" is an excellent recent report by IDC.

Software as a Service started on the consumer web and then expanded into end-user-oriented business and collaboration sites (Salesforce.com, Google Apps). The question is whether the model can extend from there to administrative tools, so that IT staff can start using cloud services to manage their local on-premises systems.

As paradoxical as it sounds, this actually makes a lot of sense, because many small and medium-sized businesses simply cannot afford to maintain all the infrastructure required to run these system management solutions (servers, backups, redundancy, databases, reporting engines, patching for all of that, and so on). The SaaS delivery model offers a more cost-effective approach and the ability to resell the product as a service via service providers.

What's more, according to an IDC survey quoted in the report, most enterprise customers either approve of the SaaS model for system management or are neutral toward it – which means the model can grow beyond the SMB space.

IDC also surveyed a bunch of existing system management vendors to see their SaaS roadmap:

  • CA – which created their On-Demand Business Unit and is already offering SaaS solutions for SMB disaster recovery, Project & Portfolio Management (PPM), governance, risk and compliance (GRC) service, and a network monitoring solution.
  • HP – already boasting a big portfolio of SaaS solutions: ranging from project management to configuration discovery and management (CMDB). The company claims to have some 600 active SaaS customers.
  • IBM – offering SaaS products from its Micromuse (event monitoring) and MRO (asset management and service desk) acquisitions, and trying to adapt these and other technologies for private/on-premises cloud-like systems and for the public cloud model (mostly for its own services).
  • Microsoft – announced online IT management and security subscription services for 2010.
  • Symantec – providing SMB-oriented online backup Symantec Protection Network.
  • BMC Software – so far only supplying products for service providers' operations but expected to enter the market.

And a few entrants:

  • NimSoft – working on a SaaS remote BSM monitoring and reporting service.
  • Kaseya – offering managed service providers (MSPs) automated managed services for hardware and software discovery, inventory, patch management, user state management, monitoring, and help desk integration.
  • InteQ – providing online ITIL-based Service Desk solution.

And finally the report has IDC's predictions on which system management tools will get to the cloud first and which will probably only get accepted later in the adoption cycle.

All in all, this is a great report and a highly recommended read if you have an extra $3,500 or are an IDC subscriber. Check it out here.

Oracle Coherence vs Gigaspaces XAP

An article comparing two grid computing solutions: Oracle's Coherence and GigaSpaces' XAP. The assessment: Coherence is better suited to relatively read-only web applications, while GigaSpaces XAP is better suited to applications that demand flexibility, such as OLTP applications.

I've been fortunate enough to work (read: get to play) with two leading data/computing grid solutions in commercial projects over the last year — so here's a short summary of differences between Oracle Coherence and GigaSpaces XAP.

If this topic interests you, you might also be interested in attending a free one-day conference on cloud and grid technologies in London on the 9th of July (see gamingscalability.org for more information). At the event I'll present an experience report from one of the projects I've mentioned here and go into much more detail on what we got out of it.

Data Cache

Both systems support deploying a data grid on multiple machines and automatically manage routing, fault-tolerance and fail-over. Both grids support passive data caches and sending code to be executed on the node where a particular object resides rather than pulling the object from the cache and then running a command. Both grids support querying objects by their properties.
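To make "sending code to the node where an object resides" concrete, here is a minimal conceptual sketch in Python. It is not the Coherence or GigaSpaces API; the class and method names are invented for illustration. The point is that only the function and its result cross the wire, not the object itself.

```python
# Toy partitioned store: apply a function at the partition that owns the key,
# instead of pulling the value out, modifying it, and putting it back.

class Partition:
    def __init__(self):
        self.store = {}

    def invoke(self, key, fn):
        # Run fn against the locally held value and keep the result local.
        self.store[key] = fn(self.store.get(key))
        return self.store[key]

class ToyGrid:
    def __init__(self, n_partitions=4):
        self.partitions = [Partition() for _ in range(n_partitions)]

    def _route(self, key):
        # Hash-based routing decides which partition owns a key.
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key, value):
        self._route(key).store[key] = value

    def invoke(self, key, fn):
        # Ship the processor to the owning partition (get/modify/put avoided).
        return self._route(key).invoke(key, fn)

grid = ToyGrid()
grid.put("counter", 10)
grid.invoke("counter", lambda v: (v or 0) + 5)  # runs where the value lives
```

In the real products the same idea appears as entry processors (Coherence) and task execution on the space (GigaSpaces); the win is avoiding a round trip and a race between read and write.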

Both grids support event-driven notifications when objects are added or removed from the space. Coherence has notifications that go out to external clients (UI apps, for example) and supports continuous queries that will send updates without polling from the client. Gigaspaces only has notifications internally in the grid, meaning that you can set up a processor to receive events about new, updated and removed objects matched by a particular template.

Both systems have concepts of local caches for remote data partitions which automatically update when the remote data changes (Coherence calls this "near cache"). Coherence supports lots of caching topologies which can be flexibly configured, but Gigaspaces only supports a local partition, global space and local cache of the global space. Local caches in Gigaspaces are really for read-only access (reference data).

Both grids support .NET/Java interop to some level. Coherence does this by requiring you to specify a serializer implementation for your class on both ends, in which you basically just need to specify the order in which fields are serialized and deserialized. I haven't tried out the Gigaspaces solution for interop. According to the documentation, if you follow a naming convention in both places (or override it with attributes and annotations), the grid will transform POJOs to POCOs and back fine. Again, without trying this myself I cannot actually tell you if it works or not.
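The order-based serializer idea can be sketched as follows. This is a conceptual illustration, not the actual Coherence or GigaSpaces wire format: both sides agree on the order and types of fields, so no field names travel on the wire, and a Java writer and a .NET reader can interoperate as long as they apply the same agreed order.

```python
# Toy order-based portable serialization: the field order below is the
# "contract" both sides must share. The format itself is invented here.
import struct

FIELDS = [("user_id", "i"), ("balance", "d")]  # agreed order: int, then double

def serialize(obj):
    return b"".join(struct.pack(">" + fmt, obj[name]) for name, fmt in FIELDS)

def deserialize(data):
    obj, offset = {}, 0
    for name, fmt in FIELDS:
        size = struct.calcsize(fmt)
        (obj[name],) = struct.unpack(">" + fmt, data[offset:offset + size])
        offset += size
    return obj

wire = serialize({"user_id": 42, "balance": 99.5})
assert deserialize(wire) == {"user_id": 42, "balance": 99.5}
```

The fragility the article hints at is visible here too: if the two ends disagree on `FIELDS`, deserialization silently produces garbage, which is why both grids make you declare the mapping explicitly on each side.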

Processing

Gigaspaces doesn't just allow you to send code to the objects; it is actually designed around an event-driven processing model where objects are sent to processing code and the same processing code runs in each data partition. Events for processing are specified by example, matching templates on classes and non-null properties, and Gigaspaces manages thread pools and other execution aspects for you automatically. Events can be triggered by the state of entities in the grid, or by commands arriving to be executed on the grid. It also has a fully transactional processing model, so if an exception gets thrown everything rolls back and another processor will pick up the command from the space again. It integrates with Spring transactions, so transactional processing development is really easy.
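The match-by-template dispatch described above can be sketched in a few lines. This is a conceptual toy, not the GigaSpaces API: a processor registers a template of required property values (with `None` meaning "any"), and each object written to the space is dispatched to every processor whose template matches.

```python
# Toy template-matched event dispatch: processors subscribe by example.

class ToySpace:
    def __init__(self):
        self.processors = []  # list of (template, callback) pairs

    def register(self, template, callback):
        self.processors.append((template, callback))

    def write(self, obj):
        # Dispatch to every processor whose non-None template fields match.
        for template, callback in self.processors:
            if all(obj.get(k) == v for k, v in template.items() if v is not None):
                callback(obj)

space = ToySpace()
handled = []
# This processor only fires for orders in the "new" state.
space.register({"type": "order", "state": "new"}, handled.append)
space.write({"type": "order", "state": "new", "id": 1})   # matches
space.write({"type": "order", "state": "done", "id": 2})  # ignored
```

The real system adds what the toy omits: the matching runs inside each partition against locally stored objects, under a transaction, with managed thread pools.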

Coherence has a reference command pattern implementation, not part of the basic deployment but available as a library in the Coherence Incubator project. It allows you to send commands as data grid objects to entities in the data grid, but it cannot directly trigger events based on entity properties in the grid. Coherence also has very limited support for transactions – the JCA container gives you a last-logging-resource simulation but no real transactional guarantees.

Deployment

Gigaspaces is designed to replace application servers, so it has a nice deployment system that will automatically ship your application code across the network to the relevant nodes, along with cloud deployment scripts that will start up machines on EC2 as well. Until recently the scripts were a bit unreliable, but version 6.6.4 fixed that. Coherence does clustering itself, but it was not intended to replace application server functionality; when it comes to deployment, you have to do it yourself. There is a JCA connector for application servers, which I've tried with WebLogic (version 3.3 of Coherence finally works out of the box with this), but there are lots of reasons why you would not want to run the whole grid inside WebLogic clusters or the like, and would instead run it as a separate cluster.

On the other hand, Coherence has a pluggable serialization mechanism (POF) which would theoretically allow you to run multiple versions of the same class in the grid and hot-deploy a new version of the application on nodes incrementally and without downtime (I haven't tried this myself yet, so I don't know whether it really works like that). Gigaspaces applications (processing units) are split into two parts – a shared library distributed to all the applications, and the processing-unit-specific code. Shared libraries cannot be redeployed after the grid starts, so hot deployment of processing-unit-specific code is fine, but not of any data classes actually stored in the grid. This is apparently going to change in version 7. Until then, the best bet for hot deployment on Gigaspaces is to split the data format and business logic into separate classes and JARs. I'm not too happy about this, but once the class loading changes we might go back to nice object design.

Scaling

Both grids seem to scale enough to deal with the problems I am fighting with (on the order of 10 machines in a grid; I haven't tried them on deployments of hundreds). However, Coherence scales dynamically — you can add more nodes to the cluster on the fly, without stopping the application. This allows you to scale up and down on demand. Gigaspaces deploys data to a fixed number of partitions, fixed for the lifetime of the data space. If a machine goes down, a backup partition will take over, and on clouds you can even have a new machine instance started up for you automatically, but you cannot increase or decrease the number of partitions after the grid has started.
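A quick way to see why a fixed partition count is hard to change after the fact: with simple hash-modulo routing, changing the partition count remaps most keys to new owners, so the grid cannot just grow in place. The sketch below is illustrative only; the real products use their own routing schemes.

```python
# With modulo routing, growing from 4 to 5 partitions moves most keys:
# a key stays put only when hash % 4 == hash % 5 (hash mod 20 in 0..3),
# i.e. about 20% of keys; the other ~80% would have to be re-homed.

def owner(key, n_partitions):
    return hash(key) % n_partitions

keys = [f"order-{i}" for i in range(1000)]
moved = sum(owner(k, 4) != owner(k, 5) for k in keys)
# moved is typically around 800 of the 1000 keys
```

This is the pressure behind consistent-hashing designs, and it explains the trade-off in the paragraph above: Coherence pays the rebalancing cost to allow elastic growth, while Gigaspaces fixes the partition count up front.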

Persistency

Both grids have read-through and write-through support. Gigaspaces comes with a Hibernate-based asynchronous persistence system with mirroring (allowing you to move database writes to a separate node) out of the box. Although the idea is nice, in its current incarnation it has quite a few rough edges, so we ended up rolling our own. For real read-through and write-through to work on Coherence, you need to ensure that you have configured and deployed the persistence code to all the nodes, which might be a bit of a challenge if part of the grid runs in an application server and part outside of it (especially with non-Coherence clients). Since Gigaspaces handles deployment for you, it makes configurations like these a bit easier to run. Gigaspaces also has the concept of an initial load that pre-populates the memory space with objects from the database, and it supports on-demand eviction from the grid without deleting objects in the persistent store.
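For readers new to the terms, read-through and write-through can be captured in a tiny sketch. This is conceptual only; both products wire the same behavior up through their own pluggable loader/store hooks, and the class below is invented for illustration.

```python
# Toy read-through / write-through cache in front of a backing store.

class WriteThroughCache:
    def __init__(self, backing_store):
        self.cache = {}
        self.db = backing_store          # a dict stands in for the database

    def get(self, key):
        if key not in self.cache:        # read-through: load on cache miss
            self.cache[key] = self.db.get(key)
        return self.cache[key]

    def put(self, key, value):
        self.cache[key] = value
        self.db[key] = value             # write-through: store updated too

db = {"a": 1}
grid = WriteThroughCache(db)
grid.get("a")      # miss -> loaded from the store
grid.put("b", 2)   # write goes straight through to the store
```

The asynchronous "mirror" variant mentioned above differs only in that the `self.db[key] = value` step is queued and applied on a separate node, trading durability lag for write latency.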

So when should you use what and why?

There is no clear winner in this comparison because the two products are suited to different problems. Gigaspaces is a heavyweight replacement for application servers and, in my view, best suited to distributed transactional processing. Its processing model is much more flexible than the one in Coherence and has more features, not least proper transaction support. Coherence seems to be a much better solution for passive, read-mostly data grids. It can grow and shrink dynamically, supports much more flexible topologies, and has more powerful libraries for client applications.

Cloudera raises $6M more for serious data processing

Cloudera, one of the vendors developing cloud computing services based on Hadoop, has received a $6M investment. Cloudera was among the first vendors to commercialize Hadoop, which has also recently begun to be supported on Amazon Web Services.

Cloudera, a startup that helps companies process large amounts of data using an open source platform called Hadoop, has raised $6 million in a second round of funding.

The San Francisco company has an impressive founding team, including high-level folks from Facebook, Google, and Yahoo. Previous backers include Accel Partners, former VMware chief executive Diane Greene, former MySQL chief executive Marten Mickos, and Facebook chief financial officer Gideon Yu. Cloudera CEO Mike Olson says the eight-month-old startup has "tens of customers," and is seeing growing interest in Hadoop, which is a relatively new technology. That interest comes from a wide range of markets, Olson says — web and mobile advertising, financial services, bioinformatics, government, and more.

Cloudera was the first company to offer a commercial distribution of and services for Hadoop, but recently Amazon also started selling infrastructure for Hadoop processing through a service called Amazon Elastic MapReduce. Cloudera isn't competing directly with Amazon, since the startup isn't selling infrastructure, and in fact it lets customers use Amazon.

The company announced its $5 million first round in March. Why is it raising more money so soon? Olson says the first round actually closed back in October; Cloudera didn't need to raise money now, but it seemed like a good opportunity to accelerate the company's growth, he says. The new round was led by Greylock Partners, with participation from Accel.

Google and Salesforce.com Join Clouds

Google and Salesforce.com made a joint announcement revealing an integration that lets Salesforce.com data be accessed from Google App Engine.

We can expect more moves like this, in which cloud computing businesses provide server-side access between each other's platforms.

Google and Salesforce.com said today at the Google I/O Developer Conference that their platforms as a service will talk with one another. Using the libraries provided by Force.com for Google App Engine, developers can now access the data stored in the Salesforce.com cloud from inside Google's App Engine. This is a powerful vote of confidence for Google's App Engine, because it allows developers building applications on Google's platform to access the Force.com platform that enterprise customers are comfortable using.

Google is trying to use App Engine as a way to draw corporate users to its other online products such as Google Docs or Enterprise search. In addition to being a platform where any developer can build scalable applications, App Engine is one way for corporate IT departments to easily customize the other Google products — something large corporate users could be keen on doing. Now that App Engine can access information stored in Force.com (which can include a company's Salesforce.com data), internal IT departments can add even more whiz-bang features to the programs they build on App Engine. It also marks the beginning of a world where multiple clouds talk to one another, and management of information between those clouds becomes more important.

App Engine competes against Microsoft's Azure and Rackspace's Mosso, rather than the basic infrastructure as a service offered by Amazon's Web Services.

Amazon to Open Source Web Services API's

An analysis of Amazon's rumored plan to open up its own APIs.

Publishing the APIs would further expand the market for EC2 and S3, which are already de facto standards.

It would also encourage the construction of private cloud environments and could well trigger broader adoption of AWS in the enterprise market. For example, it would likely accelerate development of Sun's Open Cloud Platform and the Eucalyptus project.

As an extension of this concept, a Universal EC2 API adapter is being discussed. Conceptually very similar to ODBC, this would take the form of a plug-in library for various platforms, providing an API completely independent of each platform's own interfaces. With such a plug-in installed, EC2 applications could run in any environment.

I usually try to avoid posting rumors, but this one is particularly interesting. I first heard about it a few weeks back but recently had independent confirmation. Word is Amazon's legal team is currently "investigating" open sourcing its various web services APIs, including EC2, S3, etc. (The rumor has not been officially confirmed by Amazon, but my sources are usually pretty good.)

If true, this move makes a lot of sense for a number of reasons. First and foremost, it would help foster the adoption of Amazon's APIs, which are already the de facto standards used by hundreds of thousands of AWS customers around the globe, thus solidifying Amazon's position as the market leader.

By giving its stamp of approval, Amazon would in a sense be officially allowing other players to embrace the interface methods while keeping the actual implementation (its secret sauce) a secret. If anything, this may really help Amazon win over enterprise customers by enabling an ecosystem of compatible "private cloud" products and services that could move workloads seamlessly between Amazon's public cloud and existing data center infrastructure.

This would also continue the momentum started by a number of competitors/partners who have begun adopting the various AWS APIs, including Sun Microsystems with its Open Cloud Platform and the EUCALYPTUS project.

From a legal standpoint, this would help negate some of the concerns around API liability. Amazon is known to have an extensive patent portfolio and in the past has not been afraid to enforce it. A clear policy regarding the use of its APIs would certainly help companies that until now have been reluctant to adopt them.

Lastly, this provides the opportunity for an ecosystem of API-driven applications to emerge (EUCALYPTUS is a perfect example). Another possible opportunity I wrote about a while back is the creation of a Universal EC2 API adapter (UEC2) that could plug into your existing infrastructure tools and is completely platform agnostic.

At the heart of this concept is a universal EC2 abstraction, similar to ODBC (a platform-independent database abstraction layer). Like ODBC, a user could install a specific EC2 API implementation, through which a lightweight EC2 API daemon (think libvirt) communicates with traditional virtual infrastructure platforms such as VMware, Xen, and Hyper-V using a standardized EC2 API. The user can then have their EC2-specific applications communicate directly with any infrastructure through this EC2 adapter, which relays results back and forth between the various infrastructure platforms and APIs. Maybe it's time for me to get moving on this concept.
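The ODBC-like adapter idea sketches naturally as a driver registry behind a single EC2-style front end. Everything here is hypothetical: the driver classes, the `UEC2Adapter` name, and the method signatures are invented to illustrate the shape of the proposal, not any real library; a real adapter would translate calls into actual VMware, Xen, or Hyper-V management APIs.

```python
# Toy UEC2: one EC2-style call surface, pluggable per-platform drivers.

class XenDriver:                        # hypothetical platform driver
    def start_instance(self, image_id):
        return f"xen-vm-for-{image_id}"

class VMwareDriver:                     # hypothetical platform driver
    def start_instance(self, image_id):
        return f"vmware-vm-for-{image_id}"

class UEC2Adapter:
    """Presents an EC2-style RunInstances call regardless of back end."""
    drivers = {"xen": XenDriver, "vmware": VMwareDriver}

    def __init__(self, platform):
        self.driver = self.drivers[platform]()  # pick the installed driver

    def run_instances(self, image_id):
        # Same EC2-style call, relayed to whichever platform sits underneath.
        return self.driver.start_instance(image_id)

adapter = UEC2Adapter("xen")
adapter.run_instances("ami-123")  # EC2-style request, Xen underneath
```

Just as an ODBC application never sees which database driver answers it, an EC2-specific application written against this surface would never see which hypervisor actually started the machine.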

HP Helps Verizon Business Build Best-in-class Cloud Computing Solution for Medium- to Large-sized Companies

HP has formed a partnership with Verizon, one of the major network carriers, to launch a cloud computing service for enterprises under the name Computing as a Service (CaaS). HP will provide a range of operations software and services, particularly around automation.

AT&T has already been offering a similar service, Synaptic Hosting, since August of last year.

HP (NYSE: HPQ) today announced its collaboration with Verizon Business, a unit of Verizon Communications, to bring to market the company's new Computing as a Service (CaaS) solution, offering enterprises a flexible, secure and cost-effective way to manage IT resources.

HP is providing Verizon Business with a combination of professional services expertise, enterprise server hardware and automation software. HP's integrated support provides Verizon Business the foundation to deliver one of the industry's most comprehensive cloud computing solutions.

"Our new CaaS solution offers enterprises an unbeatable combination of flexibility, security and control," said Nancy Gofus, senior vice president, Global Product Development, Verizon. "HP's industry leadership and strategy around 'everything-as-a-service' is enabling us to deliver what we believe is truly a best-in-class cloud solution."

HP Software & Solutions (HPSS) Professional Services provided the expert domain knowledge to support Verizon Business' vision of the CaaS architecture. HPSS Professional Services assisted with the design and implementation of Verizon CaaS.

Based on Verizon Business' requirements, HP's offering included HP Business Service Automation software and HP enterprise servers and network infrastructure products. The complete solution delivers a highly resilient, on-demand computing infrastructure that enables Verizon Business' large and midsize customers to obtain flexible computing resources required to meet the changing needs of their businesses.

The Verizon CaaS solution offers customers:

  • speedier deployment to accelerate revenue generation;
  • built-in redundancy and security for a highly reliable IT environment;
  • a 'pay-as-you-go' model that allows customers to pay for what they use, when they use it;
  • fast access to efficiently and securely manage IT resources.

The HP products within the CaaS solution include:

  • HP Operations Orchestration software – automates IT processes to help customers reduce the time, costs and risk associated with managing IT operations. Automated processes include incident resolution, change orchestration and routine maintenance tasks standardized in a manner that enforces compliance processes.
  • HP Server Automation software – reduces operating costs and increases productivity by automating common operational tasks. Provides automated, physical and virtual server management.
  • HP Network Automation software – allows customers to realize measurable cost savings, improve security and help achieve network compliance. Delivers real-time visibility, automation and control of dynamic networks. The software immediately addresses critical IT issues by simplifying the management of complex, distributed, multiple-vendor networks.
  • HP Service Automation Reporter software – helps customers quickly audit and report on the configuration and compliance data gathered throughout the IT enterprise. Offers a flexible reporting engine for data centers and records all operational data associated with change information. It also offers open interfaces to extract and integrate HP Service Automation Reporter data with other Configuration Management Databases (CMDBs).
  • HP BladeSystem – helps customers to save on energy consumption, space and time. Provides compute, storage, networking, power, cooling and management infrastructure elements.
  • HP Virtual Connect – helps customers gain increased productivity and respond faster to changing workload demands, which ultimately reduces operating costs. Simplifies connections to local area networks (LANs) and storage area networks (SANs), and consolidates and precisely controls network connections. Administrators can easily move and add applications, and replace or recover server resources on the fly.
  • HP System Insight Manager – maximizes uptime by offering inventory reporting and provides continuous infrastructure monitoring.

"Verizon Business is able to deliver a truly unique cloud computing infrastructure that businesses can leverage to more effectively and efficiently manage IT computing resources," said Andy Isherwood, general manager and vice president, Software Services, HP. "Using HP expertise, software and hardware, HP delivered the innovation and excellence that Verizon Business required."

The Future of Enterprise Software: Hara: Cool Software For a Warm Planet

A startup called Hara is attracting attention.
The vendor specializes in solutions for corporate energy consumption and its reduction, and is backed by investors including Kleiner Perkins Caufield & Byers.
 

Hara: Cool Software For a Warm Planet

At the stroke of midnight, Hara Software emerged from stealth mode after 18 months of incubation at the headquarters of Kleiner Perkins Caufield & Byers. Pronounced "ha-RAH," or "hurrah" to a Bostonian, Hara is the Sanskrit word for "fresh green." This is very fitting given the company's focus on helping organizations track, manage, and optimize the use of energy, fossil fuels, carbon, water, waste, and other resources.

This morning's announcement includes the launch of the Hara Environmental and Energy Management (EEM) suite. EEM is a software-as-a-service offering that includes modules for aggregating data on resource consumption and spend, helping companies plan new initiatives such as reducing their carbon footprint, assisting in the execution of environmental and energy programs, and best practices for continuous improvement. 

There is also a content play. In the press release, Hara's CEO Amit Chatterjee described how his firm intends to work with customers to "write the encyclopedia of environmental efficiency." This reference material will include a guide to managing "organizational metabolism," or how energy is consumed and expended by an enterprise.

What makes Hara different is its focus on end-to-end energy and emission management. Most competitors have focused exclusively on emissions, emission offsetting, and Global Reporting Initiative (GRI) sustainability reporting.

In mid-April, Dr. Stephen Stokes and I flew to California to visit the Hara executive team at Kleiner's offices on Sand Hill Road. During our visit, Mr. Chatterjee discussed some of the work underway at the dozen customer accounts and demonstrated the new software. When asked for his impression based on the demos, Stephen said, "Cool software for a warm planet." That said it all.

Hara has a strong management team, with veterans from Agile Software, McKinsey & Company, Oracle, SAP, and TIBCO. This is complemented by a strong board of directors and an advisory board. The latter includes four academics with backgrounds in various environmental disciplines.

The EEM market promises to attract a lot of attention. Last month SAP announced the acquisition of Clear Standards, a provider of software for carbon management and sustainability. Meanwhile, former Siebel Systems founders Tom Siebel and Pat House have started C3, which is still under wraps. The only publicly available information is from various blog posts. There are other firms, too, like Carbon Networks, vying to be one of the early leaders.

This market has already begun to attract attention from other enterprise apps vendors and providers of environmental, health, and safety software. Look for lots of alliance announcements over the next year.

Hara's early customers include The Coca-Cola Company and the city of Palo Alto. In today's edition of The New York Times, Coca-Cola is cited as using Hara's software to track greenhouse gas emissions at 1,000 sites around the globe.

In newly emerging software categories, it's critical for vendors to stay tightly focused, especially around a handful of core verticals. I think the early adopters will be drawn from consumer-facing verticals, including food and beverage, packaged goods, electronics, energy companies, and public sector. If I were advising Mr. Chatterjee, I would urge him to pick one or two verticals and build critical mass. 

At this point, the market is still in the early cocoon stage. Based on history, it seems less likely that a homogenous market will emerge where every enterprise needs the same functionality. Instead, as the market develops, each vertical is likely to have its own unique requirements that will likely be based on its usage and demand for specific resources.

Microsoft xRM: The Next Platform as a Service (PaaS)?

Microsoft is reportedly launching a PaaS business called xRM as part of its CRM strategy.

This would compete directly with Force.com and is expected to become available later this year.

If you're a Microsoft partner, you likely know about the software giant's Business Productivity Online Suite (BPOS) and hosted Dynamics CRM plans. But Microsoft's software as a service (SaaS) strategy doesn't end there. The company also is working on a platform as a service (PaaS) called xRM. Here are some preliminary details.

xRM will be an "anything relationship management platform," according to Mary Jo Foley's All About Microsoft blog on ZDnet. xRM, formerly code-named Titan, is expected to debut later this year and compete with Salesforce.com's PaaS effort, dubbed Force.com.

But What Is PaaS?

Of course, definitions of PaaS vary from company to company. But Salesforce.com offers this explanation:

PaaS provides all the infrastructure needed to run applications over the Internet. It is delivered in the same way as a utility like electricity or water. Users simply "tap in" and take what they need without worrying about the complexity behind the scenes. And like a utility, PaaS is based on a metering or subscription model so users only pay for what they use…

…To develop software, you once had to buy databases, servers, networks, and a host of development tools. And then you needed the staff to install, optimize, and maintain it all. With PaaS, you can avoid those investments and focus on developing applications instead.
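The utility metaphor in the quote above boils down to metering: record consumption, bill only for what was used. A toy sketch (the rate and units are invented for illustration):

```python
class UsageMeter:
    """Minimal pay-per-use meter: accumulate usage, invoice at the rate."""

    def __init__(self, rate_per_unit):
        self.rate = rate_per_unit
        self.units = 0.0

    def record(self, units):
        self.units += units

    def invoice(self):
        return round(self.units * self.rate, 2)


meter = UsageMeter(rate_per_unit=0.10)  # e.g. $0.10 per compute-hour
meter.record(12)  # hours used on Monday
meter.record(3)   # hours used on Tuesday
print(meter.invoice())  # 1.5
```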

Translation: To me, PaaS almost sounds too good to be true. And for network-centric managed service providers (MSPs) that don't focus on application development, PaaS may not be much of an opportunity.

Still, there are thousands of Microsoft ISVs (independent software vendors) and consulting firms that are looking for the next big thing in Microsoft application development. PaaS — specifically Microsoft xRM — could be the answer to that search when it debuts later this year.

IBM's Cloud Gains Definition

Information on the specifications and pricing of the WebSphere CloudBurst Appliance, the private cloud server solution announced by IBM.
 

What does the "Blue Cloud" look like? There have been times when IBM's vision for cloud computing seemed diffuse, largely because Big Blue has so many points of entry. IBM sells servers and software, builds data centers, and provides consulting services. The company also had to consider the issue of whether it might wind up competing with its customers. As a result, IBM's early efforts in the cloud didn't align neatly with the most visible examples of the genre, such as Amazon Web Services or Salesforce.com.

This week IBM is rolling out new products that begin to bring some definition to its cloud computing roadmap. IBM is offering several services enabling public cloud computing. But Big Blue's sharpest focus is on the private cloud, which presents an opportunity to sell hardware and software rather than monthly subscriptions.

Here's what IBM is announcing:

Public Cloud: IBM can run your application testbed in its public cloud today, and will soon offer a subscription service to host virtual desktops in its data centers. The IBM Smart Business Test Cloud delivers the former today, while the upcoming IBM Smart Business Desktop Cloud will establish a beachhead for expected future growth in enterprise desktop virtualization as a service delivery strategy.

FW: What's Next In SaaS?

A report on the strong business growth of Salesforce.com and Taleo.

Our SaaS coverage continues. We've seen acquisitions starting with Intuit buying PayCycle. Let's see what's up with Salesforce.com, and what SAP is likely to acquire with its rather large war chest.

On May 21, Salesforce.com, Inc. (NYSE: CRM), a pioneer in the SaaS sector with annual revenue of $1.077 billion, reported its first quarter results that beat analyst expectations.

Q1 revenue grew 23% y-o-y and 5% q-o-q to $304.9 million, while net income was $18.4 million, or $0.15 per share. Analysts expected earnings of $0.11 on revenue of $304.73 million. Q4 analysis is available here.

Subscription and support revenues were up 25% y-o-y and 6% q-o-q to $281.8 million. Professional services and other revenues were up 4% y-o-y and down 1% q-o-q to $23.1 million. Salesforce.com added 3,900 customers in the quarter and 15,700 over the year. The total number of customers at the end of the quarter was 59,300.

Deferred revenue at the end of the quarter was $549 million, up 17% y-o-y but down 6% q-o-q. Cash from operations was up 17% y-o-y and 29% q-o-q to $98 million. It ended the quarter with no debt and total cash of $984 million, an increase of $101 million from Q4 and $233 million from a year earlier.

Salesforce.com says it is not clear when information technology spending patterns will return to normal levels. It has therefore reduced its full-year revenue guidance by about 4%, from $1.3-$1.33 billion to $1.25-$1.27 billion. It is, however, undertaking strong cost-control measures and has raised its earnings outlook from $0.54-$0.55 to $0.59-$0.60. Analysts expected revenue of $1.31 billion and EPS of $0.55.

For the second quarter, Salesforce.com expects revenue between $312 and $313 million and EPS between $0.14 and $0.15.

It is currently trading around $40 with market cap of about $5 billion. It hit a 52-week high of $45 on May 5. In my last post, I had said that Salesforce.com is likely to go on an acquisition spree, rolling up smaller companies like Apptus and VerticalResponse, which are built on the Force.com platform. Other SaaS companies it may consider within the CRM space are Lucidera and InsideView.

As I have said in my earlier posts, I would not like to see an interesting SaaS company like Salesforce.com being acquired by a bigger company like HP, SAP, Oracle, IBM, or even Microsoft. However, it is a possibility that cannot be ruled out. SAP recently announced that it has a budget of $7 billion for acquisitions, big enough to acquire Salesforce.com. I have to say, though, that Salesforce.com and SAP have completely incompatible cultures.

Chart for Salesforce.com (CRM)

On May 7, Taleo (NASDAQ: TLEO), the leading provider of on-demand talent management solutions, with annual revenue of $168.4 million, reported its first quarter results. Q1 revenue grew 34% to $48.1 million, driven by 37% growth in application revenue to $41.2 million. It generated $13.4 million in cash flow from operations. Net loss was $2.2 million, or $0.07 per share, compared to net income of $0.6 million, or $0.02 per share, last year. The loss was mainly due to $3.5 million in amortization costs related to the Vurv acquisition, as well as increased costs related to its revenue review last quarter.

During the quarter, Taleo signed 166 new customers despite an increasingly volatile global economic climate. It added 19 new Taleo Enterprise Edition(TM) customers, compared to 27 in the last quarter. It also signed 147 new customers for Taleo Business Edition(TM), a recruiting solution targeted at small and medium-sized businesses. My interview with its CEO Michael Gregoire is available here.

Taleo is currently trading around $18 with market cap of about $564 million. It hit a 52-week high of $19.29 on June 10. Taleo is a more likely acquisition for SAP, and would be generally easier to integrate.

Chart for Taleo Corp. (TLEO)

Salesforce.com Launches Free Force.com Edition

As vendors craft strategies to draw customers into their cloud computing businesses, Salesforce.com has begun offering a free version of Force.com, its application development environment.

This is one kind of move toward openness, and it is expected to further accelerate growth beyond the 110,000 applications already built on Force.com.

Customer relationship management software as a service and web hosting provider Salesforce (www.salesforce.com) has released a free edition of its Force.com (www.force.com) cloud platform, letting companies build and run their first application or website on Force.com at no cost.

Offering the comprehensive capabilities of the Force.com platform, which debuted in November 2008, Force.com Free Edition provides everything companies need to build and run their first cloud computing app for free for up to 100 users, or their website for up to 250,000 page views per month.
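The published Free Edition limits are simple enough to express as a check. This snippet is purely illustrative (Force.com itself enforces the limits server-side; the function name is made up):

```python
FREE_USER_LIMIT = 100          # users, per the announcement
FREE_PAGEVIEW_LIMIT = 250_000  # page views per month, per the announcement


def fits_free_edition(users, monthly_page_views):
    """True if an app or site stays within Force.com Free Edition limits."""
    return users <= FREE_USER_LIMIT and monthly_page_views <= FREE_PAGEVIEW_LIMIT


print(fits_free_edition(80, 200_000))   # True: under both limits
print(fits_free_edition(120, 50_000))   # False: too many users
```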

Force.com is salesforce.com's enterprise cloud computing platform, which provides unlimited real-time customization, powerful analytics, real-time workflow and approvals, programmable cloud logic, integration, real-time mobile deployment, and a programmable user interface.

"Now more than ever, companies are asking how they can run everything in the cloud," said salesforce.com chairman and chief executive officer Marc Benioff. "Force.com Free Edition will enable every company to experience success with cloud computing. This will empower anyone to build and run their first cloud computing app for free on Force.com."

Customers and partners have already built more than 110,000 business applications on Force.com for such varied purposes as manufacturing, finance, and supply chain management; claims processing and order management; and brand management.

For developers, Force.com provides free online training and a library of sample applications, as well as a sandbox development environment to test their app or site before deploying it.

Force.com Free Edition is now available to US companies only. Pricing for the paid Force.com editions begins at $25 per user per month.

A report from Nucleus Research (www.nucleusresearch.com) has shown that building apps on Force.com's cloud is five times faster and cheaper than with on-premise alternatives.

Nucleus Research vice president Rebecca Wetteman said the free edition of Force.com will accelerate enterprise cloud computing adoption. "Customers have told us they get hooked on Force.com after building their first app. Using Force.com to build and deliver apps five times faster and at less cost is very attractive for IT departments."

Rackspace lays out its cloud computing roadmap: Think hybrid | Between the Lines | ZDNet.com

Rackspace's business is growing strongly, and as its next-generation strategy the company is strengthening offerings that link its hosting services with its cloud computing services.

According to Forrester Research, cloud computing penetration in the enterprise is still low, at around 5% of the overall market, so Rackspace judges that a hybrid approach will be effective for the time being.

For example, a customer might keep VMware-based operations on dedicated hosted servers while moving web applications to the cloud.

Rackspace also plans to offer a Windows development environment, including SQL, in the cloud; to open its API to ISVs; and to launch file backup, storage, and sharing services (the latter based on its acquisition of Jungle Disk).
 
 

Rackspace's Lew Moorman on Wednesday will outline the cloud computing roadmap to customers in San Francisco. The gist: Rackspace will push a hybrid approach that tightly couples dedicated and cloud computing resources. The theory: Customers will begin toggling between hosted data centers and cloud computing resources as standard operating procedure.

In an interview at CBS Interactive offices in San Francisco, Moorman, President of the Cloud Business and Chief Strategy Officer at Rackspace,  said that companies will increasingly meld outsourcing, hosting and cloud computing approaches as they build their IT infrastructure.

Rackspace's roadmap works well with its existing businesses—a data center hosting unit, cloud computing services and a software as a service email offering. And according to Forrester Research, only 5 percent of large enterprises have either implemented cloud computing or plan to in the next 12 months. Simply put, a hybrid approach is likely to be the norm for a while.

Moorman acknowledges the reality. A hybrid hosting-cloud computing model appeals to "dabblers," he said. This hybrid approach will put cloud computing servers behind the firewall and tightly networked with Rackspace's dedicated hosting services. The raw materials are there today, but Rackspace needs to build the network technology to connect them.

When finished, Moorman sees customers mixing and matching technologies with the cloud. For instance, VMware and a company's database may live on a dedicated server, but Web apps and archiving might be offloaded to the cloud. The key is to make these platforms operate seamlessly. "When finished it (this hybrid approach) will automatically link your private network to cloud securely so it looks like one seamless network," said Moorman. Simply put, some applications like enterprise resource planning software, are likely to stay on dedicated computing resources.

The timing of the hybrid service launch is murky. Moorman said the timing is to be determined, and Rackspace plans on building its own networking tools for its hybrid hosting/cloud service as well as using software from other vendors.

Among the other items on Rackspace's roadmap:

  • A Windows cloud computing platform. Moorman said Rackspace has been actively working with Microsoft to create "a fully supported Windows cloud service." Rackspace plans to host SQL and other Windows software and deliver it via the cloud. The timing on this effort is also to be determined, but Rackspace is hoping a Windows service with full support will differentiate the company from Amazon's EC2. I asked Moorman how his plan fits in with Microsoft's Azure effort, and he said that they are competitive. However, Azure is more of a software platform than a Windows server offering. Moorman acknowledged that "there's more tension" with Microsoft given that Azure may compete with Rackspace, but the two services will be different. "I view them as a competitor," said Moorman. "But I can't ignore the Windows community."
  • Rackspace will release its cloud server API to partners this month and to the public next month. Moorman said the game plan is to allow third parties to build on Rackspace's cloud platform.
  • And finally, Rackspace is looking at offering cloud file storage services such as advanced archiving, business file storage, sharing and backup. Rackspace is basically using the technology from Jungle Disk, a company it acquired in October.

The GigaOM Interview: Kristof Kloeckner, CTO of IBM Cloud Computing

An interview with Kristof Kloeckner, CTO of IBM's cloud computing business.

IBM's cloud computing strategy recommends operating on a per-workload basis across the public cloud (Blue Cloud) and the private cloud (CloudBurst).

Here, workloads are units of the enterprise software development and operations lifecycle, in categories such as analytics, collaboration, and test/development.


IBM's first true cloud computing products, announced today, consist of workload-specific clouds that can be run by an enterprise on special-purpose IBM gear, built by Big Blue on that same special-purpose gear running inside a customer's firewall, or run as workloads on IBM's hosted cloud. The offering seems like a crippled compromise between the scalability and flexibility that true computing clouds offer and what enterprises seem to be demanding when it comes to controlling their own infrastructure. I spoke today with Kristof Kloeckner, the chief technology officer of IBM's cloud computing division, to learn more. Below is an edited account of our talk.

GigaOM: Let's start with the hardware underlying IBM's CloudBurst offering. How does this compare with what Cisco is doing or other cloud hardware out there?

Kloeckner:  This first instance for test and development workloads is built on Intel-based blades, but we anticipate other workloads might run on different platforms. We are actually working with the mainframe team for particular workloads. We have a prototype running that has p-series and z elements for SAP workloads.

GigaOM: So in IBM's view the workloads dictate the hardware, rather than the idea of commodity servers being used to build out a general purpose cloud?

Kloeckner: We make the hardware selections based on the workloads you want to run, and we optimize the workload for you. But because it is in the cloud, what you see as a client, in terms of how each different cloud behaves, is entirely consistent.

GigaOM: Why focus on workload-specific clouds?

Kloeckner: One should really instantiate clouds with the workloads that you run on them in mind. Depending on what the delivery needs are you might have an analytics cloud separate from your collaboration cloud, and you might also decide you want to keep the test and development cloud in-house, and then expand into the public cloud for collaboration services.

GigaOM: Why focus on test and development clouds for your first products?

Kloeckner: When we looked at development and test, we saw it's crucial for accelerating the business value of IT, and we thought that making dev and test more efficient and accelerating the process through automation would be extremely attractive. About 30-50 percent of our clients' resources are devoted to dev and test. It's also a part of the infrastructure that's not well managed. For example, after new apps are tested, in some cases the department doesn't want to give up access to those resources because it may take a long time to get them back. Making test and dev dynamic can be instantly attractive.

GigaOM: Is it also a focus because other enterprises are already using public clouds like Amazon's EC2 for those workloads?

Kloeckner: The general practice of dev and test is to have it in-house. There is no massive trend by organizations to bring that out into the public cloud infrastructure. We see individual organizations try it out, but enterprise development is mainly in-house today. And this is the first of a whole series of offerings to come. We're going to look at analytics and business apps in the future, but we started with dev and test.

GigaOM: If the vision of workload-specific clouds proliferates, how do enterprises work across different clouds? Does IBM have a solution for that?

Kloeckner: We demonstrated some early solutions with Juniper's switching technology back in February and use our job scheduling software to schedule across domains. We have our efforts on the Open Cloud Manifesto, and have had public demonstrations extending our service management software so it can manage workloads in a variety of clouds. We do not have a packaged solution yet, but we can work with clients to extend across multiple clouds.


Thursday, June 18, 2009

Intuit makes two-pronged PaaS and SaaS push

Intuit has strengthened its own PaaS business and begun offering an application environment similar to Salesforce.com's.

Intuit has a broad QuickBooks user base, reaching some four million businesses and 25 million end users, and many Force.com application vendors are porting their apps over as a way to deliver applications to that audience.

Representative examples include:
DimDim: an environment for holding conferences on the web
Expenseware: a travel and expense reporting solution
Rypple: a performance management solution
Setster: an online appointment management system
Vertical Response: an email marketing solution that has also been successful on Salesforce.com

This PaaS platform, called the Intuit Partner Platform, is being extended on the heels of Intuit's $170M acquisition of a company called PayCycle; it provides these applications with single sign-on, a consolidated billing system, and a common API that guarantees data exchange between applications.


Intuit makes two-pronged PaaS and SaaS push

Hard on the heels of its $170 million acquisition of SaaS vendor PayCycle (which I hope to post some further commentary on later today), Intuit is also announcing today an extension of its Intuit Partner Platform — a platform-as-a-service offering first launched a year ago — to support third-party development platforms [disclosure: I'm hosting a sponsored webcast later today with Intuit's Alex Barnett about application development in the cloud]. Read further coverage on Techmeme, AccMan, CloudAve.

The significant element of Intuit's PaaS announcement is that it is a land-grab to capture mindshare among developers on other cloud platforms, who can take their AppEngine, Amazon Web Services or self-hosted applications and make them available using Intuit's single sign-on, billing and QuickBooks integration infrastructure. Market reach being one of the key attributes developers look for in a new platform, perhaps the most appealing factor is that applications will be showcased within the Intuit Marketplace, with a potential reach to the four-million-strong installed base of QuickBooks accounting software customers and their estimated 25 million employees.

One of the five partners who are live on this new Federated Applications option at launch is Vertical Response, which was one of the first companies to be successful on Salesforce.com's AppExchange partner directory. The company grew rapidly by piggy-backing on Salesforce.com's existing market reach, offering an email marketing add-on that was a natural and simple extension to the core sales automation application. Vertical Response played an important role in helping to validate Salesforce.com's ecosystem strategy and it is an emblematic partner for Intuit's ambitions to build on its QuickBooks franchise in the same way.

Of the five partners announced at launch — web conferencing app DimDim, travel and expense manager Expenseware, performance management app Rypple, online appointments manager Setster and Vertical Response — only Expenseware is using Intuit's own QuickBase cloud database as its application platform. In a demonstration of Intuit's intent to draw in a diverse mix of developers, the rest are hosted on a variety of third party or self-hosted platforms, with .NET, Java and Ruby all represented.

What Intuit brings to the party when developers federate such applications to its platform is all the 'middleware' of cloud service delivery — single sign-on, consolidated billing, a consistent API for exchanging data between applications — along with the reach and trust of its established brand name. In that sense it is a mirror-image of Force.com and other cloud platforms out there, which have focused on what may turn out to be an outmoded notion of lock-in to a technology platform. Instead, Intuit's lock-in is to the service delivery infrastructure, irrespective of the underlying technology platform on which the applications themselves execute.

As a result, customers can easily swap between applications should they wish to. "You can switch to another application instantly, giving your customer complete control over how they want to automate their business" said Alex Chriss, business leader of the Intuit Partner Platform, in a briefing earlier this week. Intuit charges customers nothing to sign up to the Intuit Workplace, which is its name for the framework within which the federated applications are delivered. This adds a single bar across the top of the browser window, which handles the single-sign on and other account management services. The user then opens new applications in new browser tabs or windows, but without having to sign in again to each one individually. Its backend cloud infrastructure takes care of data integration between the applications and to Intuit Quickbooks on the desktop, which, if installed, is the preferred system of record.
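The single sign-on flow described above can be sketched with a signed token: the platform signs the user's identity once at login, and each federated app verifies the signature instead of prompting for credentials again. This is a generic HMAC sketch, not Intuit's actual protocol; the shared secret and payload format are invented:

```python
import base64
import hashlib
import hmac
import json

SHARED_SECRET = b"platform-secret"  # hypothetical key shared with each app


def issue_token(user_id):
    """Platform side: sign the user's identity once at sign-in."""
    payload = json.dumps({"user": user_id}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig


def verify_token(token):
    """Federated-app side: accept the session without a second sign-in."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(payload)["user"]
    return None  # tampered or forged token


token = issue_token("alice@example.com")
print(verify_token(token))  # alice@example.com
```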

Intuit's fee to developers is a revenue share on subscriptions that varies from 14-20 percent, depending on volume, plus a utility fee for platform usage if the application runs on the Intuit platform rather than a third-party resource.
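Those pricing terms translate into simple arithmetic. The 14-20 percent range and the utility-fee structure come from the article; the function and sample numbers are illustrative:

```python
def developer_fee(subscription_revenue, revenue_share=0.20, utility_fee=0.0):
    """Intuit's cut: a 14-20% revenue share (varies by volume) plus a
    utility fee when the app runs on Intuit's own platform."""
    assert 0.14 <= revenue_share <= 0.20, "share quoted as 14-20 percent"
    return subscription_revenue * revenue_share + utility_fee


# A low-volume developer hosting on Intuit's platform:
print(developer_fee(1000.0, revenue_share=0.20, utility_fee=50.0))  # 250.0
```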

Of course it is still lock-in, but it's a different kind of lock-in from what we've seen before, leaving developers free to host elsewhere and connect into rival PaaS ecosystems. Customers are more locked-in, but if they carefully choose add-ons that are also available elsewhere then it's probably easier to move off Intuit than another more closed platform. And when I asked whether it might be possible for another small business accounting application to offer itself on the Intuit platform, the team didn't demur. So you could in theory envisage someone offering a 'graduate from QuickBooks' offering within the Intuit ecosystem, even though I can't imagine it would be welcomed with open arms.

Cloud Futures Pt. 3: Focused Clouds

An approach that attempts to classify vendors in the cloud computing market by business model.
 
Horizontally Focused Clouds:
Providers whose solutions apply across the market as a whole.
SkyTap: services specialized for QA departments
Terremark: solutions aimed at the enterprise
EngineYard: a development environment tailored to the Ruby-on-Rails community (mainly small ISVs)
 
Vertically Focused Clouds:
Providers of solutions for a specific industry.
athenahealth: a model specialized for the healthcare industry
BankServ: PCI DSS-compliant payment solutions for the financial industry


Happiness in Business

If you can't be 'best' or 'cheapest', that only leaves being 'first' (see Pt. 1: Service Clouds and Pt. 2: Commodity Clouds).  Since Amazon Web Services (AWS) clinched the 'first' and 'best' titles for the general marketplace, your best bet is to pick a subset of the market to focus on.  Focused clouds find a sweet spot and exploit it.  This is really Business 101 for startups.  A diagram I saw recently by Ben Caddell brought this into focus and provides a very simple-to-understand reminder for those of us who may have forgotten (see right).

Let's look at some of today's focused clouds.  I'll mostly talk to Infrastructure-as-a-Service (IaaS), but also touch on Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) briefly.

Horizontally-Focused Clouds
By 'horizontal', people usually mean a longitudinal slice of the general market focusing on either a stakeholder (e.g. QA, IT, business management) or a business size (e.g. large enterprise, small/medium enterprise (SME), small/medium business (SMB), startups, or individuals).  A horizontal focus, by definition, crosses multiple verticals (e.g. financial services, health, etc. — see below).

We have some interesting examples of these available to us today.  I've picked just three to highlight my point: SkyTap, Terremark, and EngineYard.

SkyTap
Perhaps my personal favorite is SkyTap.  SkyTap focuses tightly on providing a unique experience for those in Quality Assurance (QA).  They offer a rich workflow experience that greatly facilitates deploying and saving the state of multi-server applications.  A QA person can find a bug that affects multiple servers in a complex application and literally save the entire system for reuse or replay by the affected developer at any time.  Combined with easy replication of multi-server environments and other great features designed for this segment alone, SkyTap, even though it is technically an Infrastructure-as-a-Service (IaaS) play, generally flies under the radar when folks talk about infrastructure clouds.
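The save-and-replay workflow described above amounts to snapshotting an entire multi-server environment at once. The sketch below is an illustrative toy model only, not SkyTap's actual API; every class and method name here is invented.

```python
import copy

class Environment:
    """Toy model of a multi-server test environment whose complete state
    (stand-in dicts here, in place of real disk/memory images) can be
    saved as a named snapshot and restored later for bug replay."""

    def __init__(self, servers):
        self.servers = servers          # e.g. {"web1": {...}, "db1": {...}}
        self.snapshots = {}

    def save_snapshot(self, name):
        # Capture the state of *all* servers together, so the exact
        # multi-server condition that triggered a bug is preserved.
        self.snapshots[name] = copy.deepcopy(self.servers)

    def restore_snapshot(self, name):
        self.servers = copy.deepcopy(self.snapshots[name])

env = Environment({"web1": {"status": "crashed"}, "db1": {"rows": 42}})
env.save_snapshot("bug-1234")                  # QA files the bug with the frozen system
env.servers["web1"]["status"] = "rebooted"     # ...testing continues in the meantime...
env.restore_snapshot("bug-1234")               # ...the developer replays the exact state
print(env.servers["web1"]["status"])           # -> crashed
```

The key point is that the snapshot is taken across the whole environment, not one server at a time, which is what makes replaying a multi-server bug possible.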

Terremark
A relatively new entrant into IaaS, Terremark is making its mark by focusing on the enterprise.  In fact, their offering is called simply The Enterprise Cloud, showing where they plan to focus.  Terremark uses VMware, which hasn't had a lot of traction in the public clouds to date.  Presumably this is because they plan to offer some of the more advanced enterprise-class VMware features like HA and DRS.  From my sources at VMware I've heard that the Terremark cloud product is quite good and that they have developed quite a bit of secret sauce on top of VMware. [1]

Regardless, by picking an area of the market that has been under-served by the heavyweights I think they have a good opportunity.

EngineYard
It's quite a bit easier, as you move from infrastructure to platforms and software, to differentiate and focus on a particular target market.  EngineYard (and their close cousin Heroku, whom I have mentioned before) focuses on providing a fully managed and automated Ruby-on-Rails (RoR) stack to web startups.  This has already distinguished them amongst the platform crowd and allowed them to ramp up a very respectable business in less than two years' time.

Vertically-Focused Clouds
If you can't go horizontal, go vertical.  A vertical focus is an industry focus, be it financial services, health, construction, high-tech, life sciences, energy, or other.  A vertical focus tends to be more solutions-oriented.  When you put together a package that focuses on a single industry it is rarely transferable, without major changes, to another industry.  However, this kind of focus can be very beneficial for a smaller cloud trying to make a mark early.  This also means it can be rather hard to build a vertical infrastructure cloud.  An example might be someone building a cloud that was highly secure and HIPAA compliant for the medical industry.  Or one that focused on PCI compliance for financial services companies.

Outside of infrastructure, many Software-as-a-Service (SaaS) businesses focus tightly on a given industry.  I don't know of any current IaaS clouds that are vertically focused, and the list of SaaS providers that are is too long to enumerate.  A couple of brief examples:

athenahealth
athenahealth provides doctor and patient management services online.

BankServ
BankServ provides online payment processing specifically for financial institutions.

Focus, Focus, Focus

As you can see, if the general market already has dominant players who are 'first', 'best', and 'cheapest', then picking a subset of the market that is not currently served and being 'first' there is a great strategy for any new cloud.  In the final part of this series I'll talk about the particular importance of focus for those players currently in the general market who need to compete on value, not price, to survive.  Ultimately, the best way to make money is to help your customers.  Don't help them on price.  Provide value instead.

Is Amazon Going to Open Source its Web Services and Cloud APIs?

A movement toward open source is emerging in the cloud computing industry.  The companies below are representative.

Eucalyptus: a project that began at UC Santa Barbara, has since taken on $5M in funding, and has launched as a commercial company built around an Amazon EC2-compatible open source offering

Open Cloud Platform (Sun Microsystems)

Joyent

Reservoir

Enomalism

10Gen

Perhaps in response to this movement, rumors are circulating that Amazon will also publish the APIs for its S3, EC2, and other services and move toward open source.


Although it's only a rumor, Reuven Cohen reports hearing from more than one source that Amazon intends to open source its (AWS) Web Services APIs. "Word is Amazon's legal team is currently 'investigating' open sourcing their various web services API's including EC2, S3, etc," he writes. Cohen argues that the move would make a lot of sense, and I agree. Although Amazon's APIs are, as Cohen writes, "the de facto standards" in cloud computing, Amazon faces significant threats from open source cloud computing efforts if it pursues a purely proprietary path.

Last summer OStatic broke the news about Eucalyptus, an open source infrastructure for cloud computing on clusters that duplicates the functionality of Amazon's EC2, using the Amazon command-line tools directly. More recently, Eucalyptus Systems launched as a commercial open source company providing support and services for Eucalyptus, with more than $5 million in venture funding. Sun Microsystems has also paved an open source cloud computing path with its Open Cloud Platform. Joyent, Reservoir, Enomalism and 10Gen are just a few of the other players with significant open source cloud computing efforts in place.
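What "duplicates the functionality of Amazon's EC2" means in practice is that the same client code can target either cloud just by changing the endpoint. The sketch below builds an EC2-style query request to illustrate that; request signing is omitted, the API version string is only an example, and the Eucalyptus endpoint shown is a made-up placeholder.

```python
from urllib.parse import urlencode

def describe_instances_url(endpoint, access_key, api_version="2008-12-01"):
    """Construct an EC2 Query API request URL.  Because Eucalyptus
    implements the same API, only the endpoint changes between Amazon
    and a private Eucalyptus cloud; the parameters are identical.
    (Real requests must also carry a signature, omitted here.)"""
    params = {
        "Action": "DescribeInstances",
        "Version": api_version,
        "AWSAccessKeyId": access_key,
    }
    return endpoint + "?" + urlencode(sorted(params.items()))

# The same call against Amazon and against a hypothetical private
# Eucalyptus installation (the Eucalyptus host below is invented):
aws  = describe_instances_url("https://ec2.amazonaws.com", "AKID")
euca = describe_instances_url("http://cloud.example.com:8773/services/Eucalyptus", "AKID")
print(aws.split("?")[1] == euca.split("?")[1])  # -> True: only the host differs
```

This endpoint-swapping property is why the Amazon command-line tools can drive a Eucalyptus cluster directly, and it is the basis of the "seamless public/on-premise" usage Wolski describes below.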

Of the various open source cloud computing efforts, Eucalyptus is particularly notable because it's a software infrastructure for cloud computing that mimics Amazon's tools directly, like a clone. In a recent discussion we had with Eucalyptus co-founder Rich Wolski, we asked him what companies are doing with Eucalyptus, and he said:


"They're doing a variety of things, but a lot of them are basically interested in Eucalyptus for doing the same kinds of things they're doing in Amazon AWS, such as business logic applications, where part of the attraction of Eucalyptus is that they can use it as a platform for seamlessly running their public cloud applications and their on-premise cloud apps."

Eucalyptus provides pronounced cost advantages over Amazon's cloud services, with the software itself being completely free and open source. The trend toward cloud computing remains strong. In a recent story reporting survey results from IT managers, respondents said that software-as-a-service applications are having a more disruptive impact on the commercial open source arena than any other trend.

Amazon can't ignore the cost advantages and diversity of product offerings that open source players are already bringing to the cloud computing space. The company's best move is to open source its tools (which will end up diversifying them), play on a level field in terms of cost with the open source alternatives, and charge for services. Absent these moves, the company will lose potential customers to free, open source alternatives.

2009年6月16日火曜日

The Case for Private Clouds

An article that explains the advantages of private clouds very well from the user's perspective.

If you've been reading this series, you now have a better understanding of the much-discussed term "private cloud." In the previous two parts of this series, I described the features and service capabilities of private clouds. In particular, I noted that the move to private cloud computing requires the separation of infrastructure provisioning and business application resource consumption. In essence, a private cloud requires that resource requests and provisioning must interact as service requests and responses in an automated environment, avoiding any manual intervention.
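The separation the author describes — resource requests and provisioning interacting as automated service requests and responses — can be sketched as a minimal self-service interface: the business application asks for capacity, the infrastructure layer grants it from a pool, and no human is in the loop. The class and method names below are illustrative inventions, not any specific product's API.

```python
class PrivateCloud:
    """Toy automated-provisioning layer: consumers submit resource
    requests as service calls, and allocation happens (or is refused)
    programmatically, with no manual provisioning step."""

    def __init__(self, total_vms):
        self.free = total_vms
        self.allocations = {}

    def request(self, app, vms):
        # A service request: either granted immediately or refused
        # with a machine-readable reason -- never a ticket in a queue.
        if vms > self.free:
            return {"app": app, "granted": False, "reason": "insufficient capacity"}
        self.free -= vms
        self.allocations[app] = self.allocations.get(app, 0) + vms
        return {"app": app, "granted": True, "vms": vms}

    def release(self, app):
        # Returning capacity to the pool is equally automatic.
        self.free += self.allocations.pop(app, 0)

cloud = PrivateCloud(total_vms=10)
print(cloud.request("payroll", 4))    # granted; 6 VMs remain in the pool
print(cloud.request("analytics", 8))  # refused; only 6 VMs remain
```

The point of the sketch is the shape of the interaction, not the bookkeeping: as long as both sides speak in requests and responses like these, the consumer never needs to know how the capacity is provisioned underneath.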

Now I want to focus on benefits and challenges of a private cloud implementation. This week, I will discuss the "pros" of private clouds; next week, I will turn to the "cons."

Obviously, there's a lot of excitement about cloud computing. Many organizations are considering private clouds as the primary method to achieve cloud computing benefits. As an aside, private clouds are also sometimes known as internal clouds, although a strong case can be made that the benefits of an internal cloud can also be obtained through a cloud provider, akin to a hosting service.

Since I've covered the overall benefits of cloud computing before, I won't repeat them here. There are many of them, which is why so many IT organizations are interested in the topic. Assuming you want to implement cloud computing and achieve those benefits, why does a private cloud make sense? The main advantages:

1. A private cloud leverages existing infrastructure

With some incremental investment, a company's existing data center can be made cloud-capable. Almost every organization has large amounts of installed equipment, much of it of recent vintage. Many of these organizations also have recently gone through significant data center upgrades or expansions. Turning to external cloud providers would require scrapping the installed base of equipment, necessitating a write-off—no music to the CFO's ears.

Instead, existing infrastructure can be used as the foundation of a new cloud computing capability. This is smart, finance-wise—and IT-wise. The CIO avoids presenting senior management with a message of "You know all that investment we made over the past two years? Well, the latest thing in IT, cloud computing, requires us to trash it and start over with someone else's infrastructure." Instead, he or she can say "You know all of that investment we made recently? I know you'll be glad to hear that it will help us move to the next level of IT support of our business goals—quick response to computing demands and easy scalability to meet changing business conditions." I don't know about you, but I'd much rather have that second type of conversation!

Most of the major vendors are coming out with add-on bits of kit that can be integrated into existing IT infrastructures to support automated provisioning and dynamic reassignment of resources. With some amount of incremental investment, a data center can be moved from efficient (high utilization) to agile (quick flexibility in face of changing demand profiles).

2. IT has no profit motive

External cloud providers have to turn a profit—and that profit comes from margin tacked onto basic costs. Internal IT, by contrast, focuses on providing efficient service in a cost-effective manner. By definition, running as a cost center bypasses the margin (i.e., cost) associated with profit. A private cloud offers the opportunity to achieve agility at a low price.

A more troubling prospect is the fact that so many IT organizations have been burned by previous outsourcing arrangements. You think the arrangements will be so much less expensive, but then they turn into a nightmare of change orders (more money), poor responsiveness (the outsource provider cuts cost-heavy services to the bone), and lousy service (the carefully crafted SLA becomes a "target, not a commitment"). Internal IT is dedicated to one thing: business unit satisfaction. So keeping things inside and avoiding the need to turn a profit allows the overall company to benefit from cloud computing at the lowest possible price.

3. IT knows your business

Working with business groups builds IT tacit knowledge, which means IT has a rich context of understanding subtle elements of the way the overall business operates. Keeping the cloud private enables business units to harvest that tacit knowledge to get better systems and to increase end customer satisfaction. Marrying cloud computing with internal IT marries the best of both worlds. Furthermore, keeping IT functionality within the company enables personnel transfers back and forth between IT and business units, further enriching tacit knowledge. By contrast, arms-length service providers don't really understand your business, no matter what they say—and anyway, with lots and lots of customers, the attention external providers offer is split. Which brings us to the next "pro" for private clouds.

4. Ability to react more quickly to changing business conditions

Let's say something big changes in your business—not an individual application needing 500% more compute resources; after all, that's a problem cloud computing is supposed to solve. No, something really big—say your company buys another that is nearly as large. You need a ton of work to get ready.

An external cloud provider is going to work to contract, whereas people associated with a private cloud are more loyal to the company and will move heaven and earth to support the extra work. If the crunch comes, who would you rather rely on: an internal group, or an external provider?

5. An SLA that means something

I already noted that too many outsourcer SLAs are worth exactly the paper they're written on, which is to say, very, very little. External cloud providers offer no or restricted SLAs. Internal groups offer SLAs and, if the SLA isn't met, you have some influence over the group—you can always threaten to fire the CIO. By contrast, if an external cloud provider falls short on the SLA, you'll get a sympathetic meeting and an offer of a cut-price refund.
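What an SLA "means" is ultimately arithmetic: a stated availability percentage translates into an allowed downtime budget per billing period, and the provider's remedy for missing it is typically a small credit on the bill. The 10% credit rate and dollar figures below are invented examples, not any provider's actual terms.

```python
def allowed_downtime_minutes(availability_pct, days=30):
    """Downtime a provider may accrue per billing period while still
    meeting the stated availability percentage."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

def sla_credit(monthly_bill, actual_availability, target=99.9, credit_rate=0.10):
    """A typical cut-price refund: a flat credit if the target is missed.
    (The 10% rate is a made-up illustration.)"""
    return monthly_bill * credit_rate if actual_availability < target else 0.0

print(round(allowed_downtime_minutes(99.9), 1))     # -> 43.2 minutes per 30-day month
print(sla_credit(10000, actual_availability=99.5))  # -> 1000.0
```

Notice the asymmetry the author is pointing at: an outage well beyond the 43-minute budget might cost the business far more than the modest credit, whereas an internal group missing its SLA answers to management directly.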

6. Privacy

Data privacy is a nightmare. Companies face large numbers of complex, poorly understood, and inconsistently enforced data privacy laws and regulations. Putting external cloud computing into the mix threatens to take a challenging situation and make it even more challenging. Many companies, when faced with adding additional complexity to existing privacy requirements, will punt on using an external cloud provider. Keeping the cloud private bypasses any potential problems posed by external cloud providers and makes cloud computing easier to accomplish.

7. IT staff motivation

Nothing is more demoralizing than watching your employer outsource some juicy new technology while asking you to patch a decrepit old application long past its prime. And for sure, if new cloud initiatives are placed with external providers, employees will quickly see that the way to gain new cutting-edge skills is to go to work for a cloud computing company. Keeping your cloud initiatives in-house will raise overall employee satisfaction, since it shows that long-term career growth is possible.

8. Incremental change

Rather than transforming the way IT is done in an instant, moving to a private cloud eases the transition. Putting private cloud computing in place enables the IT organization to begin reaping the benefits of the cloud without overturning every existing process. It's better to take a number of small steps rather than stumble trying to take a giant leap.

It's easy to recognize that the question of whether to do cloud computing at all has a simple answer: it's a big yes. However, the decision about whether to implement a private cloud or use a public cloud is much more difficult. Many factors play into the decision. In this piece I've outlined some of the strongest reasons why companies should consider whether a private cloud makes more sense as an initial way to get started with the cloud. Next week, I'll look at the other side of the coin: why moving to a public cloud makes more sense—in the short and the long run.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.

Cloud Computing Seminars HyperStratus is offering three one-day seminars. The topics are:
1. Cloud fundamentals: key technologies, market landscape, adoption drivers, benefits and risks, creating an action plan
2. Cloud applications: selecting cloud-appropriate applications, application architectures, lifecycle management, hands-on exercises
3. Cloud deployment: private vs. public options, creating a private cloud, key technologies, system management
The seminars can be delivered individually or in combination. For more information, see http://www.hyperstratus.com/pages/training.htm