Friday, February 27, 2009

Azure Service Platform Videos

This post introduces videos about Microsoft Azure, provided by Microsoft itself.

Azure Service Platform Videos


Some of you left comments on my blog post on "Azure" asking for additional tutorials, so here is the first of a comprehensive set of HDIs ("How Do I …?") covering the entire Azure Services Platform.

Here are some of the HDIs you will find there:

In addition, here is a great ongoing series on cloud computing and related concepts, and on the Microsoft platform and strategy, by David Chou, an Architect Evangelist at Microsoft (a friend and colleague):

 

http://rajramabadran.wordpress.com/2009/02/18/azure-service-plaform-videos/

Lessons From The Demise Of A Cloud Startup - Plug Into The Cloud

An article that uses Coghead as a case study of what happens to customers when a cloud computing vendor fails and shuts down its business.
Coghead was ultimately acquired by SAP, but its service is being discontinued, and end users have been told they must migrate their uploaded data and applications to another environment (another PaaS vendor or an in-house system).
Because Coghead is a proprietary platform based on Adobe Flex, migration to other platforms is considered very difficult, leaving end users in a very awkward position.
 
Similar problems have occurred before.
  • When MediaMax (a B2C business), a subsidiary of Streamload, shut down last August, rumors spread that Nirvanix (a B2B business), another Streamload subsidiary, had taken over the customer data; the inaccurate information confused the market.
  • CRM vendor Entellium went bankrupt and its assets were acquired by Intuit.
The article points out that when adopting a cloud computing strategy, you should investigate the vendor's reliability and also estimate the effort that would be needed to migrate away if that ever becomes necessary.
 

Amid the growing interest in cloud computing, Coghead's collapse provides a reality check. SAP is providing a safety net for Coghead's intellectual property and its employees, but Coghead's customers are left to fend for themselves.

As more developers and IT pros contemplate a move into the cloud, vendor failure is one of the risks they face. Coghead issued a letter to customers earlier this week advising them that it had terminated operations and, effective immediately, was discontinuing license agreements. Coghead is giving customers until April 30 to move their data and applications off its platform.

Coghead seemed to have everything going for it -- venture funding, a few years of experience, thousands of users, and a platform-as-a-service that seemed well suited for IT departments looking to lower costs.

What went wrong? In a second letter to customers -- one in which he revealed that SAP was acquiring the company's assets -- Coghead CEO Paul McNamara blamed the economy. "Faced with the most difficult economy in memory and a challenging fund-raising climate, we determined that the SAP deal was the best way forward for the company," McNamara wrote in a note that went out yesterday.

Coghead customers now have to scurry to relocate their hosted applications. They must move apps developed on Coghead's Adobe Flex-based platform to another PaaS provider or bring them in-house, underscoring the pitfalls of proprietary cloud services. "Customers can take the XML out that describes their application, but the reality is that it only runs on Coghead, so customers will need to rewrite their app with something different," McNamara said.

Coghead isn't the first cloud startup to fail, and it won't be the last. Storage-as-a-service vendor Nirvanix was forced to do damage control after MediaMax -- also known as The Linkup and spun off from Nirvanix's parent company, Streamload -- closed last August, leaving some customers without access to their data. And CRM-as-a-service vendor Entellium was shut down, and its assets were acquired by Intuit, after its CEO and CFO were charged with fraud last fall.

It doesn't help that Coghead and SAP did a poor job of communicating Coghead's abrupt end, sending two messages, a day apart, leaving customers in limbo about the company's acquisition by SAP. Even now as I write this (on Feb. 20), there's no mention on Coghead's Web site that it's ceasing operations, other than some fine print in its license agreement.

So Coghead serves as another example of what can go wrong. IT pros tend to be cautious of doing business with startups, and cloud startups carry the added worry that your data resides on their servers, outside of your reach. Thus, it's important for prospective cloud users to do two things: one, exercise due diligence by checking customer references, meeting company management, asking about funding, and so on; and two, have an exit strategy in case your cloud vendor, like Coghead, closes its doors.

Thursday, February 26, 2009

Zmanda’s 3.0 backup supports cloud, Sharepoint, PostgreSQL, EnterpriseDB

Zmanda, which provides an open source backup solution, has announced that next month it will offer backup of Microsoft SharePoint 2007, PostgreSQL, and EnterpriseDB Postgres Plus data in a SaaS model.
 
Its distinguishing feature is the ability to back up data to any combination of disk, tape, and cloud storage such as Amazon S3.


Zmanda's 3.0 backup supports cloud, Sharepoint, PostgreSQL, EnterpriseDB

Zmanda expects to release in one month a major upgrade of its open source backup solution that enables disaster recovery to the cloud and adds new support for Microsoft SharePoint 2007, PostgreSQL, and EnterpriseDB Postgres Plus.

Those are two of many key new features of Amanda Enterprise 3.0, the Sunnyvale, Calif. company announced today.

The new release will offer enterprise-level benefits for the modern data center at a fraction of the price of proprietary products, according to the company, which positions its products for small and medium-sized businesses.

Customers will be able to back up and recover data from disks, tape, optical devices, or clouds such as Amazon S3. In addition, the enhanced version offers disk-to-disk-to-tape and disk-to-disk-to-cloud backup capabilities, so IT managers can do a quick backup to disk and then vault another backup to tape or the cloud on their own schedule.
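
To make the disk-to-disk-to-cloud idea concrete, here is a minimal Python sketch of the pattern described above: stage a quick backup on local disk, then vault a copy to Amazon S3. It is only an illustration of the workflow, not Zmanda's implementation; the paths, bucket name, and the use of the boto3 library are assumptions.

import os
import tarfile

import boto3  # AWS SDK for Python; the bucket and paths below are hypothetical

BACKUP_DIR = "/var/backups/staging"   # hypothetical local staging disk
BUCKET = "example-backup-vault"       # hypothetical S3 bucket name

def stage_to_disk(source_dir, archive_path):
    """First hop: quick backup of a directory to local disk as a tar.gz archive."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=os.path.basename(source_dir))

def vault_to_cloud(archive_path, bucket, key):
    """Second hop: copy the on-disk archive to cloud storage (Amazon S3)."""
    s3 = boto3.client("s3")
    s3.upload_file(archive_path, bucket, key)

if __name__ == "__main__":
    archive = os.path.join(BACKUP_DIR, "share-2009-02-26.tar.gz")
    stage_to_disk("/srv/share", archive)
    vault_to_cloud(archive, BUCKET, "vault/share-2009-02-26.tar.gz")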

AE 3.0 offers new media and device management features and location directives that let customers select geographically what and where to store backup data in the cloud. All of this gives administrators flexibility in determining how data and applications will be stored, the company also said.

It will be available in roughly 30 days at Zmanda's online store at a starting price of $100 per client and $300 per application agent.  AE 3.0 is in beta testing.

Booz|Allen|Hamilton & Dataline Sponsor 2nd Government Cloud Computing Survey

Through this survey effort, several cloud computing applications already adopted by government agencies are introduced, which is quite interesting.
 

Dataline, Booz|Allen|Hamilton and the Government Cloud Computing Community have teamed together to sponsor the 2nd Government Cloud Computing Survey. Cloud Computing has come a long way since the first survey six months ago, so we are once again asking for your thoughts on this exciting new approach to technology. The rare examples of just a few months ago have turned into a large number of exciting cloud-based deployments including:
  • GovDelivery for mass email and wireless notices to the public (50+ governmental organizations across 25 states and 13 federal departments) 
  • DC Government use of Google Apps for their 38,000+ employees for email, document collaboration, intranet, and calendars.
  • Census and Human Genome project data stored in the Amazon cloud
  • Acquisition solutions at 65 separate government agencies
  • Financial tracking at the State Department using Salesforce.com
Please give us your views, thoughts and plans through the survey at http://www.dataline.com/survey.html. Results will be made available through this blog and the SOA-Ring and Government Cloud Computing Community wikis.

The Case Against Cloud Computing, Part Three

4) Lack of SLAs (Service Level Agreements)
 
The author argues that it is a mistake to read an SLA as a contract that guarantees a cloud computing service's uptime.
The essential purpose of an SLA is to spell out how the service provider will compensate for losses when a failure occurs, and it is important to recognize that the compensation basically never exceeds the service fee.
 
The following approaches are recommended:
  • Cloud computing uptime is a major concern, but you should also estimate what it costs (especially in staff) to maintain comparable uptime in your internal IT environment, and evaluate the cost-effectiveness of cloud computing against that baseline.
  • It is important to clearly separate mission-critical applications from those that are not, and to choose which ones can move to the cloud. Applications that are not mission-critical yet carry very high operating costs deserve particular attention.
 
 

In parts 1 and 2 of this series, I discussed two common objections to cloud computing: difficulty of application migration and heightened risk. In this posting, I want to address another common objection to cloud computing, the one that has to do with service-level agreements. I call it:

SLA: MIA

One of the most common concerns regarding cloud computing is the potential for downtime—time the system isn't available for use. This is a critical issue for line-of-business apps, since every minute of downtime is a minute that some important business function can't be performed. Key business apps include taking orders, interacting with customers, managing work processes, and so on. Certainly ERP systems would fall into this category, as would vertical applications for many industries; for example, computerized machining software for a manufacturing firm, or software monitoring sensors in industries like oil and gas, power plants, and so on.

Faced with the importance of application availability, many respond to the potential use of cloud-based applications with caution or even horror. This concern is further exacerbated by the fact that some cloud providers don't offer SLAs and some offer inadequate SLAs (in terms of guaranteed uptime.)

Underlying all of these expressed concerns is the suspicion that one cannot really trust cloud providers to keep systems up and running; one might almost call it a limbic apprehension at depending upon an outside organization for key business continuity. And, to be fair, cloud providers have suffered outages. Salesforce endured several in recent years, and Amazon also has had one or two not so long ago.

Put this way, it's understandable that organizations might describe the concern regarding this all-important meeting of critical business systems with cloud provider reliability as an SLA issue.

Is that the best way to comprehend the issue, or even to characterize it, though?

If one looks at the use of SLAs in other contexts, they are sometimes part of commitments within companies—when, say, the marketing department has IT implement a new system, IT guarantees a certain level of availability. More commonly, though, SLAs are part of outsource agreements, where a company selects an external provider like EDS to operate its IT systems.

And certainly, there's lots of attention on SLAs in that arena. A Google search on "outsource SLA" turns up pages of "best practices," institutes ready to assist in drafting contracts containing SLAs, advice articles on the key principles of SLAs—a panoply of assistance in creating air-tight SLA requirements. A Google search for "outsource SLA success," unfortunately turns up nary a link. So one might assume that an SLA doesn't necessarily assist in obtaining high quality uptime, but provides the basis for conflict negotiation when things don't go well—something like a pre-nuptial agreement.

So if the purpose of an SLA is more after-the-fact conflict resolution guidelines, the implication is that many of the situations "covered" by SLAs don't go very well; in other words, after all the best practices seminars, all the narrow-eyed negotiation (BTW, doesn't it seem incredibly wasteful that these things are negotiated on a one-off basis for every contract?), all the electrons have been sacrificed in articles about SLAs, they don't accomplish that much regarding the fundamental requirement: system availability. Why could that be?

First, the obvious problem I've just alluded to: the presence of an SLA doesn't necessarily change actual operations; it just provides a vehicle to argue over. The point is system uptime, not having a contract point to allow lawyers to fulfill their destiny.

Second, SLAs, in the end, don't typically protect organizations from what they're designed to: loss from system downtime. SLAs are usually limited to the cost of the hosting service itself, not the opportunity cost of the outage (i.e., the amount of money the user company lost or didn't make). So besides being ineffective, SLAs don't really have any teeth when it comes to financial penalty for the provider. I'll admit that for internal SLAs, the penalty might be job loss for the responsible managers, which is pretty emotionally significant, but the SLA definitely doesn't result in making the damaged party whole. After all, having the IT department pay the marketing department is just transferring money from one pocket to another.
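
A quick worked example makes the point about limited SLA payouts concrete. Every figure below is hypothetical; the comparison simply contrasts a credit capped at a fraction of the service fee with the opportunity cost of an outage.

# All numbers are hypothetical; they only illustrate the gap in magnitude.
monthly_fee = 3000.0               # what the customer pays the provider per month
credit_cap = 0.10                  # assume the SLA caps credits at 10% of that fee
outage_hours = 4
revenue_lost_per_hour = 20000.0    # opportunity cost of downtime for the business

sla_credit = monthly_fee * credit_cap                  # at most $300
business_loss = outage_hours * revenue_lost_per_hour   # $80,000

print(f"SLA credit ${sla_credit:,.0f} vs. business loss ${business_loss:,.0f}")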

Finally, the presence of an SLA incents the providing organization to behavior that meets the letter of the agreement, but may not meet the real needs of the user; moreover, the harder the negotiating went, the more likely the provider is to "work to rule," meaning fulfill the bare requirements of the agreement rather than solving the problem. There's nothing more irritating than coming to an outside service provider with a real need and having it dismissed as outside the scope of the agreement. Grrrr!

Given these—if not shortcomings, then challenges, shall we say—of SLAs, does that mean their absence or questionable quality for cloud computing providers means nothing?

No.

However, one should keep the service levels of cloud computing in perspective, with or without an SLA in place.

Remember, the objective is service availability, not a contractual commitment that is only loosely tied to the objective. So here are some recommendations:

One, look for an SLA, but remember it's a target to be aimed for, not an ultimatum that magically makes everything work. And keep in mind that cloud providers, just like all outsourcers, write their SLAs to minimize their financial exposure by limiting payment to cost of the lost service, not financial effect of the lost service.

Two, use an appropriate comparison yardstick. The issue isn't what cloud providers will put in writing, it's how a cloud provider stacks up against the available alternatives. If you're using an outsourcer that consistently fails to meet its uptime commitments, surely it makes sense to try something new? And if the comparison is the external cloud provider versus your internal IT group, the same evaluation makes sense.

Third, remember that the quality of internal uptime is directly related to the sophistication of the IT organization. While large organizations can afford significant IT staffs and sophisticated data centers, much of the world limps by with underfunded data centers, poor automation, and shorthanded operations staffs. They run from emergency to emergency, and uptime is haphazard. For these kinds of organizations, a cloud provider may be a significant improvement in quality of service.

Fourth, even if you're satisfied with the quality of your current uptime, examine what it costs you to achieve it. If you're using lots of manual intervention, people on call, staffing around the clock, you may be meeting uptime requirements very expensively. A comparison of uptime and cost between the cloud and internal efforts (or outsourced services) may be instructive. I spoke to a fellow from Google operations who noted that at the scale it operates, manual management is unthinkable; nothing goes into production until it's been fully automated. If you're getting uptime the old-fashioned way—plenty of elbow grease—it may be far better, economically speaking, to consider using the cloud.

Fifth, and a corollary to the last point, even if there are some apps that absolutely, positively have to be managed locally due to stringent uptime requirements, recognize that this does not cover the entirety of your application portfolio. Many applications do not impose such strict uptime requirements; managing them in the same management framework and carrying the same operations costs as the mission-critical apps is financially irresponsible. Examine your application portfolio, both current and future, and sort them according to hard uptime requirements. Evaluate whether some could be migrated to a lower-cost environment whose likely uptime capability will be acceptable—and then track your experience with those apps to get a feel for real-world outcomes regarding cloud uptime. That will give you the data to determine whether more critical apps can be moved to the cloud as well.
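
As a sketch of that sorting exercise, the snippet below screens a made-up application portfolio against an assumed cloud uptime figure and surfaces the most expensive non-critical apps first. The application names, uptime numbers, and costs are all placeholders, not data from this article.

# Placeholder portfolio; substitute real uptime requirements and operating costs.
portfolio = [
    {"app": "order-entry",   "required_uptime": 0.9999, "annual_ops_cost": 250000, "mission_critical": True},
    {"app": "intranet-wiki", "required_uptime": 0.99,   "annual_ops_cost": 60000,  "mission_critical": False},
    {"app": "batch-reports", "required_uptime": 0.98,   "annual_ops_cost": 90000,  "mission_critical": False},
]

ASSUMED_CLOUD_UPTIME = 0.995   # placeholder; track what a provider actually delivers

candidates = [
    app for app in portfolio
    if not app["mission_critical"] and app["required_uptime"] <= ASSUMED_CLOUD_UPTIME
]

# Review the most expensive non-critical apps first, then measure real-world uptime.
for app in sorted(candidates, key=lambda a: a["annual_ops_cost"], reverse=True):
    print(app["app"], app["annual_ops_cost"])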

In a sense, the last recommendation is similar to the one in the "Risk" posting in this series. One of the recommendations in that posting is to evaluate your application portfolio according to its risk profile to identify those which can safely be migrated to a cloud infrastructure. This uptime assessment is another evaluation criterion to be applied in an assessment framework.

So "cloud SLA" is not an oxymoron; neither is it a reason to avoid experimenting and evaluating how cloud computing can help you perform IT operations more effectively.

The Case Against Cloud Computing, Part Two

2) Legal and Business Risk
 
The following risks exist:
  • Industry-specific audit and compliance rules governing corporate data management, such as SOX and HIPAA
  • National restrictions on managing corporate data outside the country
  • Concern that cloud computing is not the providers' core business (for example, Amazon Web Services is not Amazon's main business and could be dropped if it does not succeed commercially; Google has in fact shut down some of its own services in this way)
The article points out that a major reason the two businesses have taken such different directions is that Amazon Web Services has mainly targeted startups and non-mission-critical applications, for which the issues above are less serious, while SAP has tried to turn the same application as its enterprise ERP offering into a SaaS business aimed at enterprises. As a business model, waiting until the market reaches some degree of consensus on these issues is also a conceivable stance.
 
Risk issues centered on data management in cloud computing are widely discussed, and the author notes that interpretations vary considerably from person to person. To sort out and resolve the issue, it is important to first understand how data management risk is handled in existing internal systems and then evaluate the target cloud computing vendors. The following steps are proposed:
  • Clarify how risk is managed in internal systems
  • Define the company's risk tiers and make them part of the development process
  • Investigate and compare the risk management practices of candidate cloud computing vendors
 

In the first part of this series on The Case Against Cloud Computing, I noted that in speaking with a number of people involved with cloud computing, they (rather paradoxically) discussed with great vigor all the barriers to enterprises adopting cloud computing. As a result, I thought it would be useful to discuss the list of issues they (collectively) raised and offer some thoughts about them, particularly with regard to the potential for mitigation. The first of the series addressed the issue that, today at least, it is not possible to do a straight migration of a typically-architected corporate application into any of the common cloud services—they all impose their own architecture.

In this posting, I'd like to discuss the second issue raised with regard to why enterprises are/will be reluctant to embrace cloud computing:

Cloud Computing Imposes Legal, Regulatory, and Business Risk

Most companies operate under risk constraints. For example, US publicly traded companies have SOX disclosure legal requirements regarding their financial statements. Depending upon the industry a company is in, there may be industry-specific laws and regulations. In healthcare, there are HIPAA constraints regarding privacy of data. There are other, more general requirements for data handling that require ability to track changes, establish audit trails of changes, etc., particularly in litigation circumstances. In other nations, customer data must be handled very carefully due to national privacy requirements. For example, certain European nations mandate that information must be kept within the borders of the nation; it is not acceptable to store it in another location, whether paper- or data-stored.

Turning to business risk, the issues are more related to operational control and certainty of policy adherence. Some companies would be very reluctant to have their ongoing operations out of their direct control, so they may insist on running their applications on their own servers located within their own data center (this issue is not cloud-specific—it is often raised regarding SaaS as well as more general cloud computing services).

Beyond specific laws, regulations, and policies, the people I spoke with described an overall risk question that they asserted enterprises would raise: the risk associated with the cloud provider itself. Some people noted that Amazon's cloud offering isn't their core business. Interestingly, however, they described Amazon's core business as "selling books." I think Amazon's business efforts are well beyond books and this response may indicate an unfamiliarity with the total range of Amazon's offerings; nevertheless, the question of Amazon's core competence and focus on computing is valid, and might even be more of an issue if the company is spread across many initiatives.

For the other cloud providers, which are probably considered more "traditional" technology companies, this issue of core competence and focus probably isn't a direct concern. It's still a concern, though, since one might discern that the cloud offering each provides is not its main business focus; therefore, the company might, in some future circumstance, decide that its cloud offering is a distraction or a financial drag and discontinue the service. Google's recent shuttering of several of its services gives credence to this type of concern.

So, all in all, there are a number of risk-related concerns that enterprises might have regarding their use of cloud computing, ranging from specific issues imposed by law or regulations to general operational risk imposed in dependency upon an outside provider.

However, many of the people who proffer these concerns do so eagerly and, to my mind, too broadly. Let me explain.

First, many of the legal and regulatory risks assigned to cloud providers are understood by them. They recognize that they will need to address them in order to attract mainstream business users. However, in order to get started and build experience and momentum, they have not focused on very challenging functionality and processes; instead, Amazon, for example, has been primarily targeted at startups and non-critical corporate apps.

To my mind, this is a smart strategy. One has only to look at SAP's protracted effort to deliver an on-demand service with equivalent features to its packaged offering to understand how attempting to meet demanding capability right out of the chute can seriously retard any progress. However, I am confident that cloud providers will continue to extend their capabilities in order to address these risk aspects.

Moreover, many people who discuss this type of risk characterize it as something that can only be addressed by internal data centers, i.e., the very nature of cloud computing precludes its ability to address risk characteristics. I spoke to a colleague, John Weathington, whose company, Excellent Management Systems, implements compliance systems to manage risk, and he questioned the notion that clouds are inherently unable to fit into a compliance framework, citing compliance as being a mix of policy, process, and technology. To his way of thinking, asserting that risk management cannot be aligned with cloud computing indicates a limited understanding of compliance management.

A second factor that too broadly characterizes cloud computing as too risky is an over-optimistic view of current risk management practices. In discussing this with John, he shared some examples where companies do not manage compliance properly (or, really, at all) in their internal IT systems. The old saw about people, glass houses, and stones seems applicable here. In a way, this attitude reflects a common human condition: underestimating the risks associated with current conditions while overestimating the risks of something new. However, criticizing cloud computing as incapable of supporting risk management while overlooking current risk management shortcomings doesn't really help, and can make the person criticizing look reactive rather than reflective.

Associated with this second factor, but different—a third factor—is the easy, but damaging approach of treating all risks like the very worst scenario. In other words, identifying some data requirement as clearly demanding onsite storage with heavy controls and reaching a general conclusion that cloud computing is too risky for every system. Pointing out that some situations or data management requirements cannot be met by cloud computing poses the danger that leveraging the cloud will be rejected for all systems or scenarios. You may disbelieve that this kind of overly-broad assessment goes on, but I have heard people drop phrases like "what about HIPAA" into a conversation and then turn contentedly to other topics, confident that the issue has been disposed of.

Some of this reflexive risk assertion is understandable, though. The lack of enthusiasm on the part of many IT organizations to embrace external clouds due to the putative risk might be attributed to risk asymmetry they face. That is to say, they can get into a lot of trouble if something goes wrong about data, but there isn't that much upside for implementing a risk assessment process and reducing costs by leveraging outside cloud resources. One might say IT organizations are paid to be the worrywarts regarding data security, which isn't really that much fun, but would affect their perspective on risk and could motivate them to be very conservative on this subject.

However, given the very real pressures to examine cloud computing for reasons of IT agility and overall cost examination, resisting it by a bland contention that "cloud computing is too risky; after all, what about X?" where X is some law or regulation the organization operates under is probably not a good strategy.

So what should you do to address the issue of risk management in cloud computing?

One, understand what your risk and compliance requirements really are and how you address those things today in internal systems. Nothing looks worse than asserting that cloud computing isn't appropriate because of risk and being asked "how do we handle that today?" and not having a solid answer.

Second, (assuming you haven't done so already) establish a risk assessment mechanism to define levels of risk and make it part of the system development lifecycle. Without this, it's impossible to evaluate whether a given system is a good candidate for operating in the cloud.

Third, assess your potential cloud hosting operators for their risk management practices. With this in hand, projects can have their risk assessments mapped against the cloud provider and a decision can be reached about whether cloud hosting is appropriate for this system.
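
One simple way to operationalize that third step is to map each system's compliance requirements against the controls a candidate provider documents. The sketch below is purely illustrative; the provider names, control labels, and systems are invented.

# Hypothetical provider attestations and system requirements, for illustration only.
provider_attestations = {
    "cloud-provider-a": {"sox_audit_trail", "encrypted_at_rest"},
    "cloud-provider-b": {"sox_audit_trail", "encrypted_at_rest", "hipaa", "eu_data_residency"},
}

systems = {
    "patient-portal":  {"hipaa", "encrypted_at_rest"},
    "expense-reports": {"sox_audit_trail"},
}

def eligible_providers(requirements, providers):
    """Return providers whose documented controls cover every stated requirement."""
    return [name for name, controls in providers.items() if requirements <= controls]

for system, requirements in systems.items():
    matches = eligible_providers(requirements, provider_attestations)
    print(system, "->", matches if matches else "keep in-house for now")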

The cloud hosting risk assessment should be treated as a dynamic target, not a static situation. The entire field is developing quite rapidly, and today's evaluation will probably not be accurate six months hence.

Pressure is going to be applied to IT organizations over the next twelve months regarding costs and, particularly, whether cloud computing is being considered as a deployment option. With a risk management framework in place, appropriate decisions can be made—and justified.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.

The Case Against Cloud Computing, Part One

A series of articles examining the obstacles to cloud computing's adoption in the market.
Overall, the series identifies the following issues:
1) Existing enterprise applications cannot easily be migrated to the cloud
2) Legal and business risk
3) Difficulty of managing cloud applications
4) Lack of SLAs (Service Level Agreements)
5) Lack of a cost advantage for using cloud computing
 
The article below is the first in the series and covers 1), making the following points:
  • Whether Amazon Web Services, Microsoft Azure, Salesforce.com's Force.com, or Google App Engine, each provider has its own infrastructure that differs greatly from existing enterprise application environments, so in practice application migration is very difficult.
  • P2C (Physical to Cloud) migration tools: one would expect tools that assist with such migrations to appear, but they are very difficult to develop, and building one that supports multiple cloud architectures is harder still.
  • It is difficult to train engineers who are well versed in cloud computing.
 
 

I've had a series of interesting conversations with people involved in cloud computing who, paradoxically, maintain that cloud computing is—at least today—inappropriate for enterprises.

I say paradoxically because each of them works for or represents a large technology company's cloud computing efforts, and one would think their role would motivate them to strongly advocate cloud adoption. So why the tepid enthusiasm? For a couple of them, cloud computing functionality is really not ready for prime time use by enterprises. For others, cloud computing is too ambiguous a term for enterprises to really understand what it means. For yet others, cloud computing doesn't—and may never—offer the necessary functional factors that enterprise IT requires. While I think the observations they've made are trenchant, I'm not sure I'm convinced by them as immutable problems that cannot be addressed.

I thought it would be worthwhile to summarize the discussions and identify and discuss each putative shortcoming. I've distilled their reservations and present them here. I've also added my commentary on each issue, noting a different interpretation of the issue that perhaps sheds a little less dramatic light upon it and identifies ways to mitigate the issue.

There are five key impediments to enterprise adoption of cloud computing, according to my conversations. I will discuss each in a separate posting for reasons of length. The five key impediments are:

  • Current enterprise apps can't be migrated conveniently

  • Risk: Legal, regulatory, and business

  • Difficulty of managing cloud applications

  • Lack of SLA

  • Lack of cost advantage for cloud computing

    Current enterprise apps can't be migrated conveniently. Each of the major cloud providers (Amazon Web Services, Salesforce's Force.com, Google App Engine, and Microsoft Azure) imposes an architecture dissimilar to the common architectures of enterprise apps.

    Amazon Web Services offers the most flexibility in this regard because it provisions an "empty" image that you can put anything into, but nevertheless, applications cannot be easily moved due to its idiosyncratic storage framework, meaning they can't be easily migrated.

    Salesforce's Force.com is a development platform tied to a proprietary architecture deeply integrated with salesforce.com and unlike anything in a regular enterprise application. Google App Engine is a Python-based set of application services—fine if your application is written in Python and tuned to the Google application services, but enterprise applications, even those written in Python, are not already architected for this framework. Azure is a .NET-based architecture that offers services based on the existing Microsoft development framework, but it doesn't offer regular SQL RDBMS storage, thereby requiring a different application architecture, thus making it difficult to migrate existing enterprise applications to the environment.
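
As a small illustration of the rework involved, here is roughly what data access looks like when rewritten for the 2009-era Google App Engine Python SDK (google.appengine.ext.db) instead of a SQL RDBMS. This is a minimal sketch; the model and field names are invented, and the point is only that schema and queries must be re-expressed against the Datastore rather than ported as-is.

# A typical enterprise app reads data through SQL, e.g.:
#   SELECT * FROM orders WHERE customer = 'ACME' ORDER BY created DESC
# On App Engine the same access is rewritten against the Datastore API.
from google.appengine.ext import db

class Order(db.Model):
    customer = db.StringProperty(required=True)
    total = db.FloatProperty()
    created = db.DateTimeProperty(auto_now_add=True)

def recent_orders(customer_name, limit=20):
    # GQL resembles SQL but queries Datastore entities, with its own limits on
    # joins and indexes -- one reason a straight migration is not possible.
    query = db.GqlQuery(
        "SELECT * FROM Order WHERE customer = :1 ORDER BY created DESC",
        customer_name)
    return query.fetch(limit)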

    According to one person I spoke with, migrating applications out of internal data centers and into the cloud is the key interest driver for clouds among enterprises; once they find out how difficult it is to move an application to an external cloud, their enthusiasm dwindles.

    I would say that this is certainly a challenge for enterprises, since if it was easy to move applications into cloud environments, quick uptake would certainly be aided. And the motivation for some of the cloud providers to deliver their cloud offerings in the way they do is difficult to understand. Google's commitment to Python is a bit odd, since Python is by no means the most popular scripting language around. Google sometimes seems to decide something is technically superior and then to insist on it, despite evidence that it retards adoption. With regard to Salesforce, I can certainly understand someone with a commitment to the company's main offering deciding to leverage the force architecture to create add-ons, but it's unlikely that an existing app could be moved to force.com with any reasonable level of effort; certainly questions about proprietary lock-in would be present for any enterprise that might entertain writing a fresh app for the platform. It's quite surprising that Microsoft would not make it easy for users to deploy the same application locally or in Azure; while the Azure architecture enables many sophisticated applications, the lack of ability to easily migrate will dissuade many of Microsoft users from exploring the use of Azure.

    On the other hand, a different architecture than the now-accepted enterprise application architecture (leaving aside that current enterprise architectures are by no means fastened upon one alternative, so it's not as though the choice were between one universally adopted enterprise architecture and a set of dissimilar ones) doesn't necessarily mean that it is deficient or even too difficult to migrate an application to. It might be more appropriate to say that there is a degree of friction in migrating an existing application; that degree varies according to which target cloud offering one desires to migrate to.

    Certainly it seems well within technical capability for someone to develop a P2C (physical to cloud) migration tool that could automate all or much of the technical effort necessary for migration; of course, this tool would need to be able to translate to several different cloud architectures.
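
No such tool is named in the article, but a hypothetical P2C helper might start by scanning an application's configuration for dependencies that the clouds of the day did not provide directly, as in this sketch (the patterns and messages are invented for illustration).

# Purely hypothetical pre-migration check: scan an application's configuration
# for dependencies that the target cloud may not offer directly.
import re

CLOUD_UNFRIENDLY = {
    r"jdbc:|psycopg2|cx_Oracle": "relies on an RDBMS the target cloud may not offer",
    r"/var/|/srv/|C:\\": "writes to local paths rather than a storage service",
    r"smtp://localhost": "assumes a local mail relay",
}

def flag_migration_issues(config_text):
    """Return human-readable warnings for patterns found in a config file."""
    return [reason for pattern, reason in CLOUD_UNFRIENDLY.items()
            if re.search(pattern, config_text)]

sample_config = "db.url=jdbc:oracle:thin:@dbhost:1521  upload.dir=/var/uploads"
for issue in flag_migration_issues(sample_config):
    print("WARNING:", issue)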

    Even if an automated tool does not become available, there is the potential for service providers to spring up to perform migration services efficiently and inexpensively.

    Naturally, performing this migration would not be free; either software must be purchased or services paid for. The point is that this is not an insurmountable problem. It is a well-bounded one.

    The more likely challenge regarding clouds imposing a different architecture is that of employee skills. Getting technical personnel up to speed on the requirements of cloud computing with respect to architecture, implementation, and operation is difficult: it is a fact that human capital is the most difficult kind to upgrade. However, cloud computing represents a new computing platform, and IT organizations have lived through platform transitions a number of times in the past. In these times of Windows developers being a dime a dozen, it's easy to forget that at one time, Windows NT skills were as difficult to locate as a needle in a haystack.

    On balance, the lack of a convenient migration path for existing applications is going to hinder cloud computing adoption, but doesn't represent a permanent barrier.

    Next posting: The challenge of risk: legal, regulatory, and business

    Wednesday, February 25, 2009

    Thinking Out Cloud: Open APIs and Cloud Computing Standards

    Some vendors are now opening up their cloud computing application development environments.
     
    Among them, GoGrid is making a particularly distinctive move, promoting interoperability between cloud computing vendors around the GoGrid API. Flexiscale, RightScale, Eucalyptus, and others are already collaborating.
     

    My previous post, Beware Premature Elaboration, discussed some of the issues around cloud computing standards. If you want to hear more about the topic, and especially GoGrid, one of the most innovative cloud providers in the market, check out the Overcast Show #6. My co-host, James Urquhart, and I interview Randy Bias and Michael Sheehan of GoGrid.

    We discuss many topics (see Show Notes below), but I think the most interesting is the fact that GoGrid recently open sourced their API specification. As I said before, this is the right way to go. We have a number of open source APIs, including GoGrid, EUCALYPTUS, Enomaly and others, and now the market will start determining the winner.

    The GoGrid case is particularly interesting because of two main reasons. The first is that it is a proven API that is up and running in production and reflects a lot of experience from the trenches. The second is they are making a proactive attempt to reach out to other cloud providers and other members in the ecosystem to support their API, as Randy discusses in the podcast.

    Listen to the podcast here.

    Here are the full Show Notes:

    Show Notes:

    After a long hiatus we are back with the Overcast podcast. In Show #6 we have a discussion with Randy Bias and Michael Sheehan of GoGrid. Some of the topics we cover include:

    • The distinction that Randy and Michael have made in their blog posts between a Cloudcenter and Infrastructure Web Services, and the different approach GoGrid is taking to cloud computing compared to Amazon Web Services
    • The role of Platform-as-a-Service players, such as Google App Engine, and a glimpse into some of GoGrid's thoughts on opening up their infrastructure for others to offer a PaaS on top of it
    • Cloud computing standards -- do we need them now, and what is the correct approach for achieving them?
    • GoGrid's recent open sourcing of their API and their efforts to work with other cloud providers and related vendors and projects (such as Flexiscale, RightScale, and Eucalyptus) to rally around the GoGrid API
    • The role of projects such as Puppet and Chef in cloud computing, and GoGrid's plans for these technologies
    • GoGrid's Cloud Connect offering - a hybrid model for dedicated and cloud-style hosting  
    • GoGrid's storage offering 
    • and more... 

    Microsoft Windows Azure Pricing Expected Soon

    With relatively little information about Microsoft Azure publicly available, this article reports that pricing is expected to be announced soon.
    The prediction is that it will come to roughly $18 per user per month.
     
     

     

    Managers suggest the company's cloud computing efforts will be pay as you go, but customers can also prepay if they want discounts.

    By J. Nicholas Hoover,  InformationWeek
    Feb. 12, 2009
    URL: http://www.informationweek.com/story/showArticle.jhtml?articleID=214000086

    Microsoft will release pricing information on its Windows Azure soon and will also use Azure to power private cloud computing, a Microsoft executive told investors at a conference Wednesday.

    Doug Hauger, general manager of marketing and business strategy for Microsoft's cloud infrastructure services group, said Microsoft will announce pricing "soon," but added Windows Azure will cost less than the overall price of running a server internally. Azure will be pay as you go, he said, but customers can also prepay if they want discounts.

    As a corollary on pricing guidance, Hauger pointed to the example of Exchange Online, which is priced at $10 per user per month. Licensing the on-premises version of Exchange Server costs users $3 for Exchange Server per user per month, Microsoft says, but with additional costs (including infrastructure, IT staff, licensing and operational costs) Microsoft estimates it adds up to an average of $18 per user per month.
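
The arithmetic behind that comparison is simple; in the sketch below the $15 "everything else" component is just the difference between the two figures cited, not a number stated separately in the article.

# Per-user, per-month figures cited above; the third line is just the difference.
hosted_exchange_online = 10.00
on_prem_license = 3.00
on_prem_infra_staff_ops = 18.00 - on_prem_license   # 15.00
on_prem_total = on_prem_license + on_prem_infra_staff_ops

print(f"on-premises ~${on_prem_total:.0f} vs. hosted ${hosted_exchange_online:.0f} per user per month")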

    That means Microsoft is confident Azure will be cheaper for customers and will bring Microsoft some benefits as well since it can run cloud apps cheaper than normally because of economies of scale associated with huge data centers, Hauger said.

    With prices lower than on-premises software, Hauger said the poor economy will make Azure and other cloud computing platforms more attractive to businesses, despite corporate misgivings about the cloud.

    "It's become fairly evident that companies are becoming far more pragmatic about trading off concerns that have never left about privacy and security with economic benefits," he said.

    Still, he admitted that that will be true only for a certain subset of applications, including backup, simple .Net-managed applications, disaster recovery, and certain computational functions like photo rendering and drug research. "Your core line of business applications won't be going to the cloud relatively soon unless it's through a private cloud," he said. In order to run on Windows Azure, many of those lines of business apps -- especially older ones -- will have to be rearchitected somewhat, Hauger said.

    Microsoft itself will provide companies with a private cloud option, Hauger added. It's still unclear what exactly its offering will look like or whether it will come under a different brand than Windows Azure. Microsoft might offer private clouds as some sort of service that sounds more like a dedicated Internet-based cloud than a premises private cloud.

    "You would have a private cloud run for a hospital that would be certified as HIPAA compliant and run by the vendor instead of run by a health care client," Hauger said. "We can get the economies of scale in managing that infrastructure and the system and still provide guarantees around the data."

    In response to a question about Microsoft's value proposition versus that of its competitors, Microsoft laid out its view of the cloud computing platform market.

    "There are companies at the low end that are essentially providing you with a raw set of resources, and they say, 'Hey, here's a VM, here's a server, good luck, and let us know if you have any problems,' " he said, pointing to Amazon.com and VMware. "Then there are a set of people in the pure-play space -- Google is an example in this one with AppEngine -- and so you build an application and it runs, but you have no control over the resources underneath that."

    Microsoft, he argued, straddles both of these worlds. "We give you some guarantees around the level of resources you get, the power of that VM you have, the amount of memory you get, etcetera," he said. "But you also get simplified systems management that then takes you into that pure-play cloud services realm."

    In addition, he said, Microsoft's cloud, unlike those of Amazon and Google, is designed to integrate with on-premises applications. Microsoft also has a huge on-premises installed base and partner community to draw on, he added.

    That said, Hauger said Microsoft doesn't expect to monopolize cloud computing like it has client-server computing, even if many companies decide to standardize on one cloud, as he expects will eventually happen.

    "Our cloud is completely based on standardized Web services, and in that sense I think enterprises will look for clouds that provide those standards and that interoperability," he said. "They don't want vendor lock-in, but they won't move, likely." He added that he expects there to be only a handful of major public cloud vendors a decade from now, rather than hundreds.


    A Match Made in Heaven: Gaming Enters the Cloud

    AMD has announced Fusion Render Cloud, a game infrastructure business built on cloud computing.
    Unlike typical cloud computing environments, the system design is optimized for running graphics-intensive applications.
    The CPUs are AMD's own Phenom II processors, paired with more than 1,000 ATI Radeon HD 4870 GPUs.
     
    Bringing games into cloud computing offers major benefits:
    • Graphically rich games can be played on devices with weak CPUs and GPUs, such as mobile phones, dramatically increasing the number of players and how often they play.
    • Ordinary users whose PCs lack high-end graphics hardware can enjoy online games; netbooks, which are gaining market share, are one target.
    • Games can be played without installing any software.
    Many predict that making graphics-heavy applications deliverable from a cloud computing environment will create an infrastructure adopted well beyond the game industry, significantly reshaping the market.
     

    Online gaming has been implemented in several ways—through traditional consoles connected to the Internet, on PCs that usually require a downloaded client, and with browser games often created in Flash with limited graphics. All of that is about to change.

    Led by a stunning announcement by AMD at the Consumer Electronics Show last month, cloud computing is set to become the next frontier in the gaming world. Fusion Render Cloud, AMD's planned petaflop-busting supercomputer, is expected to stream the highest-quality graphics to the smallest devices, and figures to be the most intriguing and talked-about example of cloud gaming in the next few years should it proceed past the concept stage. Others are sure to follow in its wake, and some have already preceded it to the cloud. Playce, a startup that wants to revolutionize casual gaming, announced its plans for cloud gaming last September and is set to enter beta testing this month. Gaming platform Steam has taken a small step, offering an extension for users to store settings and configurations on the company's servers.

    Until recently, however, there has been little discussion about a marriage between gaming and cloud computing. Cloud services have primarily been geared toward enterprise and business needs, such as Salesforce.com and Google Apps, which make use of greater capacity and scalability but don't require large amounts of bandwidth.

    "There has always been the assumption that gaming will be irrelevant, you will never be able to do gaming in the cloud because it is so graphics intensive, especially 3D gaming like World of Warcraft," said Geva Perry, a cloud computing expert who has tracked the development of cloud gaming in his blog. "We are getting more and more bandwidth, and now we are seeing the ability to process intense computations on the server side, and this one is a huge breakthrough."

    Petaflop Power

    If anything, the concept for Fusion Render Cloud is certainly ambitious. The massively parallel supercomputer is expected to break the petaflop barrier—more than one quadrillion floating-point operations per second—to take a place among the fastest computers in the world. The prototype supercomputer will run on Phenom II processors, AMD 790 chipsets, and more than 1,000 ATI Radeon HD 4870 graphics processors, according to AMD. Another company, Otoy, is developing the software to render scalable server-side graphics that would be compressed and streamed in real time.
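
A back-of-the-envelope check supports the petaflop claim, assuming the commonly cited peak of roughly 1.2 single-precision teraflops per Radeon HD 4870 (a figure not taken from this article) and the "more than 1,000" boards quoted above.

# Assumes ~1.2 TFLOPS peak single precision per Radeon HD 4870 (commonly cited
# spec, not a figure from the article).
gpus = 1000
tflops_per_gpu = 1.2

total_petaflops = gpus * tflops_per_gpu / 1000.0
print(f"~{total_petaflops:.1f} petaflops of peak GPU throughput")   # ~1.2 PFLOPS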

    All that technology is expected to coalesce into an idea nearly as powerful as the processing power behind it. "Imagine watching a movie half-way through on your cell phone while on the bus ride home, then, upon entering your home or apartment, switching over to your HDTV and continuing to watch the same movie from exactly where you left off, seamlessly, and at full-screen resolution," said AMD director of digital media Charlie Boswell. "Imagine playing the most visually intensive first-person-shooter game at the highest image-quality settings on your cell phone without ever having to download and install the software or use up valuable storage space or battery life with compute-intensive tasks."

    The ability to send high-end graphics to small devices is a key aspect of Fusion Render Cloud. Most technology pundits agree that portability is an increasingly important factor for today's computing needs, and the success of the iPhone ensures that consumers will want quality games that can fit in a pocket. Netbooks also figure to become increasingly popular, and cloud gaming would fit well with those devices, which lack powerful graphics cards.

    Otoy's technology would compress data streams using the GPU's massive parallel computing capability to reach those devices efficiently, according to AMD representative Gary Silcott. "Their testing to date has demonstrated that latency and bandwidth are manageable," he said.

    What's more, cloud computing could make games more accessible to casual users, many of whom don't make a big and constant investment in hardware in the same manner as hardcore gamers. "All these computer guys sort of worshipping the technology really just gets in the way of the common user," Boswell said in an Inquirer interview. "Fusion Render Cloud is going to bring in those people who have been excluded by the digital divide."

    Some game companies, including Electronic Arts, are already preparing content to be delivered through Fusion Render Cloud, expected to debut sometime in the latter half of 2009. Those games are likely to take full advantage of the platform, including virtual-world games with unlimited photorealistic detail.

    The technology won't be limited to games. Movies and other PC applications figure to get a boost from AMD's supercomputer, as special-effects designers take advantage of the system's combination of CPUs and GPUs.

    Sin City and Spy Kids director Robert Rodriguez is already a fan, expressing his support with AMD's announcement. "Having the means to create interactive eye-definition movie and game assets with Fusion Render Cloud and then to make them available to a broader audience through new distribution methods will bring about a renaissance in content creation and consumption," Rodriguez said.

    Essentially, the concept could provide moviemakers and game creators with a platform to create high-definition graphics without needing advanced equipment. "Leveraging the processing capability of the Fusion Render Cloud to output a high-definition scene could greatly accelerate the production time without having to have the assets in-house," Silcott said. "That's the radical change envisioned with this prototype. The capability becomes available to a much longer list of creative content developers."

    A Playce in the Cloud

    While a startup company like Playce doesn't have the resources to compete with a heavyweight such as AMD, its goal is to serve individual game designers who want to create small, socially oriented games that don't require a large time investment.

    "We wanted to create a completely different experience," cofounder Carmel Gerber said at TechCrunch 50 last year. "We believe that people should be able to send a link to a friend, jump to a site together, and play a really hot, immersive 3D game. No DVD, no download, and no cost."

    To do that, Playce is creating a platform based on georeferenced satellite imagery, using real-world locations as the basis for 3D games such as car races and first-person shooters. It's a concept the company likened to a combination of Playstation and Google Earth.

    "Our world is very exact," Carmel said. "That means every pixel has a world coordinate, and like other geoworld applications, every pixel also has an http address."

    The company has developed a streaming optimization technology to deliver its games through standard Internet connections. In an interview with VC Café, the company said the graphics engine would be capable of streaming three to four terabytes of data, roughly the same performance as Google Earth.

    The real test will be Playce's ability to attract developers, and the company thinks its model will easily catch on. Developers will have access to an API (currently using C++ but eventually featuring a custom language that doesn't require coding experience), advanced 3D tools, hosting, and marketing and monetization services. "The game developers that we spoke with estimated that we can save them between 50 and 70 percent of their game development costs," Gerber said. Developers would also be able to port games to different locations, enabling them to quickly produce sequels.

    "This is the beauty of the cloud," said Perry, who foresees platforms like Playce eventually catching on with larger companies. "Now if you have a concept for a game, you don't need to set up the whole infrastructure, which would be very complex and expensive. You just focus on what you do best, which is the creative aspect of it and the game design and so on."

    At this point, Playce has about 500 developers ready to participate in its beta, and the company is building up its available virtual reality worlds, with parts of Los Angeles and Asia ready to go.

    Game Over for Consoles?

    Some forms of cloud gaming figure to start as hybrids, evidenced by Steam Cloud, which rolled out last year. Beginning with the game Left 4 Dead, users' game options can be saved to Steam's servers, which makes it easier to install games on different computers or make upgrades without losing data. The company plans to eventually make the feature available on its back catalog.

    However, it may not be long—perhaps within a decade—before cloud gaming becomes the dominant platform and drives consoles to the brink of extinction. The news about Fusion Render Cloud had bloggers buzzing about an exciting future in which games are available wherever you go, on a variety of devices, with no need for storage or GPUs.

    "All you really need on your end is a screen and a keyboard or mouse or joystick," Perry said. "Everything else happens on the server side. I do think if you take this development and extrapolate it into the future, it will mean the end of gaming consoles. It may take years, but that is the direction it's going."

    Leaders in the gaming industry have sensed the change coming, but there is no consensus about how long consoles will survive. In a BBC article, Electronic Arts' head of international publishing Gerhard Florin said gaming platforms must go open source to accommodate the new form of distribution. "We're platform agnostic and we definitely don't want to have one platform which is a walled garden," Florin said. "I am not sure how long we will have dedicated consoles—but we could be talking up to 15 years."

    AMD envisions its prototype as a complement for consoles, so there could still be a place for GPUs at the client level. "The Fusion Render Cloud concept is a gaming scenario that is very attractive for some usage models, but doesn't replace other options," Silcott said. "It is very early in the development of the prototype to be making projections on what usage will actually look like and how it will impact the market."

    Silcott listed a variety of reasons that GPUs could still be relevant, including sharing compute load with CPUs on general-purpose applications such as video transcoding, accelerating graphics in combination with Fusion Render Cloud, and reducing client performance demands.

    Ultimately, gamers might simply add cloud gaming to the variety of options currently available, using their Wii one day and hopping onto the cloud the next. At this point, the only certainty is that gaming will go through some exciting changes in the future.

    Wednesday, February 18, 2009

    Best-in-Class Retailers Reduce Online IT Costs Due to SaaS Deployments

    Research firm Aberdeen surveyed SaaS adoption in the retail industry and published the results in a report.

    Of the 110 companies surveyed, one third responded that they have already adopted SaaS, achieving a 17% reduction in IT costs.

    Representative examples include New York & Company and Best Buy.

    New York & Company, an adult apparel retailer, outsources its entire web platform to a SaaS provider.

    Best Buy, the largest consumer electronics retailer in the US, outsources its customer-facing content management to a SaaS vendor.

    The benefits cited include:
    • Outsourcing makes it easier to adopt advanced web application technology
    • Improved service quality
    • Improved system scalability
    • Lower TCO by deploying only the resources that are needed
    The main target of SaaS adoption appears to be web applications.
     
     

    Aberdeen, a Harte-Hanks Company (NYSE: HHS), surveyed 110 retailers between December 2008 and January 2009 and determined that, following SaaS-based deployments within several areas of the retail online technology value chain, Best-in-Class retailers reduced their IT costs by 17% year over year. To obtain a complimentary copy of the report, visit: http://www.aberdeen.com/link/sponsor.asp?cid=5622.

    According to the retailers surveyed, one-third have deployed some level of SaaS delivery for their web commerce platform in terms of bandwidth, content management, search, comparison shopping, analytics, or other personalization tools. There are several examples of SaaS-based web platform models, such as the one adopted by New York & Company, where they've outsourced their entire web platform to a SaaS-model company, or Best Buy, which utilizes a SaaS vendor to manage their dynamic content management. That's two major companies using SaaS-deployed applications in very different, but very meaningful ways. "Under present economic conditions, SaaS-based web commerce applications do support the lean IT initiatives of retailers both in terms of reducing capital infrastructure and associated IT support costs," says Sahir Anand, senior retail analyst and author of the "SaaS in Retail" benchmark report.

    According to Anand, "From the end user's perspective, the key benefits are the opportunity to more freely try initiatives, remove technical complexity, and ensure the highest levels of service. Other benefits include vastly improved scalability, lower total cost of ownership (TCO) -- you only pay for what you use, and economies of scale. Due to many users of the same application, the vendor is incentivized to deliver functionality on a frequent basis and to resolve any bugs or other technical glitches immediately." Retailers can achieve the degree of online flexibility and system responsiveness now necessary to remain competitive. The current recessionary conditions have brought to the fore three key trends: 1) Migrate appropriate applications away from the retailer IT to a third-party SaaS vendor hosting and maintaining the application; 2) Align SaaS solutions with the company's vision, mission statement, and business model; 3) Place customer-facing, shopping-enhancing applications at the front of the priority list.

    In SaaS technology investments, web applications are far and away the leaders of the pack, with 65% of respondents indicating that this was the leading area of SaaS IT spend. This large number is a result of a confluence of factors: the "mainstreaming" of ecommerce across heretofore reticent socio-economic lines; the near pervasiveness of broadband; and a whole host of personalized applications often lumped under the umbrella of web 2.0 applications. These all contribute to a more satisfying, content-filled, dynamic and immersive ecommerce shopping experience.

    Savvis Unveils Cloud Compute Service

    An article reporting that Savvis, a major data center operator, has launched its own cloud computing service called Savvis Cloud Compute.

    This follows rival Terremark Worldwide's entry into the cloud computing business and reflects growing market demand.

    The following enterprise use cases are proposed:
    • Sites that frequently see sharp spikes in web server traffic (SaaS applications, financial trading applications, e-commerce sites, etc.)
    • Off-site data backup for recovery when a system failure occurs
    • Development and test environments for internally developed applications

    Managed hosting specialist Savvis, Inc. (SVVS) today unveiled Savvis Cloud Compute, a new service that moves its utility computing service into the cloud, providing enterprise customers more flexibility in how they provision, manage and pay for services running in Savvis data centers.

    Central to the new offering is an improved customer portal that gives users more control in provisioning virtual compute and storage capabilities, and the ability to purchase fractional compute resources on demand by the instance with flexible month-to-month business terms.

    The new offering positions Savvis to compete in the emerging market for enterprise cloud computing, porting its existing utility computing operation to a cloud delivery and billing model.  

    "Utility computing was the forerunner for what evolved into the enterprise cloud computing solutions we are seeing today," said Bryan Doerr, Savvis Chief Technology Officer. "Savvis pioneered virtualized IT services in the network and the data center and is proud to continue expanding these capabilities with cloud-based IT infrastructure as a service. 

    Ex-Salesforce President Resurfaces At Xactly

    Salesforce.com recently announced that several executives were departing amid weakening results; one of them, former Salesforce.com President Steve Cakebread, has joined Xactly, Inc. as Chief Financial Officer and Chief Administrative Officer.

    Xactly is a SaaS vendor offering sales performance management tools. It is a Salesforce.com technology partner and positions its products as complementary to CRM offerings from Oracle and other vendors.
     

    Steve Cakebread, who unexpectedly resigned as president at Salesforce.com last week, has popped up as chief financial and chief administrative officer at on-demand software vendor Xactly.

    Cakebread will be responsible for all financial operations at Xactly, as well as for all legal, IT, facilities and human resource operations.

    San Jose, Calif.-based Xactly provides Software-as-a-Service applications for developing sales performance management programs, including sales quotas and territories, and sales incentives such as commissions. Data collected through the system can also be used to analyze sales through multiple channels.

    Xactly positions its applications as complementary to CRM systems from Salesforce, Oracle (NSDQ:ORCL) and other vendors, and considers itself to be a Salesforce technology partner.

    Cakebread served as Salesforce's CFO for six years and played a key role in taking the company public in 2004 and managing its finances as it grew its sales to $749 million -- a resume that startup Xactly undoubtedly found attractive as it grows toward a possible IPO. Cakebread "possesses a one-of-a-kind track record of financial-management success in today's software industry," said Xactly president and CEO Christopher Cabrera in a statement.

    Cakebread became Salesforce's president and chief strategy officer in late 2007. Last week, a Salesforce spokesman would only say that Cakebread resigned effective Feb. 1 for personal reasons.

    Open Source Software Business Models On The Cloud

    An article explaining how open source software (OSS) contributes significantly to the SaaS business.
    It will be interesting to see how MapReduce and other freely available open source technologies will figure in the cloud computing market going forward.

    Open Source Software Business Models On The Cloud

    There are strong synergies between Open Source Software (OSS) and cloud computing. The cloud is a great platform on which OSS business models, ranging from powering the cloud itself to offering OSS as SaaS, can flourish. There are still open issues around licenses and IP indemnification, and ongoing discussion about the commercial open source strategies needed to support progressive OSS business models, but I see cloud computing as a catalyst for innovation in OSS business models.

    Powering the cloud:
    OSS can power cloud infrastructure just as it has powered on-premise infrastructure, letting cloud vendors minimize TCO. A less-discussed benefit of OSS for the cloud is the use of core technologies such as MapReduce and Google Protocol Buffers, which underpin parallel computing and lightweight data exchange. There are hundreds of other open (source) standards and algorithms that are a perfect fit for powering the cloud.
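
    To make the MapReduce point concrete, here is a minimal, single-machine sketch of the pattern (word counting) in plain Python. It is only an illustration of the programming model; real cloud deployments such as Hadoop distribute the map and reduce phases across many machines, and none of the names below come from any particular framework.

        from collections import defaultdict

        def map_phase(documents):
            """Emit (word, 1) pairs for every word in every document."""
            for doc in documents:
                for word in doc.split():
                    yield (word.lower(), 1)

        def reduce_phase(pairs):
            """Group the intermediate pairs by key and sum the counts."""
            counts = defaultdict(int)
            for word, count in pairs:
                counts[word] += count
            return dict(counts)

        if __name__ == "__main__":
            docs = ["the cloud powers OSS", "OSS powers the cloud"]
            print(reduce_phase(map_phase(docs)))
            # {'the': 2, 'cloud': 2, 'powers': 2, 'oss': 2}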

    OSS lifecycle management: There is a disconnect between source code repositories, design-time tools, and the application runtime. Cloud vendors have the potential not only to provide an open source repository such as SourceForge but also to let developers build their code and deploy it on the cloud using the horsepower of cloud computing. Such centralized access to distributed computing makes it feasible to support the end-to-end OSS application lifecycle on a single platform.

    OSS dissemination: Delivering pre-packaged and tested OSS bundles with support and upgrades has proven to be a successful business model for vendors such as Red Hat and SpikeSource. The cloud as an OSS dissemination platform could allow these vendors to scale up their infrastructure and operations to disseminate OSS to their customers. These vendors also have a strategic advantage if their customers want to move their own infrastructure to the cloud. This architectural approach will scale to support all kinds of customer deployments - cloud, on-premise, or side-by-side.

    The distributed computing capabilities of the cloud can also be used to perform static scans that identify changes between versions, track dependencies, and minimize the time needed to run regression tests. This could allow companies such as Black Duck to significantly shorten the code scans behind a variety of their offerings.
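
    As a rough illustration of the scanning idea, the sketch below fans per-file fingerprinting out across local worker processes, which stand in for cloud workers; the hashing logic and the release directory names are my own placeholders, not any vendor's actual scanner.

        import hashlib
        from multiprocessing import Pool
        from pathlib import Path

        def fingerprint(path):
            """Hash one file so version-to-version changes can be detected."""
            return hashlib.sha256(Path(path).read_bytes()).hexdigest()

        def scan_tree(root, workers=8):
            """Fingerprint every source file under root in parallel."""
            root = Path(root)
            files = [p for p in root.rglob("*.py") if p.is_file()]
            with Pool(workers) as pool:
                digests = pool.map(fingerprint, files)
            return {str(p.relative_to(root)): d for p, d in zip(files, digests)}

        if __name__ == "__main__":
            # Hypothetical release directories; comparing scans shows what changed.
            old = scan_tree("release-1.0")
            new = scan_tree("release-1.1")
            changed = [f for f in new if old.get(f) != new[f]]
            print(f"{len(changed)} files changed between releases")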

    Compose and run on the cloud: Vendors such as Coghead and Bungee Connect provide composition, development, and deployment of tools and applications on the cloud. These are not OSS solutions, but OSS vendors can build a similar business model to commercial software and deliver the application lifecycle on the cloud.

    OSS as SaaS: This is the holy grail of all the OSS business models mentioned above. Don't just build, compose, or disseminate - deliver a true SaaS experience to all your users. In this model the "service" itself is free and open source; monetization comes not from charging for the services but from using the OSS services as a base platform and providing a value proposition on top of them. Using the cloud as an OSS business platform would allow companies to experiment with their offerings in a true try-before-you-buy sense.

     

    2009年2月13日金曜日

    Dell pairs with Xsigo on virtual I/O

    Xsigo Systems is one of the vendors developing I/O virtualization technology.
    It recently announced an agreement with Dell, under which Dell will sell Xsigo Systems' VP780 I/O Director alongside its own servers.

    Xsigo currently has 30 customers and 40 trial users.

    Dell's rivals HP and IBM already have I/O virtualization technology of their own, and there are rumors that Dell may acquire Xsigo.
     
     

    Xsigo Systems - which back in September 2007 launched an in-band I/O virtualization appliance that cuts the hard-coded links between servers, storage, and networks and completely virtualizes those connections - has landed its first tier one server partner and distributor: Dell.

    Dell is particularly excited by its partnership with Xsigo because unlike rivals IBM and Hewlett-Packard in the blade server market, it does not have its own I/O virtualization gadgetry. And technically speaking, Dell still doesn't have any, because it has not acquired the upstart Xsigo, which is itself a bit of a mystery unless the company's owners want more money than Dell is willing to pay. Presuming it was willing to pay at all, of course.

    The fact is, Dell needs what Xsigo has on the I/O virtualization front if it wants to compete with HP and IBM. HP created its VirtualConnect I/O virtualization for its c-Class BladeSystem blades and got the technology, which virtualizes the links from blade servers to network switches and SANs, out the door about six months late in February 2007. IBM got into the virtual I/O racket for its blade customers back in November 2007 with its own Open Fabric Manager for its BladeCenter blade servers.

    Both HP and IBM have reportedly dabbled with the Xsigo VP780 I/O director - IBM certified it as being compatible with its server lines soon after the 2007 launch, and HP followed suit soon after. But Dell has taken a more direct approach and inked a deal to resell the appliance to its customers as options with its PowerEdge servers and for its PowerVault and EqualLogic disk arrays.

    Because Xsigo's I/O Director is a rack-mounted appliance, it isn't restricted to blade servers, as the I/O virtualization technologies from HP and IBM currently are. IBM and HP could create appliances out of the hardware and software embodied in the respective VirtualConnect and Open Fabric Manager products and make appliances similar to what Xsigo has cooked up. And considering the number of rack servers that these companies still sell - and the vast installed base that would love to have easier server, storage, and network management made possible by virtualizing the links between machines without having to move to blades - it is a bit of a wonder why HP or IBM hasn't snapped up Xsigo. (Playing hard to get, no doubt).

    "What we are excited about with the Xsigo appliance is the openness," explains Rick Becker, vice president of software and solutions at Dell's Product Group. "This works across form factors and vendors, and in a heterogeneous data center, this can mange it all no matter what logo is on the box."

    Dell has integrated its OpenManage software to link into the Xsigo VP780 I/O Director appliance, and the two companies set up some combined reference accounts using their respective products together in the lead up to the announcement. Thus far, Xsigo hasn't hit the mainstream yet, with only 30 customers and another 40 trials. But getting the backing of Dell and having an architecture that can virtualize the network and SAN links for both physical and virtual servers, as the Xsigo appliance does, could mean that Xsigo has a much larger customer base soon.

    The base VP780 appliance from Xsigo costs $30,000 and comes in a 4U rack chassis. It has links to reach out to 24 server ports plus 15 I/O modules slots, each with four Gigabit Ethernet, one 10 Gigabit Ethernet, and two 4 Gb Fibre Channel ports. The I/O modules also include an SSL offload module. The Ethernet modules support iSCSI disk links, and the Fibre Channel links go out to SANs. The box can link to an expansion switch that allows it to be hooked up to hundreds of servers. On the server side, each server linking into the appliance has to be equipped with a virtual NIC and a virtual HBA, which uses InfiniBand as a transport back into the appliance.
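
    For readers keeping score, the short Python tally below just adds up the connectivity described above, reading the article as saying each of the 15 I/O module slots carries that port mix; the data layout is purely illustrative and is not Xsigo's configuration format.

        IO_MODULE_SLOTS = 15
        PORTS_PER_MODULE = {"1GbE": 4, "10GbE": 1, "4Gb FC": 2}
        SERVER_PORTS = 24

        print(f"Server-facing ports: {SERVER_PORTS}")
        for kind, per_module in PORTS_PER_MODULE.items():
            print(f"{kind} ports across all I/O modules: {per_module * IO_MODULE_SLOTS}")
        # 1GbE: 60, 10GbE: 15, 4Gb FC: 30, plus the optional SSL offload module.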

    According to Becker, Dell is also toying with the idea of using the appliance in parallel supercomputing setups, since it can also do server-to-server switching at 20 Gb/sec dual-rate speeds on InfiniBand links.

    Dell's direct sales team and the Dell channel have been authorized to push the I/O virtualization appliance, and given the state of the economy, they are going to be focusing on the amount of iron and capital expenditures that the I/O appliance can save companies. In a typical setup of 120 servers, says Becker, the servers are each equipped with four Fibre Channel ports and eight Ethernet ports. (This sounds a bit heavy on the ports, but this is Dell's example, not mine).

    The I/O adapters for this - not including the servers and storage - come to $1.4m. Now, replacing that with two Xsigo virtual I/O appliances, which puts two Xsigo ports on each server and a single wire that carries Ethernet and Fibre Channel data, the capital outlay to link those same 120 servers to switches and their storage comes to $316,000. Now, add in operating expenses. Dell reckons all those adapters being gone means about $100,000 less in power costs over three years, and administration costs (assuming I/O gets tweaked twice a year on average per server) go down by $389,000 because the I/O is virtualized in the appliance and is much easier to tweak. That is a savings of $1.6m over three years for virtual I/O compared to physical I/O.
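
    The arithmetic behind Dell's claim is easy to reproduce; the snippet below simply re-adds the figures quoted in the paragraph above (all dollar amounts are Dell's example numbers, not independent estimates).

        physical_io_adapters = 1_400_000   # FC + Ethernet adapters for 120 servers
        xsigo_deployment     = 316_000     # two VP780 appliances plus per-server ports
        power_savings_3yr    = 100_000     # fewer adapters -> lower power draw
        admin_savings_3yr    = 389_000     # I/O changes made in the appliance instead

        capex_savings = physical_io_adapters - xsigo_deployment
        total_savings = capex_savings + power_savings_3yr + admin_savings_3yr
        print(f"Capital savings:    ${capex_savings:,}")   # $1,084,000
        print(f"Three-year savings: ${total_savings:,}")   # $1,573,000, i.e. ~$1.6m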

    The Dell-Xsigo partnership is non-exclusive. Which means IBM and HP can do similar deals tomorrow, if they want. Or maybe EMC or Cisco Systems will swoop down and buy Xsigo outright, just for spite. ®

    Two Managed Service Providers Merge

    An article reporting that Tribridge and Navint have merged.
    Both companies specialize in enterprise ERP and CRM operations; by combining their respective coverage areas, the merged company will continue the business with nationwide coverage of the US.

    Two Managed Service Providers Merge

    Tribridge and Navint have announced a merger of equals to form a national managed service provider that blankets the US. The merged company will specialize in MSP services, enterprise resource planning (ERP), customer relationship management (CRM) and Microsoft SharePoint services. This is the latest deal to land on the MSPmentor M&A Tracker, which follows mergers and acquisitions in the managed services market.

    First, a word of caution: I always worry when two companies say they are engaging in a "merger of equals." History shows there's no such thing: I realize the merger of equals may refer to dollars, cents and business size. But when it comes to successful mergers, one corporate culture typically survives and a single executive leader typically moves the company forward.

    Still, the merging companies are quick to note that they have similar cultures. According to a press release:

    For Tribridge and Navint, the merger is as much about blending corporate cultures as it is about expanding service delivery capabilities. All team members will continue in their respective roles, and offices in every region will remain open. Both firms boast a disciplined project implementation methodology, stemming from a strong "Big 5" consulting services heritage that launched the careers of its founders.

    Tribridge says it has a strong presence in the southeast and central US regions. Navint says it has a solid presence in the northeast and western regions. Snap the pieces together, and you get a national company that will be known as Tribridge.

    Terms of the deal were not disclosed.