Wednesday, December 30, 2009

In 2010-2011, reliability (uptime) will become a bigger issue than security: looking back at the cloud outages of 2009 =>

you notice that quite a number of downtime incidents were reported in the press.

The article points out that, when you think about it, the security problem is actually quite simple: it is a matter of rigorously applying in the cloud the same solutions already used in existing IT infrastructure (authentication, data privacy through encryption, data integrity, and so on). I basically agree. Security around virtualization technology and cloud-specific tools brings some added difficulty, but fundamentally the work to be done is the same.

System reliability is a somewhat different story. Once operations reach the scale of large data centers with thousands or tens of thousands of servers, the technology and know-how required to ensure reliability differ considerably from what is needed for the IT assets inside a typical enterprise. It is a matter of finding answers to problems that did not exist before, and how cloud providers tackle this will become a matter of survival for them.


Cloud Reliability Will Be Bigger than Cloud Security for 2010-11

Establishing multi-cloud reliability and fault tolerance

We have all the tools for securing information in a Cloud: establishing trust through identity, data privacy through encryption, and content integrity through signatures.  We are overly focused on Cloud Security issues and less on reliability.  This is all about to change.  Following the outages experienced by Amazon EC2 in 2009, another premier cloud provider, Rackspace, suffered an outage on December 18.  Using technology such as Forum Systems XML/Cloud gateways is essential for establishing multi-cloud reliability and fault tolerance.
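As a rough sketch of the underlying idea (the endpoint URLs below are hypothetical, not anything named in the article), a client or gateway that fails over across two independently hosted copies of a service might look like this in Python:

```python
import urllib.request
import urllib.error

# Hypothetical endpoints: the same service deployed with two independent
# cloud providers (these hostnames are placeholders, not real hosts).
ENDPOINTS = [
    "https://service.cloud-a.example.com/status",
    "https://service.cloud-b.example.com/status",
]

def fetch_with_failover(endpoints, timeout=5):
    """Try each replica in order and return the first successful response.
    A multi-cloud gateway applies the same basic idea at the network edge."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err  # this provider is unreachable; try the next
    raise RuntimeError(f"all endpoints failed, last error: {last_error}")
```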

Rackspace Cloud Computing Outage
— According to an Apparent Networks Performance Advisory issued today, cloud services provider Rackspace experienced a connectivity loss at its Dallas-Fort Worth data center on Dec. 18, 2009. Access to business services at that data center was not possible during the outage, which began at approximately 4:37 p.m. eastern time and lasted about 35 minutes. The Apparent Networks Performance Advisory is based on intelligence provided by the company's Cloud Performance Center, a free service that utilizes Apparent Network's PathView Cloud service to test the performance of cloud service providers such as Amazon, Google and GoGrid.

http://ow.ly/QMUi

Tuesday, December 29, 2009

The Cloud Security Alliance (CSA) has published a new version of its cloud security guidance, aiming at a more structured and easier-to-understand approach =>

In particular, each of the domains that make up the whole is, viewed as a business, a theme around which products could exist, and the framework will be useful material for explaining the structure of the cloud industry going forward.

Standardization in the cloud industry has been slow, but it may be about time for a shared understanding of a structured approach to emerge.


Last week, the Cloud Security Alliance (CSA) released its Security Guidance for Critical Areas of Focus in Cloud Computing V2.1. This is a follow-on to the first guidance document, released only last April, which gives you a sense of the speed at which cloud technology and techniques are moving. I was one of the contributors to this project.

The guidance explores the issues in cloud security from the perspective of 13 different domains:

Cloud Architecture

  • Domain 1: Cloud Computing Architectural Framework

Governing in the Cloud

  • Domain 2: Governance and Enterprise Risk Management
  • Domain 3: Legal and Electronic Discovery
  • Domain 4: Compliance and Audit
  • Domain 5: Information Lifecycle Management
  • Domain 6: Portability and Interoperability

Operating in the Cloud

  • Domain 7: Traditional Security, Business Continuity, and Disaster Recovery
  • Domain 8: Data Center Operations
  • Domain 9: Incident Response, Notification, and Remediation
  • Domain 10: Application Security
  • Domain 11: Encryption and Key Management
  • Domain 12: Identity and Access Management
  • Domain 13: Virtualization

I thought the domain classification was quite good because it serves to remind people that technology is only a small part of a cloud security strategy. I know that's become a terrible security cliche, but there's a difference between saying this and understanding what it really means. The CSA domain structure–even without the benefits of the guidance–at least serves as a concrete reminder of what's behind the slogan.

Have a close look at the guidance.  Read it; think about it; disagree with it; change it–but in the end, make it your own. Then share your experiences with the community. The guidance is an evolving document that is a product of a collective, volunteer effort. It's less political than a conventional standards effort (look through the contributors and you will find individuals, not companies). The group can move fast, and it doesn't need to be prescriptive like a standard–it's more a distillation of considerations and best practices. This one is worth tracking.

http://kscottmorrison.com/2009/12/23/cloud-security-alliance-guidance-v2-released/

Wednesday, December 23, 2009

Problems with Amazon Web Services' SAS70 audit: criticism over not disclosing the audit results

Many past articles have criticized Amazon's approach to security, and on the whole they point to the gap between Amazon's policy of not disclosing internal information and the disclosure that SAS70 is supposed to demand.
The article below analyzes this in fair detail, and digs quite deeply into the specific problem that Amazon has not disclosed the audit criteria behind the SAS70 it claims to have obtained.

Why is Amazon's SAS70 Audit Bogus?

At first glance it seems like Amazon's recent announcement of a successful SAS70 audit is grounds for celebration[1]. Certainly it has met with fanfare on Twitter and blogs.

Unfortunately, a SAS70 audit isn't what most people think it is.  Worse yet, Amazon's reluctance to provide details of the audit provides a false sense of security with no tangible benefits.

Let me explain.

Understanding the SAS70 Audit
The SAS70 is a methodology for performing an audit, not the audit rules themselves. The SAS70 can prove whatever you decide it needs to prove, from taking the garbage out to turning the lights on.

From Wikipedia:


SAS 70 defines the professional standards used by a service auditor
to assess the internal controls of a service organization and issue a service auditor's report.

Here's how it works.

For a SAS70, you must specify a series of "controls" and "control objectives". Like it sounds, you are asserting that a given 'control' meets a goal or objective.  An example of a control might be the 'new user creation process' or a 'firewall'.  An example of a control objective might be the following[2]:


The new user creation process MUST guarantee that a user's password
is at least 8 characters long and composed of a mix of at least one uppercase,
one lowercase, and one numerical character.
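
A control like this is something that can be verified mechanically. As a toy sketch (assuming exactly the objective quoted above, which is itself only an example), a check in the new user creation process might look like:

```python
import re

def meets_control_objective(password: str) -> bool:
    """Check a password against the example control objective above:
    at least 8 characters, with at least one uppercase letter,
    one lowercase letter, and one digit."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )

print(meets_control_objective("weakpass"))    # False: no uppercase, no digit
print(meets_control_objective("Str0ngpass"))  # True: satisfies all four rules
```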

Once all of the control objectives are in place an outside auditor, like Deloitte & Touche, comes in and verifies that you are compliant with the stated control objectives over a period of time. If it is a Type 1 audit the period is 3 days. If it is a Type 2 the period is 6 months.

Now here's the rub: Who decides what the control objectives are? An outside agency? A regulatory body?

None of the above. The company being audited decides and can make the control objectives anything they like. Here's a SAS70 FAQ response on the topic right from the SAS70.com website.

Again, the SAS70 is just an auditing framework.  Why then do so many think it's useful?

Background on the SAS70 Audit
The SAS70 comes out of the financial industry and is a relatively generic framework for that reason. The financial industry has tons of different regulatory requirements that vary from state to state and country to country. Moreover, within the financial industry these kinds of audits are undertaken all of the time, the parties involved know what they are testing for, and how to negotiate it.

For example, a large bank might outsource work to a secondary institution and have a desire to see that institution provide proof they are following certain guidelines or regulations.  A good example is the Bank Secrecy Act. The large bank in this case knows what the BSA requires and how to evaluate the secondary institution's SAS70.  This knowledge allows them to assess the secondary institution's level of compliance with the BSA. At the same time, the secondary institution is familiar with what its large partners will require and sets up its annual Type 2 to cover the 'usual suspects' of controls and control objectives.

So how did we get here?

Hosting Companies and the SAS70
In recent years as financial institutions began to outsource they required that various hosting (and other) businesses perform the audit as well.  Unlike their usual partners it hasn't been clear what hosters need to be compliant with. Because of this most folks have simply done these SAS70s as simple Type 1s that are one-offs. This allowed the hosters to keep their costs down while allowing the bank to outsource and the hosters to generate revenue.

Here's the problem: Cloud computing is ushering in whole new ways of delivering IT services.

It demands greater transparency than ever, especially when it comes to security. If the average person doesn't understand the SAS70, and if you don't provide your control objectives so that others can vet them, then you are creating a false sense of security.

You could have one control objective that simply says: "we must keep the power in the data center on" and successfully pass by fulfilling that over 3 days or 6 months.

The Need For A Cloud Security Standard
There are a couple of security and IT standards that can be used as the basis for a good SAS70 audit.  For example, there are COBIT and ISO 27002 (formerly ISO 17799).  There are probably others I'm unfamiliar with.  Unfortunately, most of these standards really focus on the enterprise and not on multi-tenant public clouds or hosting companies, which have some issues specific to their particular business models and architectures.

So, even if Amazon used one of these, it's still not good enough for them to keep their controls and control objectives hidden from public view.  How are we to be certain that they are sufficient? [3]

Summary
Until there is a security standard for running a cloud, SAS70 audits with unpublished controls and control objectives, like the recent AMZN announcement, are simply smoke and mirrors.  They provide little or no real assurance to the average consumer of the AWS public cloud and serve only to provide a false sense of security.


[1] Even the recent refresh of the Amazon Security Whitepaper (PDF) does not include details on the controls or control objectives
[2] Been a while since I was involved in a SAS70 and there is a specific language they use that I've forgotten.  Did not find any examples on the net.  Appreciate clarifications in comments below if you have them.
[3] I think this raises a broader question, which is should any public cloud ever be allowed to keep their SAS70 controls and control objectives hidden?  There is a very nominal argument for security through obscurity, but the reality is that many people will have to see them anyway, so why not shed some light?


Cloudscaling / Mon, 16 Nov 2009 15:39:29 GMT


Monday, December 21, 2009

Rackspace Hosting's cloud service went down for about 40 minutes due to a system failure. Even if the frequency of outages is hard to improve, customer communication can certainly keep getting better.

Providers may increasingly be required to report the cause of an outage and how it was handled, and to explain to customers what improvements will prevent a recurrence. Since this is a service business, it would be good if the industry standardized to some degree on what constitutes a necessary and sufficient explanation to customers; the low level of maturity here surely ties into the results of the IDC market survey covered below.


Rackspace Outage Has Limited Impact

Rackspace experienced an outage yesterday--a recurring issue this year for the hosted data center provider--which took down a number of high profile sites including the popular blog site TechCrunch. No network is impervious to outages, but a company like Rackspace needs to provide consistent and reliable service.

Aside from TechCrunch, a number of other services and blog sites were impacted by the Rackspace outage, including 37signals, Brizzly, Robert Scoble's blog, sites hosted by Laughing Squid, Tumblr, and Mashable.

The Rackspace blog describes the root cause: "The issues resulted from a problem with a router used for peering and backbone connectivity located outside the data center at a peering facility, which handles approximately 20% of Rackspace's Dallas traffic."

The blog post goes on to explain that the router configuration error was part of final testing for data center integration between the Chicago and Dallas facilities, and that it should not have impacted operation during normal business hours. "The network integration of the facilities was scheduled to take place during the monthly maintenance window outside normal business hours, and today's incident occurred during final preparations."

The outage left many Rackspace customers saying "Hey! Who turned off the cloud?"

While a data center outage that impacts popular and well known sites is a black eye for cloud computing in general, the scope of the impact from this outage was relatively small. As this blog points out "Rackspace is small potatoes. Now it's a fast growing bag of potatoes, but still dinky. And the other catch: Rackspace is more about hosting than the cloud."

For customers that rely on Rackspace to host their servers--especially Web servers--it may seem very much as if the Internet went down when the Rackspace data center was unavailable. However, cloud computing services like Amazon EC2 and Microsoft Azure, and Internet keystones like Google, and Amazon were not impacted at all by the Rackspace outage.

Mistakes happen, but customers of Rackspace have a right to question the repeated outages and service interruptions. At least one Rackspace customer is also upset about a related issue pertaining to customer notification of network issues like this outage.

The customer's hosted servers were affected by the Rackspace outage and found out from customer complaints that its site had been unavailable for two hours. In a comment, the customer stated "We also pay Rackspace extra for a constant monitoring service that is supposed to immediately notify me by email or phone call if our server becomes inaccessible at any time. I was HIGHLY disturbed to find out that Rackspace actually SUPPRESSED these notifications from being sent to their customers for some strange reason."

The comment offers no evidence to support the claim that Rackspace intentionally withheld notification, and I have not had any feedback from Rackspace to confirm or deny the accusation. If it turns out to be true, it would damage Rackspace's credibility and customer service reputation.

The bottom line, though, is that Rackspace determined the cause of the problem and fixed it relatively quickly, and it provided status updates on the blog to keep customers informed. Even brief outages seem devastating to those affected by them, but they will happen, and when they do this is pretty much how you want them handled.

Tony Bradley tweets as @PCSecurityNews, and can be contacted at his Facebook page.

http://www.pcworld.com/businesscenter/article/185171/rackspace_outage_has_limited_impact.html

According to IDC's latest market survey, there are two shifts in the cloud market: users' understanding of the cloud has deepened, and budget cuts driven by the weak economy are accelerating adoption. Furthermore =>

the biggest reasons for adopting the cloud remain that it can be implemented cheaply and quickly, and these continue to attract the most attention.

At the same time, as in the previous year's survey, the main remaining concerns are security and availability.

None of this is particularly surprising, but if the economy does not recover, cloud adoption could accelerate faster than expected. While everyone is busy arguing over how things ought to be done, there is also the option of simply rolling out cloud services, even imperfect ones, and meeting the growing demand.


New IDC IT Cloud Services Survey: Top Benefits and Challenges

Posted by Frank Gens on December 15th, 2009

This year's IDC IT cloud services survey reveals many of the same perceptions about cloud benefits and challenges as seen in last year's survey.  But there are a few interesting shifts this year, driven largely by: 1) budget pressure from the challenging economy, and 2) a growing sophistication in users' understanding of cloud services.

This year's survey was fielded, like last year's, from the IDC Enterprise Panel of IT executives and their line-of-business (LOB) colleagues.  The respondent population is very similar to that of last year's survey, validating comparisons with last year's results.

Economics and Adoption Speed Still Top Benefits; Standardization Moves Up

This year's survey shows, once again, that economic benefits are key drivers of IT cloud services adoption. Three of the top five benefits were about perceived cost advantages of the cloud model: pay for use (#1), payments streamed with use (#3) and shift of IT headcount and costs to the service provider (#5).

[Chart: IDC Cloud Survey 2009 Benefits]


While pay-for-use slightly edged out last year's #1 – easy/fast to deploy – these two are essentially in a tie for #1. It's pretty safe to ascribe the slight edge for pay-for-use to the enormous pressure that the Great Recession has put on IT budgets, and the consequent increased focus on cloud economics in the minds of customers.  But it's still clear that speed/simplicity of adoption remains a key driver of demand for cloud services.

One benefit that moved up the list from last year's survey – from #6 to #4 – was the cloud model's ability to "encourage standard systems".  This upward movement reflects a growing sophistication in users' understanding of the cloud services model, and how it can apply to their environment.  One of the largest sources of IT complexity and cost is the huge sprawl of distinct, yet functionally redundant, systems and applications in most organizations.  It's an open secret that the lack of standardization – of things that could, and should, be standardized – is perhaps the number one brake on IT's ability to respond quickly and efficiently to businesses' changing needs.  Cloud services – by definition – are built on the premise of standard, shared systems.  This survey finding suggests that IT executives increasingly see, and will promote, standardization as an additional – and important – justification for migrating to both public and private cloud offerings.

Security, Availability and Performance Still Lead Challenges;  Cost and Lock-In Worries Rise

This year's top three IT cloud services challenges – security, availability and performance – also topped last year's challenges list.  Security is #1 again, and thus remains the top opportunity for IT suppliers to tackle as they position themselves as market leaders in the cloud era.



Availability and performance were tied at  #2 last year, and are in the same dead heat again this year.  I wrap these two together under a label of "dependability".  This survey result is a clear call for suppliers to offer service level agreements, and – more important – service level assurance.  Consequently – as we noted in IDC Predictions 2010 – look for lots of traditional IT suppliers to charge more forcefully into the cloud services business in 2010, with a focus on "enterprise-grade" IT cloud services.

The next two challenges represent very interesting shifts from last year's survey.  At #4, users' concern that the cloud model will actually cost them more rose from #6 in last year's survey.  Cost worries may seem counterintuitive, given that economics show up very strongly on the "benefits" side of the ledger, but the reason is simple.  Smart IT executives are starting to ask: "what if my end-users, enabled by the cloud model's self-service provisioning capabilities, use more than I (or they) have budgeted for?"  This concern opens up an excellent opportunity for suppliers to introduce services/solutions that help customers better anticipate, monitor and manage the real demands (and costs) of cloud services offerings.

Appearing at #5 on the challenges list is "lack of interoperability standards".  We didn't offer this as a choice in last year's survey, but it's an issue we've certainly been hearing a lot more about this year. Customers are wondering whether choosing cloud services will lead to the same kind of lock-in they've endured for decades, or whether standards will give them greater freedom of action in the cloud era. Interestingly, this concern about cloud standards is echoed in challenges #6 (bringing back in-house may be difficult) and #7 (hard to integrate with in-house IT).  Even though standards cut against the grain of many leading suppliers' traditional strategies (at least when it comes to standards that impact their core offerings), this survey suggests that suppliers who take a more aggressive and customer-friendly stance toward cloud standards may be able to grab larger market share at this important "crossing the chasm" stage of the cloud market.

Solving Challenges Will Define Cloud Market Leadership

Besides the specific points we've already discussed, there are two other important takeaways from this survey.

First, take a look again at the Benefits and Challenges charts above, and notice the percentage of respondents citing each benefit and challenge.  You'll see that a higher percentage of respondents identify challenges than benefits.  The midpoint of the benefits sits around 65%, while for the challenges it's 81%.  This doesn't diminish the strong benefits users see in the cloud model; but it does suggest that the hurdles loom just a bit larger in users' minds, slowing their adoption of IT cloud services.

And that takes us to the second, and concluding, takeaway:  given the very positive benefits users see in the cloud model, it's obvious that IT suppliers who most directly and effectively mitigate the cloud adoption challenges will be strongly positioned to take market share as the all-important "early majority" customers expand to the cloud. If I were an aspiring cloud services supplier, I'd be putting the "challenges" chart on my wall, and developing and rolling out offerings that start at the top (security), and move right down the list.

http://blogs.idc.com/ie/?p=730

Saturday, December 19, 2009

The European telecom industry is also getting into the cloud business: Orange Business Services announces its cloud offering

The company has long offered services such as hosted Exchange Server, but with this announcement it formally launches and will operate a SaaS environment supporting cloud applications, a cloud storage service, and more.


Orange steps into cloud services

Orange Business Services is to make a major push into the cloud-computing services market over the next two years, the company announced on Friday.

From 2010, the operator will offer multinational companies a software-as-a-service application store, storage-as-a-service and other cloud services, in a bid to become an intermediary between existing cloud providers and large corporate customers.

"The network is the cloud, and our experience and expertise in network and communications services place us in the best position to deliver high-performance cloud-computing services to support our customers' transformation," Orange Business Services chief executive Barbara Dalibard said in a statement on Friday.

Orange Business Services already offers some cloud services, such as hosted Microsoft Exchange email, private cloud functionality and hosted security services. However, the company intends to roll out a dozen new services worldwide over the next 24 months, many of them targeting vertical markets.

At a briefing in Paris on Thursday, Dalibard — who is leaving the company to work for the French railways early in 2010 — said Orange Business Services, as a network operator, would be able to offer its customers "end-to-end service level agreements". Another key element of the company's strategy is to secure the usage of cloud applications for compliance reasons, she said.

Asked at the briefing how much Orange Business Services would invest in the infrastructure needed for its cloud push, Dalibard said the funding would come from the 10 percent of revenues that the operator typically earmarks for reinvestment in capital expenditure.

New servers are "key tools" for the company's plans, Dalibard said, adding that Orange Business Services was already using some of the new services internally. "We want to make sure we eat the cake we make," she said.

Partners in the new cloud push include Cisco, Microsoft, Citrix, EMC, VMware, IBM and HP, Dalibard said.

According to Dalibard, the roadmap for the new services will be laid out on a region-by-region basis over the coming months.

However, Orange Business Services' broader roadmap shows two waves of deployment. The first, which roughly corresponds with 2010, includes the application store, unified communication suites, and an upgraded VPN infrastructure. The second wave will include elements such as platform-as-a-service, which lets customers develop their applications in a pay-as-you-go cloud environment.

Also in the second wave, which a spokesman said corresponds roughly with 2011, will be the ability of Orange Business Services to manage its customers' PC and mobile phone "end user professional environments" and roll out social networking platforms for customers.

US-based operators such as Verizon and AT&T already offer corporate customers compute-as-a-service products, which let them use processing capacity in the cloud. In October, HP predicted to ZDNet UK that most large European telcos would start offering cloud services in 2010.

http://news.zdnet.co.uk/internet/0,1000000097,39944123,00.htm

An article reporting that Amazon Web Services passed a cloud security test conducted by the US Navy => the North American market is a sucker for this kind of news

There is no description at all of what kind of test was performed or how it was passed, but I have seen cases before where a bare fact like this takes on a life of its own in the American media.

The private-sector market is highly susceptible to anything the public sector has stamped with a seal of approval, especially anything the Department of Defense has signed off on.

Of course, it presumably passed because it met the relevant requirements, but since government and private-sector usage differ, I feel it is a stretch to assume the result carries over unconditionally to the private sector; the world, however, does not seem to work that way.


Amazon passes US Navy Cloud Computing test

  • US Navy completes successful security trial of Amazon Web Services
  • Further tests due in April as part of 'Trident Warrior '10'
  • DoD increasingly positive towards Cloud following RACE launch in 2008

EC2 and S3 web services successfully pass the secure Cloud Computing test during trials.

The United States Department of Defense's (DoD) march towards Cloud Computing adoption has taken a further step forward this week, with confirmation that a commercially available Infrastructure as a Service (IaaS) offering would meet Department of the Navy standards for "global connectivity, server failover, and application access."

http://www.businesscloud9.com/topic/infrastructure/amazon-passes-us-navy-cloud-computing-test

Cloud Exchange is a site where you can look up Amazon Web Services spot prices: it is like watching stock prices move.

The site plots the published spot prices as an hourly time series for each AWS Availability Zone.

It also breaks the data out separately for Linux and Windows environments, which makes it very interesting to watch.

http://cloudexchange.org/

There are all kinds of patterns. Most of the charts are relatively stable, with prices oscillating within a band of roughly 10-20%, but in some cases the price jumps to more than five times its level within a very short period.

There seems to be a rapidly growing number of people in the industry trying to build programs that automatically acquire cheap computing power based on the data behind these charts, along the lines of the sketch below.
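
A rough sketch of such a program, using the boto3 Python SDK (which postdates this post; the region, instance type, AMI ID, and price ceiling are placeholders, not values from the article), would poll recent spot price history and only bid when the market drops below a budget threshold:

```python
import boto3
from datetime import datetime, timedelta

# Assumptions: AWS credentials are already configured; all constants below
# are arbitrary values chosen for illustration only.
ec2 = boto3.client("ec2", region_name="us-east-1")
MAX_PRICE = 0.05  # the most we are willing to pay, in USD per instance-hour

def cheapest_recent_price(instance_type="m1.small"):
    """Return the lowest Linux spot price observed over the past hour."""
    history = ec2.describe_spot_price_history(
        InstanceTypes=[instance_type],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
    )
    return min(float(p["SpotPrice"]) for p in history["SpotPriceHistory"])

if cheapest_recent_price() <= MAX_PRICE:
    # Bid at our ceiling; the request is fulfilled only while the market
    # price stays at or below that bid.
    ec2.request_spot_instances(
        SpotPrice=str(MAX_PRICE),
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-12345678",  # hypothetical AMI
            "InstanceType": "m1.small",
        },
    )
```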

 

Incidentally, something similar is expected to happen with the smart grid. Mechanisms that vary electricity rates with demand are expected to be adopted early on, and programs that, for example, charge EVs when electricity is cheap are already being discussed.

Friday, December 18, 2009

Appistry's predictions for 2010: 2010 will be the year of PaaS! Then again, Appistry is a PaaS vendor, so of course they would say that...

I agree that 2010 will bring big changes compared with 2009.

My own view, though, is that standardization will never quite get done.


Appistry Makes 2010 Predictions for Cloud Computing


In 2009, Appistry, a provider of enterprise cloud computing services, offered its predictions. For 2010, the company once again looks ahead to where it sees the cloud computing paradigm shifting.

2009 Predictions
-- The "year of the cloud" for the enterprise
-- Cloud platforms begin to overtake App Server
-- Several organizations offer cloud standards but debate rages on
-- HP works its way into middleware
-- Amazon enters platform arena with new tools

2010 Predictions
-- Market disruptions accelerate into 2010
-- The "cloud" experiences growing pains
-- 2010 – the year of Platform as a Service
-- The Cloud Shapes Data, Data Shapes the Cloud
-- Put up or shut up year for tech firms; a bubble looming (Are growth rate predictions too high for public cloud companies?)

http://apb.directionsmag.com/archives/7028-Appistry-Makes-2010-Predictions-for-Cloud-Computing.html

An article saying PaaS will spread => some people place very high expectations on PaaS.

The report comes from Research and Markets.


PaaS popularity predicted to rise


Updated: 2009-12-16

A range of PaaS solutions will start to challenge the dominance of SaaS over the next several years as companies vie for greater flexibility over their cloud computing applications, according to a recent report.

Popular web application frameworks will spin off into a number of different PaaS platforms, and these alternative cloud computing platforms will capture a greater degree of market share over the next three to five years as developers look for more ways to customize their cloud computing deployments, a Research and Markets report found.

"Over the next three to five years, the rationale for building applications on top of SaaS-provided PaaS will become less compelling," Research and Markets said.

The report predicted that PaaS providers such as J2EE, SpringSource, .NET and LAMP would start to eat into the market share of Force.com, the leading cloud computing platform, as developers gain expertise and want more control over their applications.

Cloud computing has become one of the fastest-growing information technology sectors in recent years as software developers have adjusted to the opportunities presented by broadband internet connections. Major PaaS providers such as Salesforce.com allow enterprises to run software applications over the internet in order to save valuable in-house resources.

http://www.edlconsulting.com/newsdetail.php?id=567&headline=PaaS_popularity_predicted_to_rise

As PaaS vendors proliferate, many voices are expressing concern about vendor lock-in at this layer.

This is an article about a research report that explains that point.

Many companies and vendors have the same concerns about Force.com.


PaaS Remains on the Edge
Tech Strategy Partners, Dec 2009, Pages: 41


How Force.com, Workday, NetSuite and Intuit fit into the emerging enterprise application platform battle for corporate and ISV developers

PaaS remains on the Edge - Why independent software vendors are reluctant to embrace Force.com and why Workday and NetSuite hold promise for corporate developers

At its user conference DreamForce'09, Salesforce.com released some impressive statistics on the traction that its Force.com platform has been gathering. The company claims 135,000 custom applications and 10,000 sites are built on Force.com. Already, 55% of the HTTPS transactions the company processes come through the API (i.e., from partner applications) versus only 45% coming from Salesforce's own applications.

What drives that adoption and how does Force.com stack up against the alternatives?

New research by analyst firm Tech Strategy Partners contrasts the strengths and weaknesses of Force.com vs. platforms provided by NetSuite, Workday and Intuit. Tech Strategy Partners finds that PaaS is gaining most traction with corporate developers, not independent software vendors (ISVs).

Force.com benefits both corporate and ISV developers

Increased developer productivity is often seen as the biggest benefit, since developers can focus their effort on differentiating, customer-facing functionality, not application 'plumbing'. Developers can build on existing, pre-defined data objects, security models, user interfaces, business processes and automated management. Building on Force.com reduces average time to deployment by 60% and cost by 54% compared to conventional web-based development platforms like .NET and J2EE.

Improved application manageability is another key benefit. The parent SaaS provider is economically incented to innovate around reducing complexity and automating IT operations, since it is paid a fixed monthly subscription. Moreover, the parent SaaS provider is able to deliver on this because it controls the entire application lifecycle, from design (on its PaaS platform) to operations (in its own data centers).

Additional benefits such as CapEx avoidance and ISV access to the PaaS vendor's online market place complement the benefits.

But ISV adoption of PaaS will remain mostly at the edge

ISVs that consider building SaaS applications on these PaaS platforms are tying their commercial and technical future to a dominant partner who may or may not remain friendly in the future. Since PaaS vendors are likely to make far more money selling their SaaS application than their PaaS service, they are structurally incented to grow their SaaS footprint, which may eventually encroach on adjacent ISV partners. The direct cost charged by the PaaS vendor for use of functionality and data center infrastructure pales in comparison to these concerns about dependency.

Therefore, mainstream ISVs are unlikely to develop true standalone applications on these PaaS platforms. Rather, ISV applications will typically be adjacent to the PaaS vendor's SaaS applications, a situation the industry refers to as 'edge'. Edge applications might include marketing automation, talent management, project management, analytics, or collaboration. These edge applications provide a complementary process and synchronize key data objects with the core application. Core applications such as enterprise resource planning, supply chain management, and other transaction processing or business critical applications are unlikely to be built on another SaaS vendor's platform.

Corporate developer adoption currently dominates

Corporate developers are usually less concerned than ISVs about long-term technology platform lock-in and are not tying their business model to that of the ISV. We estimate that most of the 188 million lines of Force.com code today come from corporate developers. Many of these would have previously been built on J2EE or .NET and a SQL database.

Advantage of SaaS-vendor provided platforms may be temporary

Over the next 3-5 years, the rationale for building applications on top of SaaS-provided PaaS will become less compelling. We believe that alternative PaaS platforms, evolving from popular web application frameworks, will likely capture greater share among corporate developers and ISVs. As J2EE, SpringSource/VMware, .NET, and LAMP mature into self-managing PaaS services, spanning across hybrid on-premise and cloud deployments, they will come to provide many of the current PaaS benefits while allowing developers to retain much greater control than most SaaS-based platforms. Moreover, these alternative platforms will offer the path of least resistance for the millions of developers trained in these traditional technologies.

New report provides detailed evaluation across Force.com, NetSuite, Workday and Intuit

Tech Strategy Partners' new 41 page report provides a detailed evaluation of competing enterprise application platforms (PaaS) both from a business model / commercial as well as from an architectural / technical perspective. The report is a must-read for ISVs as well as enterprise developers, architects and application owners. A detailed evaluation matrix stipulates which criteria matter for different use cases. A side-by-side platform comparison contrasts strengths and weaknesses for Force.com, NetSuite, Workday and Intuit, again both commercially and technically. In-depth profiles provide further analysis on each of these platforms. And finally, Tech Strategy Partners predicts qualitatively where the enterprise applications platform is headed in the next five years.

Introduction and Context

Cloud computing is commonly categorized as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Most of the recent debate has centered on IaaS while the importance of and different approaches to PaaS for corporate and independent software vendor (ISV) developers has garnered much less attention. IaaS helps improve capital and operating efficiency via shared infrastructure and infrastructure management automation. But the real operating efficiency gains don't happen until the applications themselves become part of the service being managed. That brings us to PaaS.

PaaS as provided by SaaS vendors Salesforce.com, Workday, Netsuite and Intuit is likely to remain primarily an 'edge' phenomenon when used by other ISVs. It appears unlikely mainstream ISVs will develop true standalone applications on these PaaS platforms in the near future. Rather, these PaaS services will find most success with complementary applications that work at the periphery. These applications will typically be adjacent to the PaaS vendor's SaaS applications, a situation the industry refers to as 'edge'.

SaaS-based PaaS vendors are likely to have far more success with corporate developers over the next 3-5 years. Once the SaaS vendors fully open up their application content as Force.com has done, corporate developers are likely to be less concerned with business and technology risk than ISVs.

Longer term, alternative PaaS platforms that can span hybrid cloud and on-premise deployments and that are built on traditional web application frameworks growing out of J2EE, Spring, .NET, and LAMP are likely to capture the most interest from corporate developers and ISVs. These platforms will offer the path of least resistance for the millions of developers trained in these traditional technologies. J2EE developers who understand scale-up SQL databases, for instance, are unlikely to adjust to a completely different scale-out programming model embodied by Google AppEngine.

Products Mentioned:
- VMware's vCloud
- Oracle CRM On Demand
- Force.com
- Workday
- NetSuite

http://www.researchandmarkets.com/research/333304/paas_remains_on_th

Symantec's cloud security will be offered on Amazon EC2

This can be seen as another sign that, at this point, building your own cloud infrastructure to roll out services is no longer realistic. The strategy is to use a major cloud vendor and concentrate resources on the applications that run on top of the cloud.


Symantec Offers Cloud Computing Security

Symantec announced it is offering its next-generation security and enterprise-class storage management solutions


Symantec announced it is offering its next-generation security and enterprise-class storage management solutions through the Amazon Elastic Compute Cloud (Amazon EC2).

Symantec Endpoint Protection and Veritas Storage Foundation Basic are now available on Amazon EC2.

Businesses can leverage the Symantec solutions to add additional protection to their Windows servers in the cloud with comprehensive threat prevention and manage their cloud storage online with a single toolset that delivers reliability, scalability and high performance.

"As many businesses increasingly leverage the cloud for applications and services, they want to protect and manage those environments with the security and storage management solutions they are used to from Symantec," said Greg Hughes, group president, Enterprise Product Group, Symantec. "By taking the same proven security and storage management solutions that organizations have come to rely on in their data center and extending them to Amazon EC2, Symantec is delivering on its commitment to provide value in the cloud."

"As a web service that provides resizable compute capability on demand, Amazon EC2 makes web-scale computing easier for customers of all sizes," said Steve Rabuchin, General Manager of Developer Relations and Business Development for Amazon Web Services (AWS). "We're pleased that our mutual customers can now extend familiar Symantec security and online storage management solutions to the AWS cloud."

Amazon EC2 users now have access to key protection technologies provided by Symantec Endpoint Protection. Symantec Endpoint Protection combines Symantec AntiVirus with advanced threat prevention to deliver defense against malicious attacks such as viruses, worms, spyware, Trojans, zero-day threats, and rootkits. Symantec Endpoint Protection helps ensure information remains safe and business assets are protected wherever that information resides.

Amazon EC2 users also now have access to advanced online storage management capabilities provided by Veritas Storage Foundation Basic from Symantec, allowing them to manage multiple hosts from a central interface and optimize storage performance and availability online. Storage Foundation enables non-disruptive storage operations through GUI-based management and online configuration with dynamic disks.

"We have been running Symantec Endpoint Protection locally to secure the endpoints and servers in our computing environment and have been very pleased with the level of protection it has provided," said David Jordan, CISO of Arlington County. "As our infrastructure becomes more of a mix between on-premise and off-premise offerings, we look forward to leveraging these new delivery models for security and storage solutions."

Today's announcement marks another significant step in Symantec's cloud strategy to deliver customers unmatched choice in the adoption of cloud solutions based on the company's enterprise class products. For more information, please visit http://www.symantec.com/cloud.

http://cloudcomputing.sys-con.com/node/1218689

HP's cloud strategy: hybrid cloud solutions for the enterprise at its core

While keeping enterprise-focused solutions as its main axis, HP is also touting flexibility by offering solutions that can make use of Amazon Web Services.


HP Accelerates Cloud Computing Adoption

It enables businesses and service providers to lower barriers and accelerate the time to benefit of cloud computing adoption


HP announced three new offerings to help businesses and telecommunication service providers realize the benefits of cloud computing while optimizing costs and mitigating the risk of adoption.

Research conducted on behalf of HP shows that more than 90 percent of senior business decision makers believe business cycles will continue to be unpredictable in the next few years.

As a result, 75 percent of the surveyed chief information officers (CIOs) acknowledge the need to invest in more flexible technology, be able to scale it up and down rapidly, and communicate faster with technology partners.

HP is addressing this need with a new set of offerings that enable businesses and service providers to lower barriers and accelerate the time to benefit of cloud computing adoption. These offerings extend the following key advantages:

  • Elasticity – rapidly respond to changing business needs with seamless and automated provisioning of cloud and physical services;
  • Cost control – optimize and gain predictability of costs by ensuring cloud compute resources are "right sized" to support fluctuating business demands; and
  • Risk mitigation – reduce manual errors, non-compliance and business downtime through automated service provisioning.

"There is no doubt that the cloud is a disruptive technology, promising enterprises of all sizes the speed and agility of a startup with the resources and scale of an enterprise," said Frank Gens, senior vice president and chief analyst, IDC. "Adoption, however, is limited by uncertainties surrounding risks and rewards. Customers want assurance and a safe path to cloud adoption that will address potential risks of security, performance and availability, while providing clear return on investment."

With its new offerings, HP will:

  • Help businesses govern and manage cloud services using HP Operations Orchestration and HP Cloud Assure for cost control; and
  • Enable providers to deliver cloud services with HP Communications as a Service.

HP solutions for businesses
HP Operations Orchestration automates the provisioning of services within the existing infrastructure – whether physical, virtual or cloud based. Businesses can seamlessly increase capacity through integration with "pay as you go" Amazon Elastic Compute Cloud (Amazon EC2), allowing rapid response to changing business conditions.

HP Cloud Assure for cost control provides cloud consumers the assurance that they are optimizing cloud costs and achieving the predictability necessary to budget appropriately. This solution enables cloud consumers to "right size" their various compute footprints, ensuring all service levels are met at the most optimized and predictable cost.

HP solution for telecommunication service providers
HP Communications as a Service (CaaS) is a cloud program that enables service providers to offer small- and midsize businesses services delivered on an outsourced basis and priced like utilities. HP CaaS helps service providers grow enterprise-related revenue by offering customers a low-cost, low-risk way to adopt cloud services.

HP CaaS includes an aggregation platform, four integrated communications services from HP and third parties, as well as the flexibility to add other on-demand services.

"CIOs understand the potential business benefits of cloud computing but are challenged with how best to manage the risks associated with adoption," said Thomas E. Hogan, executive vice president, Software and Solutions, HP. "The new HP offerings help assess and mitigate those concerns, driving greater business value by breaking down the adoption barriers related to cost and performance."

http://cloudcomputing.sys-con.com/node/1222482

Thursday, December 17, 2009

The fate of Verari: several rumors seem to be flying around. Bankruptcy? Sale? The official line is that it is restructuring.

In any case, the business appears to have effectively stopped operating.
The company had won some large deals, but my strong impression is that lately it had been losing ground to other container data center vendors.
 
The rumors grew especially loud after the company failed to show up at its booth at the SC09 conference.
 
This vendor had raised more than $54M in total from the likes of Sierra Ventures, Carlyle Venture Partners, Voyager Capital, and Celerity Partners, and as I recall it had been targeting the Japanese market from early on; what a waste.
 
I am very curious what will become of the other container technologies.


What Happens to Verari's Technology?

December 14th, 2009 : Rich Miller

The Verari data center container housing the NASA Nebula cloud computing application arrives at Ames Research Center in Mountain View, Calif.

There has been speculation about the financial health of high performance computing specialist Verari Systems ever since the company failed to show at the SC09 conference, where it had purchased a booth. Most employees have been let go, and reports Friday on Twitter and Inside HPC indicated Verari had shut down.

On Monday the company issued a statement, saying it is reorganizing. "Verari has initiated a process that will protect our customers investment and benefit our creditors as we restructure the business," the company said on its web site. "The intention is to safeguard customers investment and provide an ongoing support capability. There are several options that are being considered to provide solutions to our customers. We expect to have the new plan in place soon."

Earlier Monday, Verari CEO David Wright told InsideHPC that the company is in a "controlled reorganization" and won't file for Chapter 11 or Chapter 7 bankruptcy. Wright says Verari is still open for business and working on a support plan for existing customers.

The near future of the cloud: techniques used in other service industries could make their way into the cloud => this looks interesting

What the article proposes is not just about price; it covers techniques that are already used effectively in other service businesses today:
  • Price differences based on conditions such as latency
  • Volume discounts
  • Discounts for buying in advance, such as prepayment and advance reservations
  • Overbooking (deliberately accepting more orders than capacity, on the assumption that some will always be cancelled); AWS already does this
  • Automatic upgrades for loyal customers
  • Prices that vary with demand, including limited-time promotions
  • Mechanisms for predicting fluctuating prices to some degree
  • Derivatives and hedging for cloud services
  • Exchange markets for cloud resources
  • The emergence of intermediaries such as brokers
  • OEM supply of cloud services (this is actually happening already)
  • Simpler pricing (e.g., flat-rate plans, unlimited night-time use, etc.)
If things get this far, I think we can truly say the cloud has become a utility.

Hedging Your Options for the Cloud


With the second decade of the millennium now just weeks away, I thought I'd offer up some possibilities for the cloud computing market as it continues to evolve. Cloud services — whether infrastructure, platform or software — share similarities with other on-demand, pay-per-use offerings such as airlines or car rentals. But what's past in those industries may be prologue for the cloud. Here are some key aspects of those services that could become integral to the cloud in the coming decade:

Non-uniform Pricing — Since the costs of real estate, power, cooling, carbon emissions and bandwidth may be location-dependent, shouldn't prices of cloud services vary based on cost differences between locations? In addition, a hotel room with an ocean view is priced higher than the one across the hallway next to a parking lot, and not because the mattress costs more, but due to value-based pricing. Similarly, location can mean everything when latency is important, which is why some cloud providers are offering services near stock exchanges where microseconds might mean millions. Time-based pricing should also come into play. Shouldn't computing cycles at 2 a.m. be priced lower than ones at 2 p.m.?

Volume Discounts — Buying more resources at a given time, or the same quantity for a longer period, should entitle the customer to a lower price, since the risk of unused provider capacity is lower.

Reservation Protocols — Customers who show up at a hotel without a reservation risk sleeping in their car. Hotels that accept reservations with no or refundable deposits risk no-shows and lower utilization. Reciprocal commitment exacted via retainers, pre-payment, or non-refundable guaranteed reservations enhances provider financials and provides benefits to customers such as assured availability and discounts.

Oversubscription — Buying an airline ticket is different than flying. If not all reservations are used, providers can maximize revenue yield by overbooking, even if credits or penalties are occasionally paid. And, this is not a bad thing, since real users of limited resources are not blocked by users with unactualized intent.
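
A made-up numeric example of the overbooking logic (the capacity, no-show rate, and penalty below are arbitrary assumptions, not figures from the article): accepting 110 reservations against 100 units of capacity raises expected revenue even though bumped customers must occasionally be compensated.

```python
# Arbitrary assumptions: 100 units of capacity, a 10% no-show rate, and a
# make-good credit worth twice the unit price for anyone who gets bumped.
CAPACITY, PRICE, NO_SHOW_RATE, PENALTY = 100, 1.0, 0.10, 2.0

def expected_revenue(bookings: int) -> float:
    """Expected revenue when `bookings` reservations are accepted
    (a simplified model that ignores variance in no-shows)."""
    arrivals = bookings * (1 - NO_SHOW_RATE)   # expected customers who show up
    served = min(arrivals, CAPACITY)
    bumped = max(arrivals - CAPACITY, 0.0)
    return served * PRICE - bumped * PENALTY

print(expected_revenue(100))  # 90.0: capacity sits partly idle
print(expected_revenue(110))  # 99.0: overbooking fills the no-show gap
```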

Space-Available Upgrades — Airlines award empty business class seats to frequent flyers, enhancing customer loyalty. Perhaps cloud providers should consider the same, e.g., a free extra copy of a data object, space permitting?

Dynamic Pricing — Finite, perishable capacity — such as airline seats and hotel rooms — drives firms to use sophisticated yield management algorithms to maximize revenue, reducing prices to increase demand when utilization is low, and raising prices when utilization is high. Congestion pricing to discourage peak use — whether of city streets or electricity — and promotions, e.g., sales, to encourage use help smooth demand, improve utilization, and therefore optimize economics.
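
As a toy sketch of the yield-management idea (the thresholds and multipliers are arbitrary assumptions, not anything a real provider publishes), an hourly rate could be derived from a base price and current utilization:

```python
def dynamic_price(base_price: float, utilization: float) -> float:
    """Toy yield-management rule: discount when capacity is idle, charge a
    congestion premium as utilization approaches 100%.
    `utilization` is the fraction of capacity currently in use (0.0-1.0)."""
    if utilization < 0.5:
        return base_price * 0.8                         # stimulate demand
    if utilization < 0.85:
        return base_price                               # normal rate
    return base_price * (1 + 2 * (utilization - 0.85))  # congestion pricing

print(round(dynamic_price(0.10, 0.40), 4))  # 0.08: off-peak discount
print(round(dynamic_price(0.10, 0.95), 4))  # 0.12: peak-hour premium
```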

Capacity and Rate Transparency — Dynamic pricing requires rate transparency. No one wants to be surprised when the bill comes. "Click to view available seats" and "five seats left at this price" provide customers information that can help them plan, or accelerate, purchase decisions.

Discretionary Processing and Auctions — Much computing must be done on demand. However, in the same way that $79 fares to the Caribbean make people reconsider their weekend plans to shovel snow, companies increasingly will be able to decide how much processing to do for some workloads by placing auction bids or based on spot prices for computing — consider complex optimization problems where more computing results in better results, but with diminishing returns.

Derivatives and Hedging — Options and futures exist for equities, commodities and currencies ranging from pork bellies to pesos, so why not derivatives for network, compute and storage? Jet fuel is strategic to airlines just as IT is for many firms, and the same way airlines hedge against jet fuel price increases, an e-tailer might hedge against CPU core price increases for the holiday season. Such futures and "options for the cloud" could mitigate price risk, via hedges that protect against dynamic pricing and market vagaries.

Markets — How to trade capacity and derivatives? Why, spot markets and auctions and option markets, of course, such as BuySellBandwidth.com. Cloud service providers or enterprises may want to trade future capacity and protect against smoking hole disasters by acquiring options for capacity from other providers. Coming soon, the New York Server-Hour Exchange?

Volatility — As recent times no doubt illustrate, meltdowns and irrational exuberance are inherent to markets composed of traders with attitudes. In Ubiquity, Mark Buchanan reports on research conducted by two "econo-physicists," who modeled a simple market in one stock, with three types of traders — optimists, pessimists, and value investors — who could shift their orientation. Even in such a simple model, bull, bear, and chaotically volatile market behavior emerged.

Aggregation, Cooperation, Brokerage, Arbitrage, VARs, and Other Intermediaries — New market ecosystem roles will evolve. Aggregators may buy capacity at volume discounts and resell smaller quantities ("break bulk") to make money. Cooperatives such as the Enterprise Cloud Buyers Council may wield buying power to save money. Like travel web sites, brokers will arise to resell, package, compare, crowdsource reviews, and recommend providers, and VARS will add value to wholesale cloud computing capacity. Arbitrageurs and "high-frequency" traders may arise to make money on instantaneous market imbalances.

Virtual Cloud Operators — Airlines sell codeshare partners' capacity, increasing the apparent breadth of their portfolio and boosting revenue. One model has SaaS (software as a service) providers white-labeling other SaaS providers' offerings. Or SaaS might run on another provider's infrastructure as a service (IaaS) — virtual IaaS operators might physically reside on other operators' infrastructure.

Co-generation — If major companies can generate power for the grid from their generators during lulls in internal demand, they may also be able to sell their unused compute capacity, as GridEcon proposes.

Simplified Pricing — There is a natural cycle at work in market ecosystems, where fixed pricing drives the introduction of pay-per-use; pay-per-use becomes increasingly complex; and this may drive a return to simplified plans. In telephony, pricing started out as fixed (per-line), then became metered pay-per-use (per minute), then became simplified via tiered usage plans such as AT&T Digital One Rate. "Every day low pricing" and "all-inclusive" packages can replace dynamic pricing and pay-per-use. "Loss aversion" plays a role: Sometimes you'd rather pay slightly more for a fixed price plan than take the chance of paying a lot more should usage spike on a pay-per-use plan. Of course, when the spike is revenue-generating or otherwise beneficial, Cloudonomics tells us that total cost can be minimized by judiciously leveraging pay-per-use pricing.
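
A made-up comparison (all numbers are arbitrary assumptions) illustrates the loss-aversion trade-off described above: pay-per-use wins in a normal month, while the flat rate protects against a usage spike.

```python
# Arbitrary numbers for illustration: a flat monthly plan versus metered
# pay-per-use, in a normal month and in a month with a usage spike.
FLAT_RATE = 100.0   # fixed price per month
UNIT_PRICE = 0.10   # pay-per-use price per unit consumed

for label, usage in [("normal month", 800), ("spike month", 2500)]:
    metered = usage * UNIT_PRICE
    winner = "flat rate" if FLAT_RATE < metered else "pay-per-use"
    print(f"{label}: pay-per-use ${metered:.2f} vs flat ${FLAT_RATE:.2f} -> {winner} is cheaper")
```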

Additional possibilities exist, for example, program trading, risk management, trusted third-party evaluation and reporting, and rollover minutes, and causal chains are bound to happen — perishable capacity leads to dynamic pricing which begets long-term hedges.

Joe Weinman is Strategy and Business Development VP for AT&T Business Solutions.



GigaOM / Sun, 13 Dec 2009 17:00:43 GMT


Tuesday, December 15, 2009

Hosting provider Peer 1 will offer EMC's Atmos cloud storage on an OEM basis. Why would even a data center operator outsource its cloud?

Several reasons come to mind.

1) Cloud storage is a thin-margin business; investing in your own infrastructure at this point would not pay off

2) Taking a long-term view of the storage service business, it is smarter to leave it to specialists

3) They want to focus on value-added services layered on top of the storage cloud

4) Storage services require more technical know-how than you might imagine

In Peer 1's case, these are likely the reasons.

Peer 1 is a mid-sized hosting provider with more than five large data centers across the US, Canada, and Europe. The fact that even a company like this decides to outsource its storage service is something Japanese providers should analyze as well.


Peer 1 setting up Atmos cloud storage

Hundreds of terabytes

Hundreds of terabytes of EMC's Atmos storage are going to be used by Peer 1, a North American hosting company, in a cloud storage service.

Atmos is EMC's dedicated storage platform for storage in the cloud as opposed to direct-attach storage (DAS), network-attached storage (NAS), and storage area network (SAN) arrays.

Peer 1 Network Enterprises Inc is headquartered in Vancouver, Canada, and has 16 data centres in 13 locations in the USA and Canada and one in the UK, at Southampton. It offers managed hosting, dedicated hosting and co-location services.

The company was founded in 1999 in Canada and expanded into the USA through acquisitions of ServerBeach in 2004 and the managed hosting assets of Interland in 2005. The UK data centre opened in April this year with three rack cabinets. It now has 54.

Peer 1 intends to offer a storage in the cloud service to its customers and is buying hundreds of terabytes of EMC Atmos capacity to be installed in two of its largest US data centres, chosen from Atlanta, Miami and San Antonio. These Atmos facilities will be incorporated in its SuperNetwork and the cloud storage service should go live in the first quarter of next year, probably in March.

The selection process took six months. Fabio Banducci, Peer 1's president and CEO, said: "Strategically, we're not a research and development shop - we're looking for best of breed." This means Peer 1 did not want to buy-in raw components and build a cloud storage platform itself.

Peer 1 did technical diligence on six potential suppliers, including 3PAR, ByCast, EMC, Nirvanix and Parascale. Then it looked more closely at a short list in a business diligence exercise, with EMC being the winning supplier. Banducci said: "EMC had great commitment plus a storage background."

Why did Peer 1 choose EMC? Although EMC is the vendor for Peer 1's SAN offerings, "EMC for our cloud storage was not a shoo-in".

Banducci said the other firms, although they had promising early-stage technology, were generally newer in business than EMC, and less well funded, meaning they were riskier choices. EMC was also very committed to Atmos developments: "EMC is not just limited to storage... there are well north of 400 people in this [Atmos] division. It's spending millions of dollars a week on R & D for its cloud platform... It spends more R&D dollars in a week than 90 per cent of the hosting companies that choose to do it themselves... EMC made rapid progress over the project selection period. The roadmap is really impressive."

If all goes well the number of Peer 1 data centres fitted with Atmos gear will increase and it may even be the case that Peer 1 will offer Atmos compute services in the cloud.

In May EMC launched its own Atmos onLine service and AT&T introduced its Atmos-based Synaptic Storage as a Service. Since then little or nothing has been heard of other Atmos customers. Banducci said EMC in fact does have other Atmos customers, but neither EMC nor its customers are ready to go live yet.

He did say: "We'll be the first hosting provider to go live with Atmos."

http://www.theregister.co.uk/2009/12/14/peer1_atmos/

Rackspace Hosting strengthens its partner program: expanding its hosting and cloud services business through partners

For a hosting provider to turn its own hosting business into a product that others can resell is a surprisingly fresh approach. If it succeeds, it could well spread as a new business model for the hosting/cloud industry.

Being able to offer a cloud business under your own brand, without the heavy capital investment of building a data center, should be very attractive to system integrators in Japan.


Rackspace Enhances Partner Network and Introduces Four-Tiered Structure

(WEB HOST INDUSTRY REVIEW) -- Hosting and cloud computing solutions provider Rackspace Hosting (www.rackspace.com) has restructured its global partner network to help channel partners grow their business, better serve customers and give them a competitive advantage.

"We regard partners as an extension of the Rackspace family and have updated the Rackspace Partner Network to help ensure they receive the services and support necessary for long-term success," Rackspace worldwide channel vice president Robert Fuller said in a statement. "Rackers worked closely with our partners to develop a program that creates the biggest impact and provides the best experience for them, which we believe will in turn impact the broader customer experience."

According to Rackspace's Wednesday announcement, the latest updates to the Rackspace Partner Network are in part the result of partner feedback, and include a four-tiered structure that gives partners greater opportunity for revenue, training and marketing tools. Rackspace's four levels of commitment, Platinum, Gold, Silver and Member, are assigned based on company size and level of sales.

Rackspace is also offering partners aggressive commissions and, at certain partnership levels, will even pay commissions on upgrades and renewals. As companies move along the channel scale from Member to Platinum Partner, Rackspace offers additional benefits including access to a dedicated channel manager and Rackspace's searchable partner database, as well as the opportunity to participate in joint marketing activities, Rackspace customer and partner base promotions, and added discounts and commissions on upgrades and renewal contracts.

At any level of membership, however, Rackspace's enhanced partner program includes several features available to partner companies, including sales representative access, sales and technical training, OnBoarding support, and access to the Rackspace Partner Network portal with exclusive marketing, sales and industry information, as well as networking opportunities with other Rackspace partners.

Sitecore (www.sitecore.net), which provides web content management and portal software for organizations, recently joined the Rackspace Partner Network. "Sitecore has built its foundation on providing partners and customers with the best Web CMS platform, so it's a natural extension to partner with Rackspace for its unmatched service and reliable hosting solutions," Sitecore client and partner engagement vice president Jason Crea said in a statement. "Performance, security, scalability and price are critical elements for our customers in building and hosting websites, blogs, and other Web applications. Based on Rackspace's past successes we anticipate a valued experience for our joint customers."

http://www.thewhir.com/web-hosting-news/121009_Rackspace_Enhances_Partner_Network_and_Introduces_Four_Tiered_Structure?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+thewhir+%28theWhir.com+-+Daily+Web+Hosting+News%29

Cloud storage: providers actually split into a local-RAID camp and an iSCSI SAN camp? A comparison of the strengths and weaknesses of each

Cloudscaling
December 13, 2009 1:09 PM
by admin

Virtual Server vs. Real Server Disk Drive Speed

It's important to understand the potential differences between virtual server disk drives and physical disk drives, so I wanted to post a very brief blog on the topic.  For this article I've chosen to compare the performance of an iSCSI SAN on Gigabit Ethernet to a single SATA disk drive.  The reason for this is two-fold: first, it more starkly highlights the relative performance difference between purchasing, say, a single dedicated server with a single disk in a hosting environment and a virtual machine hosted in a cloud environment.  Secondly, when you look at internal private clouds or a lot of the newer cloud offerings, they are commonly built on an iSCSI SAN backend.

To be clear, the top three U.S. clouds do not use iSCSI SANs: Amazon's EC2, Rackspace Cloud, and GoGrid all use local RAID subsystems.  This is common knowledge.  Of the early cloud pioneers, as far as I'm aware, mostly the U.K.-based clouds such as ElasticHosts and FlexiScale use iSCSI SANs.  The latest set of new cloud entrants, such as Savvis, Terremark, and Hosting.com, all use either iSCSI or Fibre Channel-based SANs.  This is also commonly known.

Your Mileage May Vary on these performance numbers.  I'm not trying to highlight any 'right' way to build a cloud here.  I'm simply trying to show what the difference in performance is between a single SATA disk and a VM disk drive backed by an iSCSI SAN over a single Gigabit Ethernet.

This is not a robust performance and benchmarking analysis.  It's a simple "run the numbers and compare" blog posting.  These are by no means authoritative performance numbers and that's not their purpose either.  Their purpose is to highlight how performance differs between a single spindle and many in a RAID configuration, even when that RAID is available via a SAN over Gigabit Ethernet.

Please avoid overly critiquing the testing technique here.  It's not meant to be robust, so nitpicking it serves no purpose.

Setup & Methodology
This is a very simple test in the Cloudscaling hosting & cloud lab environment.  Both servers running the test are on the latest Ubuntu Jaunty Jackalope release.  One is a physical server with a single SATA disk and the other is a VMware vSphere VM backed by an iSCSI LUN.  The iSCSI LUN is provided by a ZFS-based SAN product called NexentaStor from Nexenta Systems.  This is an OpenSolaris derivative and a very cost-effective alternative to, say, a NetApp or EqualLogic system.

The iSCSI SAN hardware is a simple Sun x2200 M2 with a Sun J4200 JBOD and 6 15K RPM SAS drives.

The bonnie++ command line was as simple as possible:


bonnie++ -n 512


Note that the simplicity of the bonnie++ testing method may have skewed some of the numbers.  See below for more.
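
To make runs like this easier to repeat, here is a minimal sketch (not the script actually used for these numbers) that invokes bonnie++ with a few explicit parameters and grabs its machine-readable summary. It assumes bonnie++ is on the PATH and that, with -q, the comma-separated record is the last line written to stdout; setting -s to roughly twice the machine's RAM is the usual way to keep OS caching from inflating the read numbers.

#!/usr/bin/env python3
# Minimal sketch (assumptions: bonnie++ is installed and, with -q, prints its
# comma-separated summary record as the last line on stdout). Not the script
# used to produce the numbers in this post.
import subprocess

def run_bonnie(test_dir="/tmp", files=512, size_mb=None, label="lab-host"):
    cmd = ["bonnie++", "-d", test_dir, "-n", str(files), "-m", label, "-q"]
    if size_mb is not None:
        cmd += ["-s", str(size_mb)]   # set to ~2x RAM to defeat OS caching
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.strip().splitlines()[-1].split(",")

if __name__ == "__main__":
    fields = run_bonnie()             # same "-n 512" file-creation test as above
    print("bonnie++ returned", len(fields), "summary fields")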

Basic Numbers
Here is a basic high-level chart showing the numbers.

Figure 1. High-level comparison of SATA vs. VM disk

The first thing you will notice, of course, is the two big spikes for sequential and random file reads.  These numbers are artificially inflated as clearly 325,000 IOPS for sequential and 460,000 IOPS for random reads are ridiculous.  This is likely due to caching either in the OS or the controller on the physical box.  bonnie++ is supposed to account for this, but for some reason, in this instance it did not.  So it might be a little easier to evaluate the relative performance on a logarithmic scale:

Figure 2. Logarithmic scale for the high-level results

Much better.  What is easier to notice here is that the VM generally performs better on both standard measures of disk speed, raw throughput and disk operations (I/O operations per second, or IOPS), with the obvious exception of the two aberrant data points.

Removing those two data points will give us an even clearer picture:

Figure 3. Normalized test results

Great.  Now this is very clear.  As you can see, the first half of the chart shows raw throughput (Kbytes/second).  When reading blocks from the VM disk we're nearly saturating the gigabit Ethernet link, which tops out at a theoretical 125MBps, and we're hitting 107MBps on average over 10 runs, so this is quite acceptable.  The SATA disk, in comparison, gets just over 60MBps, which is about right, even though the SATA spec and controller are capable of more.  Sustained block reads from SATA disks will typically be 60-80MBps in the real world.
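
As a sanity check on those throughput figures, the arithmetic is simple; the sketch below, using the 107MBps and 60MBps averages quoted above, just restates it in Python.

# Back-of-envelope check on the block-read throughput figures quoted above.
GIGABIT_BPS = 1_000_000_000               # 1 Gb/s Ethernet link
gbe_ceiling_mb = GIGABIT_BPS / 8 / 1e6    # ~125 MB/s before protocol overhead

vm_block_read_mb = 107.0                  # iSCSI-backed VM, average of 10 runs (from the text)
sata_block_read_mb = 60.0                 # single SATA disk (from the text)

print(f"GbE ceiling:      {gbe_ceiling_mb:.0f} MB/s")
print(f"iSCSI-backed VM:  {vm_block_read_mb:.0f} MB/s "
      f"({vm_block_read_mb / gbe_ceiling_mb:.0%} of the link)")
print(f"single SATA disk: {sata_block_read_mb:.0f} MB/s (not limited by the network)")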

Much more interesting is the number of IOPS.  Many real-world disk workloads, like databases, spend the majority of their time 'seeking' from one position on the disk to another, meaning lots of random file access.  They will bottleneck on waiting for the disk head to move from one position to another on the drive and read new data.  It's hard to tell the difference above because the SATA disk is so slow it barely registers on the chart.

If we change to a logarithmic scale again the data becomes much easier to read:

Figure 4. Normalized logarithmic scale test data

Now you can see that random seeks (i.e. moving the head of the disk drive from one location to a new one to read a piece of data) are starkly different.  A single SATA disk gets about 185 IOPS, while a set of 6 SAS disks in the SAN is right around 10,000 IOPS.  This is a huge performance difference.  There are several reasons for this.  One, a typical SATA disk has an average latency of 8.5ms while a 15K SAS disk has only 3ms.  Also, with 6 disks in a RAID configuration, I have 6x more disk heads to read with.
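
A very rough model of that gap: treat per-disk IOPS as the inverse of the average access latency and scale by the number of spindles. This ignores command queueing, the controller cache and the ZFS ARC, which is why the measured SAN figure is far higher than the estimate suggests; the latencies and spindle count are the ones quoted above.

# Crude IOPS estimate: 1 / average access latency, scaled by spindle count.
def est_iops(avg_latency_ms, spindles=1):
    return spindles * (1000.0 / avg_latency_ms)

sata_single = est_iops(8.5)               # one SATA disk, 8.5ms average latency
sas_x6      = est_iops(3.0, spindles=6)   # six 15K SAS disks, 3ms average latency

print(f"single SATA estimate: {sata_single:.0f} IOPS  (measured: ~185)")
print(f"6-disk SAS estimate:  {sas_x6:.0f} IOPS  (measured: ~10,000; caching and queueing help a lot)")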

It's still a bit hard to see with this chart, but for most of the rest of the IOPS tests above, the SAN solution is roughly 3x the performance of the single disk.  For example, Sequential File deletion is 2,573 (SAN) vs. 840 (SATA).

Rather than going through the entire set of results, I recommend you download my simple spreadsheet.

Note that for Amazon, Rackspace, or GoGrid, local VM disk results will likely look very similar to the iSCSI SAN results for IOPS, and sequential read/write (the first half of the chart) will be much higher.

Amazon's Elastic Block Storage (EBS) would have similar performance characteristics to the iSCSI SAN above and hence you can see why it can be acceptable for running a database.

Summary
My point here is very simple.  I want to highlight the difference between purchasing a dedicated server with a single (or a small number of) SATA disks and going with a cloud solution that uses a shared iSCSI SAN or local RAID on a single physical node.  Purchasing your own dedicated server solution with RAID can be extremely costly compared to a similar cloud solution.

More importantly, for those workloads that require random I/O and file access, like database applications, RAID is clearly a winner.  That's why using a shared RAID (via an iSCSI SAN or a local RAID) on a physical node for your cloud VM can be a clear advantage of the cloud today.

Amazon announces "Spot Pricing," offering AWS resources in an auction format

A very Amazon-like announcement, but it is unclear whether this is actually good news for enterprise users.



Amazon creates cloud computing spot market

Posted by Larry Dignan @ 3:33 am

Amazon on Monday rolled out spot pricing for cloud computing so customers can buy capacity at any price on the open market.

The concept is an interesting one since Amazon Web Services is making computing capacity available on the market just like any other commodity (see the Amazon statement, Werner Vogels' blog post, and the Amazon Web Services blog).

With the new offering, dubbed Spot Instances, Amazon customers can bid on unused Elastic Compute Cloud (EC2) capacity and run those instances as long as their bid exceeds the spot price. The rub is that you can be outbid.

In a statement, Amazon says Spot Instances "are well-suited for applications that can have flexible start and stop times such as image and video conversion and rendering, data processing, financial modeling and analysis, web crawling and load testing."

Amazon CTO Werner Vogels said on his blog:

The central concept in this new option is that of the Spot Price, which we determine based on current supply and demand and will fluctuate periodically… This gives customers exact control over the maximum cost they are incurring for their workloads, and often will provide them with substantial savings. It is important to note that customers will pay only the existing Spot Price; the maximum price just specifies how much a customer is willing to pay for capacity as the Spot Price changes.

Overall, the flexibility can save money, but you may not get a defined completion time, at your price, for a project to finish. Spot Instances let you define a maximum price you'll pay along with instance family, size and region, and they can be terminated when they are no longer needed.
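
For readers who want to try the mechanics themselves, here is a minimal sketch of submitting a Spot Instance request with a maximum price using the boto3 SDK (which post-dates this announcement); the region, AMI ID, instance type and price are placeholder assumptions, not values from Amazon's announcement.

# Minimal sketch: request one EC2 Spot Instance with a maximum bid price (boto3).
# Region, AMI ID, instance type and price are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",              # maximum price (USD/hour) we are willing to pay
    InstanceCount=1,
    Type="one-time",               # run once rather than persistently re-bidding
    LaunchSpecification={
        "ImageId": "ami-12345678", # placeholder AMI
        "InstanceType": "m1.small",
    },
)

request_id = response["SpotInstanceRequests"][0]["SpotInstanceRequestId"]
print("spot request submitted:", request_id)
# The request is fulfilled only while the market Spot Price stays at or below our
# maximum; if we are outbid, the instance can be reclaimed (terminated) by Amazon.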