Friday, December 19, 2008

Will There be Thousands of Cloud Providers?

A comment from the keynote at Gartner's Data Center Conference:
"In the future, the market will be filled with a substantial number of Cloud Computing solutions. Rather than a single company (here meaning Amazon) representing the Cloud Computing market as it does today, many Cloud Computing solutions will appear, and users will need to combine them to build solutions tailored to their own businesses."

Gartner goes further, predicting:
"Within five years, many companies will organize a department responsible for combining the various resources offered by these cloud providers in an optimized way."
 

As Internet titans Amazon, Microsoft and Google build enormous data centers to support their cloud computing operations, there's been much discussion about what the cloud will look like. Will cloud computing be dominated by a handful of companies with the resources to build massive server farms?

Gartner laid out a very different vision in this morning's keynote at the Gartner Data Center Conference in Las Vegas. Gartner VP Thomas Bittman predicted that cloud computing will eventually support thousands of specialized providers, creating the need for a cottage industry of specialists to assemble client solutions from a smorgasbord of cloud offerings.

"In the future, we expect to see thousands of providers in the cloud," with services being "put together like Lego blocks," Bittman predicted. "We're moving from this monolithic, 'one provider does everything' model to an ecosystem. We're moving toward a more distributed, open world, and toward more customized services. We believe there will be a large number of mid-sized providers."

Friday, December 5, 2008

Building EMC Atmos

EMC has announced Atmos as one element of its cloud computing strategy.

Atmos is aimed at large-scale data management on cloud computing infrastructure, and it is built on a concept and architecture called COS (Cloud Optimised Storage), which is distinct from SAN and NAS.

Data managed by COS is stored as objects and governed by policies. Atmos is designed for petabyte-scale deployments, and there are no LUNs, RAID groups, or other constructs required by a SCSI interface; COS manages only the objects themselves and their metadata. Objects are accessed via SOAP or REST.

Regardless of where an object physically resides, objects are managed through a single, unified management console interface.
 
 

EMC Atmos is EMC's first Cloud Optimised Storage offering designed for policy based information storage, information distribution and information retrieval at a global scale. GA code shipped at the end of June and customers and partners have been deploying Atmos repositories in their own environments since the second half of 08.

While some competitors were flapping their gums and asking whatever crazy questions came into their heads EMC was shipping a product whose team didn't miss a single milestone and met their ship date. Now that the marketing machine has spun up and the EMC Sales sledgehammer is about to drive those competitors into the ground I'll be following their backtracking with some enthusiasm.

So what is EMC Atmos? What Atmos isn't is a clustered file system or a warmed-over NAS offering, clustered or otherwise. Atmos(phere) was designed by the Cloud Infrastructure and Services Division (CISD) from the ground up with a number of distinct characteristics.

  • Information inside the Atmos repository is stored as objects. Policies can be created to act on those objects and this is a key differentiator as it allows Atmos to apply different functionality and different service levels to different types of users and their data. Managing information, which is what we should be doing, as opposed to wrangling blocks and file systems as we tend to do.

  • There is no concept of GBs or TBs to EMC Atmos, those units of storage capacity are too small, Atmos is designed for multi-Petabyte deployments. There are no LUNs. There is no RAID. There are only objects and metadata.

  • There is a unified namespace. Atmos operates not on individual information silos but as a single repository regardless of how many Petabytes containing how many billions of objects are in use spread across whatever number of locations available to who knows how many users.

  • There is a single management console regardless of how many locations the object repository is distributed across. This global-scale approach means that Atmos had to be an autonomic system, automatically reacting to environmental and workload changes, as well as failures, to ensure global availability.

What those traits should highlight for you is that Atmos isn't a SAN offering, isn't a NAS offering, and isn't a CAS offering either. It's a COS offering: cloud optimised storage, with web services such as SOAP and REST for access.
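To make the access model concrete, here is a minimal sketch of what storing and reading an object over REST might look like, written in Python with the standard library. The host, paths, and metadata headers are hypothetical placeholders; the actual Atmos REST namespace and authentication scheme are not described in this post.

```python
import http.client

# Hypothetical endpoint; the real Atmos REST URL scheme and
# authentication are not covered in this post.
HOST = "atmos.example.com"

def put_object(path, data, metadata):
    """Store an object together with user metadata via a REST PUT."""
    conn = http.client.HTTPSConnection(HOST)
    headers = {"Content-Type": "application/octet-stream"}
    # Illustrative convention: carry metadata as prefixed request headers.
    for key, value in metadata.items():
        headers["x-object-meta-" + key] = value
    conn.request("PUT", path, body=data, headers=headers)
    resp = conn.getresponse()
    resp.read()
    conn.close()
    return resp.status

def get_object(path):
    """Read an object back via a REST GET."""
    conn = http.client.HTTPSConnection(HOST)
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    return resp.status, body

if __name__ == "__main__":
    status = put_object("/rest/objects/photos/joe/1.jpg",
                        b"...jpeg bytes...",
                        {"customer": "joe", "tier": "subscription"})
    print("PUT status:", status)
```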

By now there's a lot of info on Atmos on the various blogs and up on EMC.com, but this entry is about "Building EMC Atmos" and for that information I went to one of the Atmos architects, Dr. Patrick Eaton.

Patrick Eaton received his PhD from Berkeley and was one of the primary members of Professor John Kubiatowicz's OceanStore project. As I learned from speaking to him, he's been thinking about stuff like this for a number of years, and if he wasn't building globally distributed storage systems he'd be indulging his passion for music, working in the field of digital sound for a company like Yamaha or Korg.

With a tinge of regret he tells me that these days he's more of a consumer than a creator of music but as he's been busy building something new from the ground up that's understandable.

In person he's taller and younger than I had expected, he smiles easily and comes across as an open personality. Clearly not one of these academic types who had their sense of humour surgically removed before they submitted their thesis.

As I was to learn, Atmos started with five people out at the EMC Cambridge facility, working on its floor-to-ceiling whiteboards, looking to solve a problem.

"Fundamentally this was a distributed systems problem. How do you take a loose collection of services distributed across a wide area and make them operate as you want them to operate?"

Fortunately for me this isn't a question I have to answer, or I'd need more than the floor-to-ceiling whiteboards, but he pauses for a split second before moving on.

"EMC is really good at selling high end storage to really high end people. If you can drop tens of dollars per GB on a storage system man does EMC have offerings for you but data growth is continuing to explode and not everybody has data which justifies that level of expenditure or has the financial resources to justify spending that much money on storage. So, EMC was coming across a customer segment for whom they didn't have an offering and the goal for Atmos was to provide a low cost bulk storage system for these emerging markets, like Web 2.0 companies or other industries with lots of user generated content.

Yes you can put that stuff on regular SAN or NAS systems and that's what customers have been doing as the only other option was to start writing and maintaining their own storage software and build their own storage hardware. That's far from ideal as the value of these companies is in their applications and the services those applications provide. 

What we needed to do was provide a Terabyte at something like ten or more times cheaper than existing SAN or NAS storage systems can offer. That is the problem Atmos was designed to solve, and a key part of the product vision comes from the policy driven features of Atmos. Yes you're targeting the bulk storage market, the TME and Web 2.0 spaces with those mountains of user generated content, but people want to use that storage in very different ways. Some people want to have one data centre, some want two, others want many more. Some need to support different types of workloads, various types of object sizes, control where they locate specific objects and how they get them close to their customer regardless of where on the planet the customer is located in relation to where the data was first stored.

At the core of the Atmos design is how we allow customers to define policies as to how data actually hits disk. There are no administrators saying "Joe's photos should be on this particular piece of spinning rust"; rather, they write policies describing how Joe is a subscription customer, therefore his files require a certain number of copies associated with them for backup and should have a certain rolling retention policy in case he cancels his account. Thus they should be in this data centre here and not in one thousands of miles away.

But if Joe packs up the family and the dog and moves across country his data may be replicated to the data centre now closest to him depending on the policies applied to his files.

Information management is something EMC talks about a lot, so providing a storage solution designed with policy based information management at its core is a big thing we wanted to do with Atmos. You're not just storing information, you're replicating it to where it's needed and putting it as close to the user as possible. You're compressing it, de-duplicating it or deleting it depending on what policies are applied to it, and if it hasn't been accessed in a while you can even spin down the drives inactive objects are stored on to save power.
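As an illustration of what such a policy might express, the sketch below models the "Joe is a subscription customer" example as a plain Python structure with a simple selection function. The field names and the policy engine are hypothetical; Atmos's real policy language is not shown in the post.

```python
# Hypothetical policy description mirroring the "Joe" example above.
# Atmos's actual policy syntax is not shown in the original post.
POLICIES = [
    {
        "name": "subscription-customer",
        "match": {"customer_tier": "subscription"},
        "actions": {
            "replicas": 3,                # keep extra copies for backup
            "retention_days": 90,         # rolling retention if the account is cancelled
            "placement": "nearest-datacenter",
            "spin_down_after_days": 30,   # idle objects can live on spun-down drives
        },
    },
    {
        "name": "default",
        "match": {},
        "actions": {"replicas": 1, "placement": "any"},
    },
]

def select_policy(object_metadata):
    """Return the first policy whose match conditions all hold for the object."""
    for policy in POLICIES:
        if all(object_metadata.get(k) == v for k, v in policy["match"].items()):
            return policy
    return None

print(select_policy({"customer": "joe", "customer_tier": "subscription"})["name"])
```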

Multi-tenancy, could we talk about that a bit more? Could I offer storage as a service to different users or organisations?

"Yes you could. Multi-tenancy means that Atmos can support many different tenants with logical isolation. Each tenant can have their own private namespace under the Atmos namespace but tenants are not aware of other tenants or the objects belonging to those tenants.

You could be providing services to users out on the Internet and hosting application test and dev as well as providing services to your internal business units, but none of those tenants would know about each other."
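As a toy illustration of that kind of logical isolation (hypothetical naming, not Atmos's actual namespace layout), each tenant's requests can be confined under a per-tenant prefix, so one tenant can never address another tenant's objects:

```python
# Toy model of per-tenant namespaces under one shared repository.
class Repository:
    def __init__(self):
        self._objects = {}  # full path -> bytes

    def _full_path(self, tenant, path):
        # Every request is scoped under the caller's tenant prefix,
        # so tenants cannot name objects outside their own namespace.
        return f"/{tenant}/{path.lstrip('/')}"

    def put(self, tenant, path, data):
        self._objects[self._full_path(tenant, path)] = data

    def get(self, tenant, path):
        return self._objects.get(self._full_path(tenant, path))

repo = Repository()
repo.put("tenant-a", "photos/1.jpg", b"a's data")
repo.put("tenant-b", "photos/1.jpg", b"b's data")
print(repo.get("tenant-a", "photos/1.jpg"))  # tenant-b's copy is invisible here
```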

We were talking about this being a low cost solution, so what's low cost at the scale we are talking about here? Sure, there's capacity cost, but it's not just that...

"Well not only does the initial cost of delivering the product to the doorstep have to be low but also it has to be something that the customer can maintain very easily and we're talking about the Petabyte range when we're talking about deploying this so one of the key design elements was how to provide a customer installable configurable and maintainable implementation.

Going back to the traditional EMC model of "We'll make sure it works but you're going to pay for it", where parts show up at your door with a service engineer attached, well, that shoots the entire low cost target out of the water if you have to do that more than a few times a year. That's why a lot of the installation, configuration and maintenance can be done by the customer themselves.

Low cost, low touch, incredible scale and density. Billions of objects globally distributed with policy based information management. Petabytes of storage which could be in the same room or distributed around the world but with a single point of management. Those were some of the design goals." 

Okay, so you've built and shipped Atmos. We were talking about having this pre-announcement chat back when you were just about to head off on holiday this past summer, right after the code went GA, so what have you learned from building a product as opposed to working on a project?

"I learned a lot about managing cross continent teams. Maybe 50% of our developers and 80% of our QA is split between Beijing and Shanghai China. That's a 12 hour difference which can be challenging since there's no overlap during the day and there are cultural communication differences to factor in.

When the group was smaller I was exposed more to customer interactions, and it was always interesting to get feedback and find out how they plan on using Atmos as opposed to how you think they'll use it. Now that it's up and running in their environments I get a different kind of feedback, as I'm watching how they're actually using the product in production.

I was also blessed to join this group when there were five of us. I've been able to grow with the group and assume some responsibility and some leadership which has stretched me and it's a stretching that a lot of freshly minted PhDs don't get so early on in their career. It was pretty natural when there was five people here and maybe ten over there that I could take well defined pieces of the system and then lead them through implementation. Now that we've grown to over a hundred people you can't take the people who've been there the longest and have them doing that.

I've been really blessed that way and really fortunate to have been able to join an organisation in its infancy and be able to grow with the organisation. The opportunity here has really been amazing."

You moved from California to Massachusetts to join EMC and build Atmos from the ground up; how did the move to the east coast turn out for you?

"We love it here. My wife and I are from the mid-west, which does have winters, so the seasons have made a welcome return. California has beautiful weather but it can start to feel like Groundhog Day while here the seasons are refreshing. The city is nice and I tell my manager all the time that we need to recruit more in California as there's not a whole lot of places you can draw from in the US and with a straight face tell them that Boston has more affordable houses and better commutes.

Californians you can say that to and it's true."

Saturday, November 22, 2008

Will Investors Press Sun to Make a Deal?

Regarding Sun Microsystems' business troubles, a growing number of investment bankers are commenting that the company is likely to be broken up and its units sold off.
 

Could investors force Sun Microsystems (JAVA) to seek an acquirer or sell off units? That's not a new speculation, but one that's examined at length in a Reuters story yesterday, in which investment bankers say that "the challenge of valuing Sun's intertwined software, hardware and services businesses could put off potential buyers" such as rivals HP, IBM and Dell.

The hook for the Reuters story is the news last month that investment firm Southeastern Asset Management has increased its stake in Sun to 21 percent and said in an SEC filing that it "will have additional conversations with management and/or third parties, regarding opportunities to maximize the value of the company."

What's likely to happen? The Reuters analysis concludes that it might be easier to sell off a unit, such as StorageTek, than find an acquirer interested in buying all of Sun. But Reuters also acknowledges that some of its sources are investment bankers who might benefit from asset sales.

Tuesday, November 11, 2008

Cloud Economics: Microsoft, Google & Amazon | ITworld

An article comparing the earnings of Microsoft, Google, and Amazon from an interesting angle.

Comparing gross profit per employee, Amazon comes out strikingly lower than the other two. This is largely because Amazon's retail business operates on much thinner margins than a software business.
 
 

Cloud Economics: Microsoft, Google & Amazon

Yesterday Microsoft launched its Azure cloud platform, so it is time for another spreadsheet. To properly compare Microsoft, Google and Amazon, I am using the gross profit (instead of revenue) and net profit numbers. Gross profit is, in some sense, the real revenue of a company after paying its outside suppliers; gross profit is what is available to pay its employees, pay the rent, and so on. For a software company, the cost of goods sold is close to zero, so most of the revenue is gross profit. But for a retailer, as much as 70-80% of revenue goes to its suppliers, so gross profit is the better measure of the economic productivity the company achieves. The numbers below use rough annualized estimates based on the most recent quarter.

Company     Gross Profit ($B)   Net Profit ($B)   Employees   Gross Profit/Employee ($K)   Net Profit/Employee ($K)
Microsoft   48                  17.68             91,000      527                          194
Google      12                  5                 20,123      596                          248
Amazon      4                   0.8               20,500      195                          39
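A quick sanity check of the per-employee columns, derived directly from the gross profit, net profit, and headcount figures in the table (a minimal sketch; these are the rough annualized estimates quoted above, not audited figures):

```python
# Rough annualized figures from the table above (profits in billions USD).
companies = {
    "Microsoft": {"gross": 48.0, "net": 17.68, "employees": 91_000},
    "Google":    {"gross": 12.0, "net": 5.0,   "employees": 20_123},
    "Amazon":    {"gross": 4.0,  "net": 0.8,   "employees": 20_500},
}

for name, c in companies.items():
    gp_per_emp = c["gross"] * 1e9 / c["employees"] / 1e3  # thousands of USD
    np_per_emp = c["net"] * 1e9 / c["employees"] / 1e3
    print(f"{name:9s}  gross/employee ~ {gp_per_emp:4.0f}K   net/employee ~ {np_per_emp:4.0f}K")

# Output matches the table: Microsoft ~527K/194K, Google ~596K/248K, Amazon ~195K/39K.
```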

Do you notice the dramatic difference? Google and Microsoft are on another planet altogether compared to Amazon. Google has practically the same headcount as Amazon, yet drives three times the gross profit. The numbers really illustrate Amazon's competitive strategy in cloud computing; to quote Nick Carr:

Bezos goes on to note that Amazon's retailing operation is "a low gross margin business" compared to software and technology businesses, which "tend to have very high margins." The relatively low profitability of the retailing business gave Amazon the incentive to create a highly efficient, highly automated computing system, which in turn could become the foundation for a set of cloud computing services that could be sold at low enough prices to attract a large clientele. It also made a low-margin utility business attractive to the firm in a way that it isn't for a lot of large tech companies who are averse to making big capital investments in new, low-margin businesses.

"On the surface, superficially, [cloud computing] appears to be very different [from our retailing business]," Bezos sums up. "But the fact is we've been running a web-scale application for a long time, and we needed to build this set of infrastructure web services just to be able to manage our own internal house."

Microsoft's announcement is interesting from a technology point of view, but it is hard to see how the economics would work for them against Amazon. It is very hard for companies to go down the value chain for growth, so I am skeptical Microsoft would easily accept Amazon-like margins. On the other hand, for Amazon, cloud services have to deliver only a little higher margin than retail to be well worth the investment. That is not a tough hurdle, because retail is one of the toughest businesses out there.

Tuesday, November 4, 2008

Information Technology - Rackspace, Persistent Systems Partner for 'Total Stack' SaaS Offerings

Persistent Systems, an upstream consulting and systems integration firm, has announced a partnership with Rackspace covering a range of business collaborations.

Persistent Systems is an Indian offshore systems integrator, backed in part by Intel Capital, that has been growing rapidly. The aim is to combine Persistent's application development services with the various hosting offerings Rackspace provides, delivering integrated development and operations services for customers' business applications.

Partnerships with the SI business look likely to become one strategy for customer acquisition in the hosting industry.
 
 

Rackspace, Persistent Systems Partner for 'Total Stack' SaaS Offerings

Persistent Systems, a vendor of Outsourced Product Development services, and Rackspace Hosting, a vendor of hosted IT services, have entered into a partner agreement where both companies make available each other's product, "ultimately creating a total product stack for customers — from managed infrastructure to software development and enablement for Software-as-a-Service (SaaS), Web applications, data migration and application support," Persistent officials said.
 
Through this relationship, Rackspace officials said, the company can recommend Persistent's full array of software development services to customers, who can use them over the life of their applications and products. This product partner agreement comes on the heels of a successful outsourcing product development relationship between Rackspace and Persistent.
 
"Rackspace had never worked with an offshore vendor before," said Tony Campbell, director of the software development division at Rackspace Hosting, in a statement. "We engaged Persistent in order to see how the model would work."
 
Rackspace officials said they use Persistent's expertise on a variety of projects, and avail themselves of "the knowledge of Persistent's people… its data warehouse and Java competency centers."
 
Last year a customer survey conducted by Rackspace found that nearly 36 percent of responding SaaS customers do not know the uptime guarantees provided in the SaaS vendor Service Level Agreement although, the survey found, "security, application uptime and network connectivity are among their top technical concerns."
 
The survey also concluded that 49 percent of enterprise Software as a Service (SaaS) customers do not know where the infrastructure behind their SaaS application lies, whether it is hosted internally with the SaaS provider or through a third-party hosting provider.
 
John Engates, chief technology officer, Rackspace Managed Hosting, said at the time "SaaS providers need to clearly communicate their hosting and infrastructure details in the Service Level Agreement, drilling down to security promises, uptime guarantees, network connectivity, data backup processes and more. This way, customers are aware of their SaaS provider's service obligations, and they can rest assured their mission-critical applications such as e-mail or Customer Relationship Management software will perform as promised."

GE Will Use SaaS To Manage 500,000 Suppliers -- Software As A Service -- InformationWeek

As an example of SaaS gaining recognition in the enterprise market, this article describes how GE is moving the information management of its supply chain to SaaS.

GE has uploaded information on 500,000 suppliers to a SaaS application provided by Aravo, where it is operated and managed. The data is accessed by 300,000 GE employees in six languages, and GE expects significant cost savings.

Aravo is a small vendor based in San Francisco, but it was chosen because it provided exactly the functionality needed and brought the system into production in seven months.

It is interesting that, because SaaS application integration projects tend to be smaller in scale, small vendors have a real chance of winning large customers this way.
 
 
 
 

GE Will Use SaaS To Manage 500,000 Suppliers


The electronics firm's partnership with Aravo's Web-based software service is the latest example of how even the largest of companies are getting comfortable with SaaS.
 
GE is using a small software-as-a-service company for an online repository of information on 500,000 global suppliers. It's an example of how even the very largest of companies are getting more comfortable with having their critical business data managed in a SaaS environment.

GE, with revenues last year of $172.7 billion, made Aravo Solution's supplier-information management service available to employees in mid-October. It's available in six languages to any of GE's 300,000 employees worldwide involved in the supply procurement process, including workers that order office supplies, managers that negotiate large commodity purchases, and legal staffers that ensure suppliers are in compliance with local laws and taxes. Features, functions, and information access vary depending on user roles.

The system is expected to bring GE "significant cost savings, while improving data accuracy, compliance and productivity," said GE CIO Gary Reiner in a statement issued Wednesday. GE established a data repository of supplier information more than 10 years ago, but it wasn't built for widespread access via the Web. The Aravo service will be used across GE, including GE Capital, NBC Universal, and numerous companies it runs in such areas as appliances, aviation, energy, and healthcare.

GE chose the small San Francisco company's software service because the application's features fit what it needed in a supplier information repository, and because it lets suppliers update their own information over the Web, said Thomas Hattier, an operations manager in GE's Corporate Initiatives Group, in an interview. That includes updating account contact information and completing online forms to prove that a supplier has complied with local laws. Aravo uses computer-hosting company Rackspace to run its system.

While the deal could be viewed as a win for cloud computing, GE didn't select Aravo out of a new great appreciation for the trend, Hattier said, adding that "cloud computing is getting cloudier" because of all the IT vendor hype.

"We certainly are large enough to host and mange Web-based solutions ourselves. We don't need to go outside to do that," he said.

But GE did appreciate that Aravo was fast to implement -- seven months from start to finish -- a function of its being a service rather than an onsite software deployment. The bulk of the work was transferring supplier information from GE's legacy repository into Aravo.

GE is working with Aravo to integrate the supplier information repository with its own requisition workflow processes, which are run by onsite software. Some but not all of the work is done: "We'll be doing workflow integration through the rest of the year into a variety of different places," Hattier said.

Some suppliers still need to update their information in the system, he added. And while GE has found a handful of suppliers out of 100 countries that resist a Web-based approach to updating and validating their information, most are embracing it.

Big Day for Amazon EC2: Production, SLA, Windows, and 4 New Capabilities

Amazon has made several announcements about its services.

The biggest news is that EC2 has exited beta and moved into full production.
This also means Amazon now offers a formal SLA, which will have a significant impact on the cloud computing companies that have been building value-added services on top of AWS.

It was also announced that Windows Server and SQL Server environments will be offered for the first time, a major step forward for the ISV industry.

Some analysts liken this strategy of continually adding capabilities to expand an already broad market to Walmart's approach in the retail industry.

 

Big Day for Amazon EC2: Production, SLA, Windows, and 4 New Capabilities

My colleagues and I have spent the week building up anticipation for this post on Twitter. After you read this post I am sure that you will agree that the wait was worthwhile.

The hallways at Amazon have been buzzing with excitement of late. After working for years to build and to run our line of highly scalable infrastructure web services we are happy to see that developers large and small are putting them to good use.

Here's what's happening today:

  • Amazon EC2 is now in full production. The beta label is gone.
  • There's now an SLA (Service Level Agreement) for EC2.
  • Microsoft Windows is now available in beta form on EC2.
  • Microsoft SQL Server is now available in beta form on EC2.
  • We plan to release an interactive AWS management console.
  • We plan to release new load balancing, automatic scaling, and cloud monitoring services.

Let's take a look at each of these items in turn.

Production - After a two year beta period, Amazon EC2 is now ready for production. During the beta we heard and responded to an incredible amount of customer feedback, adding support for powerful features such as Availability Zones, Elastic Block Storage, Elastic IP Addresses, multiple instance types, support for the OpenSolaris and Windows operating systems, and (as of today) a Service Level Agreement. Regular EC2 accounts are allowed to run up to 20 simultaneous instances. Requests for hundreds and even thousands of additional instances are granted all the time and can be made here.

SLA - The new EC2 Service Level Agreement works at the Region level. Each EC2 Region (there's only one right now but there will be more in the future) is divided into a number of Availability Zones. The SLA specifies that each Region will be available at least 99.95% of the time. Per the SLA, a Region is unavailable if more than one of its Availability Zones does not have external connectivity.
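For a sense of what 99.95% availability allows, here is a minimal sketch of the arithmetic, assuming a 30-day month; the SLA itself defines the actual measurement window and credit mechanics.

```python
# Translate an availability percentage into allowed downtime per 30-day month.
def allowed_downtime_minutes(availability_pct, days=30):
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100.0)

print(allowed_downtime_minutes(99.95))  # ~21.6 minutes per month
print(allowed_downtime_minutes(99.5))   # ~216 minutes, for comparison
```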

Windows Support - Beta level support for Microsoft Windows is now available on EC2, in the form of 32 and 64 bit AMIs, with pricing starting at $0.125 per hour. Microsoft SQL Server is also available in 64 bit form. All of the powerful EC2 features listed above can be used with the new Windows instances and we'll be adding support for DevPay in the near future.

Once launched, the Windows instances can be accessed using the Windows Remote Desktop or the rdesktop client. I've spent some time using Windows on EC2 and it works really well. I used the EC2 command line tools to launch a 32 bit instance, opened up an additional port in the security group, and then logged in to it using Remote Desktop.
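The author used the EC2 command line tools; as an alternative illustration, here is a minimal sketch of the same steps (open the Remote Desktop port, launch a Windows AMI) using the boto Python library. The AMI ID, key pair, and group name are placeholders, and this is an assumption about how one might script it, not what was actually run.

```python
import boto

# Placeholder AMI ID, key pair, and security group; substitute real values.
WINDOWS_AMI = "ami-xxxxxxxx"
GROUP_NAME = "windows-demo"

conn = boto.connect_ec2()  # reads AWS credentials from the environment

# Create a security group and open the Remote Desktop port (TCP 3389).
group = conn.create_security_group(GROUP_NAME, "Windows RDP access")
group.authorize("tcp", 3389, 3389, "0.0.0.0/0")

# Launch a single 32-bit Windows instance into that group.
reservation = conn.run_instances(
    WINDOWS_AMI,
    instance_type="m1.small",
    key_name="my-keypair",
    security_groups=[GROUP_NAME],
)
instance = reservation.instances[0]
print("Launched", instance.id, "- connect with Remote Desktop once it is running")
```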

 

We'll be running Windows on EC2 at next week's PDC in Los Angeles, so be sure to stop by and to say hello if you are there. Rumor has it that we'll be giving out a really cool badge to the people who stop by our booth.

RightScale founder Thorsten von Eicken has written up a helpful post which outlines the differences between Windows and Linux with respect to launching, accessing, bundling, and using the Elastic Block Store. He also describes current and planned support for Windows in their products.

We've updated ElasticFox with a number of new features, including direct access to EBS and Elastic IP addresses from the main tab, one-click AMI bundling on Windows, better key and security group management, and the ability to directly launch Remote Desktop sessions. There's also a brand-new (and very helpful) ElasticFox Getting Started Guide.

We are looking forward to seeing how our customers will put Windows to work. We expect to see ASP.Net sites, media transcoding, HPC (High Performance Computing), and more. I've talked to a number of developers who will deploy hybrid web sites using a mix of Linux and Windows servers. This really underscores the open and flexible nature of EC2.

We are also planning to offer some new capabilities in 2009 to make managing cloud-based applications even easier. As usual, we'll start with a private beta and you can express your interest here.

Management Console - The management console will simplify the process of configuring and operating your applications in the AWS cloud. You'll be able to get a global picture of your cloud computing environment using a point-and-click web interface.

Load Balancing - The load balancing service will allow you to balance incoming requests and traffic across multiple EC2 instances.

Automatic Scaling - The auto-scaling service will allow you to grow and shrink your usage of EC2 capacity on demand based on application requirements.

Cloud Monitoring - The cloud monitoring service will provide real time, multi-dimensional monitoring of host resources across any number of EC2 instances, with the ability to aggregate operational metrics across instances, Availability Zones, and time slots.

Amazon CTO Werner Vogels has done a very nice job of explaining why services of this type are needed to build highly reliable and highly scalable applications. His blog is a must read for those interested in cloud computing. Werner has spent so much time talking about AWS of late that I've asked him to be an honorary member of my team of AWS evangelists!

I think it is important to note that load balancing, automatic scaling, and cloud monitoring will each be true web services, with complete APIs for provisioning, control, and status checking. We'll be working with a number of management tool vendors and developers to make sure that their products will support these new services on a timely basis.

So, there you go. What do you think?

-- Jeff;

Amazon Web Services Blog / Thu, 23 Oct 2008 12:52:49 GMT


Saturday, November 1, 2008

Amazon Cites Momentum as EC2 Exits Beta

Amazon Web Services' EC2 (Elastic Compute Cloud) service has finally exited beta and launched as a full commercial offering. Along with this, a formal SLA is being provided, guaranteeing 99.95% uptime.
With the end of beta and the arrival of an SLA, enterprise adoption is expected to rise significantly.
  • Amazon's EC2 compute-on-demand service moved out of beta and into production today, with the key difference being that there's now a Service Level Agreement (SLA) ensuring customer credits should EC2's uptime fall below 99.95 percent. Amazon previously offered an SLA for its S3 storage service, but not EC2. Windows Server and Microsoft SQL Server are now available in beta for EC2, which is also adding a management console, load balancing and monitoring services.

    These additions are the latest advances in Amazon Web Service's transition from a playground for developers into a cloud platform offering on-demand services suitable for startups and enterprises alike. While the definition of "beta" has become decidedly fuzzy (Google has half its products in beta, including Gmail), there's no question that beta status and the lack of an SLA are a barrier to adoption for many enterprises. EC2 has now eliminated those potential resistance points.

  • Will Microsoft Shake Up Cloud Computing SLAs? - Plug Into The Cloud

    According to Gianpaolo Carraro, Microsoft's director of SaaS architecture, Azure intends to compete on the depth of its SLAs.
    In a typical hosting business the core of the SLA is an uptime guarantee, but he argues that this alone should not be enough for enterprise customers.
    The idea is to offer, in a cloud computing environment, the management capabilities enterprises normally rely on for internal mission-critical systems: tools to continuously monitor the environment their applications run in, more detailed application-level information such as guarantees on the average processing time of an application's transactions, APIs for systems management, on-demand data backup, rollback, and so on.
    Concretely, Microsoft is considering giving Azure users an option to specify on which servers their applications run.

    Will Microsoft Shake Up Cloud Computing SLAs?

    For Microsoft, which announced its cloud computing platform this week, it seems like service level agreements don't just mean uptime. That thinking could shake up the industry.

    Though Microsoft isn't giving any firm details on its plans for service level agreements with Windows Azure and the Azure Services Platform, presentations and discussions with Azure's developers here have dropped plenty of hints.

    In a presentation at the company's Professional Developers Conference today, Microsoft's Gianpaolo Carraro, director of SAAS architecture in the company's architecture strategy team, said that SLA's shouldn't just mean meeting uptime promises, though that's the common definition of cloud SLA's these days. Companies will typically offer something like a 99.5% uptime promise, and if that isn't met, customers get some sort of refund.

    That just doesn't go far enough, Carraro said. Not only should these uptime promises be available, but instead of the cloud vendor measuring the uptime, customers should be able to have some sort of monitoring meter to understand what the uptime actually was for them, and then reconcile that with the cloud vendor's measure. That may be challenging, since it will be difficult to determine whether it was the cloud vendor, the network provider, or the customer who was actually at fault.

    But that's just the start. He added that cloud SLA's might also include other optional features, such as promises that certain types of transactions will take a certain length of time, management APIs, programmatic access to the health model of a service, the choice to make applications use firewall friendly protocols, the ability to pause or stop an application or a piece of one from running on the fly, and the ability to do things like trigger back-up of data at certain points in time.

    In addition, one of Azure's developers said in an interview, Microsoft is considering SLA's that would enable companies to do things like choose the geographic location where they want their applications to run, though that would likely be expensive.

    These shifts could have profound implications for the services industry, since many vendors today resist setting up even uptime-based SLA's for cloud-based applications because many simply can't deliver on their promises. However, offering SLA's that don't hinge only on uptime should actually be more of a win for service providers, since they hedge problems with uptime with more controllable options. In the long run, offering a plethora of cloud SLA's should even entice hesitant companies to move to the cloud.

    Microsoft's Cloud: Windows Azure

    Azure is Microsoft's own cloud computing environment, announced at the recent PDC.
    It is still at the concept stage; aside from folding existing SaaS businesses such as Dynamics and SharePoint into this architecture, little more than a file backup service has launched, so my reading is that the overall effort will still take time.
    Prominent industry analysts are divided: most describe Microsoft's strategy as a counter to Amazon Web Services, while many also comment that only Microsoft can deliver the .Net environment in its most optimized form in the cloud, giving Microsoft an overwhelming advantage as far as existing .Net development is concerned.
    Either way, this means that most of the major IT vendors have now entered cloud computing, and the first year of cloud computing is moving on to its next stage.



  • Microsoft's new cloud development platform, Windows Azure, was unveiled today by Microsoft chief software architect Ray Ozzie at the Microsoft Professional Developer's Conference in Los Angeles. Windows Azure provides developers on-demand compute and storage to host web applications and services in Microsoft's data centers. Azure provides Microsoft with an online developer platform to compete with Amazon Web Services, Google App Engine and a growing number of smaller platforms.

    Azure was released as a "community technology preview" and won't be available until next year. Few details were released about pricing, with Ozzie saying it would be "competitive" and based on resource consumption.

    "Today marks a turning point for Microsoft and the development community," Ozzie said. "We have introduced a game-changing set of technologies that will bring new opportunities to Web developers and business developers alike. The Azure Services Platform, built from the ground up to be consistent with Microsoft's commitment to openness and interoperability, promises to transform the way businesses operate and how consumers access their information and experience the Web.

  • Roundup: Rackspace's Cloud Acquisitions

    Industry reaction to Rackspace's recent acquisitions.
    While the prevailing view is that this is a counter to Amazon Web Services, many expect it to lead to price competition and improvements in quality (SLAs), so the reaction is positive on the whole.
  • There was some interesting commentary around the blogosphere on Rackspace's acquisition of Jungledisk and Slicehost to accelerate the rollout of its cloud computing services. Here's some of the analysis:

    • Stacey Higginbotham at GigaOm calls Rackspace's acquisitions "a move aimed at boosting its Mosso cloud offerings in order to step up competition with Amazon's Web Services. But whether or not Rackspace can successfully transition from being a hosting provider to a provider of a truly on-demand cloud offering remains to be seen."
    • Does Rackspace want to transition? Not so long as customers are split between the two formats. "Rackspace has the opportunity to combine the best of both worlds, bringing cloud and traditional hosting together," CTO John Engates tells Information Week.
    • Gartner analyst Lydia Leong says Rackspace's announcement "signals an intent to be much more aggressive in the cloud space than I think most people were expecting."
    • Jason Kincaid at TechCrunch says the Rackspace expansion is good news for developers. "Amazon has been the dominant force in this space for some time, and competition will only decrease prices and (hopefully) lead to an arms race in features, stability, and performance," he writes.
  • Rackspace's deal with Limelight

    As Rackspace makes a serious push into the cloud computing business, it has announced a partnership with Limelight.
    Limelight is a vendor of CDN services; through the collaboration it is expected to provide CDN services on Mosso, Rackspace's cloud computing business. This is intended to counter the CDN service Amazon Web Services has begun offering.
    Offering a CDN makes it possible to support large-scale Internet businesses, so it should contribute significantly to growing the business.

    Rackspace's deal with Limelight

    Rackspace announced yesterday, as part of a general unveiling of its cloud strategy, a new partnership with Limelight Networks.

    Under the new partnership, customers of Rackspace's Cloud Files (formerly CloudFS) service — essentially, a competitor to Amazon S3 — will be able to choose to publish and deliver their files via Limelight's CDN. Essentially, this will place Rackspace/Limelight in direct competition with Amazon's forthcoming S3 CDN.

    CDN delivery won't cost Cloud Files customers any more than Rackspace's normal bandwidth costs for Cloud Files. Currently, that's $0.22/GB for the first 5 TB, scaling down to $0.15/GB for volumes above 50 TB. Amazon S3, by comparison, is $0.17/GB for the first 10 TB, down to $0.10/GB for volumes over 150 TB; we don't yet know what its CDN upcharge, if any, will be. As another reference point, Internap resold via SoftLayer is $0.20/GB, so we can probably take that as a reasonable benchmark for the base entry cost of CDN services sold without any commit.
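    To compare the quoted per-GB rates, here is a minimal sketch that uses only the tier boundaries mentioned above; the intermediate tiers between the entry band and the top discount band are not spelled out in the post, so the function reports them as unknown, and real billing will differ.

```python
# Per-GB rates quoted in the post (USD). Only the entry band and the top
# discount band are given; anything in between is treated as unknown here.
def marginal_rate(gb_per_month, entry_rate, entry_limit_gb, discount_rate, discount_from_gb):
    if gb_per_month <= entry_limit_gb:
        return entry_rate
    if gb_per_month > discount_from_gb:
        return discount_rate
    return None  # falls in an intermediate tier the post does not spell out

TB = 1024  # GB per TB (approximate; providers may bill in decimal units)

for volume_tb in (3, 60, 200):
    rackspace = marginal_rate(volume_tb * TB, 0.22, 5 * TB, 0.15, 50 * TB)
    s3 = marginal_rate(volume_tb * TB, 0.17, 10 * TB, 0.10, 150 * TB)
    print(f"{volume_tb:>4} TB/month  Rackspace: {rackspace}  S3: {s3}")
```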

    It's a reasonably safe bet that Limelight's CDN is going to deliver better performance than Amazon's S3 CDN, given its broader footprint and peering relationships, so the usual question of, "What's the business value of performance?" will apply.

    It's a smart move on Rackspace's part, and an easy way into a CDN upsell strategy for its regular base of hosting customers, too. And it's a good way for Limelight to pre-emptively compete against the Amazon S3 CDN.

    Friday, October 31, 2008

    Look up at the "Cloud" its getting bigger

    An article discussing the cloud computing market at a macro level.
    In addition to the cloud computing strategies of IBM, VMware, and Google covered in the article, Microsoft's Azure has now been added to the list, which means every major software vendor has laid out a cloud computing strategy.
    Whether cloud computing should be viewed as an OS is a matter of interpretation, but either way it is a new component of core software, and it is clear that it is once again being dominated by a group of vendors centered on US companies.


    The internet or "cloud" is ever evolving and is a major part of our daily lives from personal to business. Cloud computing is the term that is gaining more and more attention that describes how businesses incorporate software as a service, Web 2.0 and other recent trends that rely on the internet to provide these services to the home and office.

    It is the future that may one day have our computing needs satisfied by a monitor, cell phone and/or connection to the internet. Everything else will be accessible via the cloud, from business applications and other software to storage, all on a server hosted by someone else.
    Cloud computing has been confused with Grid computing, which is a method of combining many computers to form a cluster of networked systems capable of performing large tasks.

    It would seem, however, that the convergence of Grid and the Cloud will someday happen to offer something even larger.

    It is no wonder that so many companies, especially the elite of the data center industry, are building and expanding new facilities every month. If the cloud is the next wave they are ready to ride it in.

    The most recent of cloud computing announcements comes from IBM. They have opened four new cloud computing centers in emerging markets. They are in Sao Paulo, Brazil; Bangalore, India; Seoul, Korea; and Hanoi, Vietnam, where there is an increasing demand for Internet-based computing models and skills to help companies compete in highly competitive environments.

    With previously opened centers in both emerging and mature markets, IBM now has 13 cloud computing centers, the world's largest network of expertise on cloud computing.

    If the power and rising cost trend continues it is likely that companies will begin to look ever closer at this option that will provide them a data center without having to man it.

    IBM has been perfecting the cloud computing model for clients around the world. For instance the new facility in Brazil will generate business such as massive scale collaboration programs. As Internet users in Brazil acquire more mobility, cloud computing will make Web-based business operations more efficient.

    "Cloud computing is emerging as a fundamental change in IT approach," said Dr. Willy Chiu, vice president of IBM High Performance On Demand Solutions. "It is a key element of the evolution to a New Enterprise Data Center, and a powerful tool for efficient operations, especially in growth economies."

    The "Cloud" is not only getting attention at the facility level, but at the OS level several new players in the operating system (OS) market have evolved mainly to compete in the niche cloud market.

    Recently VMware announced its new Virtual Data Center Operating System (VDC-OS) and now Google who owns the internet browsing market sees an opportunity to compete against Microsoft by offering its own OS or let us say a web browser that will be fully compatible to handle the future of cloud computing.

    If the portal to the cloud is through a web browser then you bet that Google is not going to allow Microsoft to jump into their turf. Therefore, Google has stepped up and has announced its "Operating System" called Chrome.

    Calling Chrome an operating system may be stretching it a bit, but it is directly designed for the Cloud computing future and will likely become the cloud operating environment.
    Finally, Intel and Oracle are teaming up to implement standards of security and efficiency, and for the overall improvement of cloud-based computing. The two companies are seeing the potential of cloud computing's professional use, and therefore are proceeding to provide a measure of standards and security to the concept, with enterprise users in mind.

    The collaboration will be centered on marrying Intel's Virtualization Technology (VT) with Oracle grid computing solutions like the Oracle database, Real Application Clusters (RAC), Automatic Storage Management, Application Grid, Enterprise Manager, and VM.
    In helping with standardization, Intel and Oracle said that they will work with other companies to help develop provisioning and managing specifications for cloud-based systems, and would also help in developing standards for the portability of Virtual Machine images, like the Open Virtual Format.

    Keep looking up because the "cloud" is getting bigger.

    GigaSpaces as Alternative to GoogleAppEngine for the Enterprise

    An article explaining that GigaSpaces, a cloud computing vendor, offers several advantages that set it apart from platforms such as Google App Engine:
    - Supports the APIs enterprises use (Java, C++, .Net, etc.)
    - No vendor lock-in (rather than a strategy of locking users in, it provides general-purpose interfaces)
    - Supports both enterprise clouds and Internet clouds; one of the few companies supporting the internal clouds that have recently emerged
    - Enterprise ready (supports the capabilities enterprises need, such as failure handling, security, persistent memory, and so on)
    This is a company worth watching going forward. Note also that it is an Israeli company.

    I just came across an interesting post by Josh Heitzman who writes about his negative experience with Google App Engine, which led him to examine a list of Alternatives to Google App Engine. He points out GigaSpaces XAP as one of them:

    One particularly interesting EC2 third party provider is GigaSpaces with their XAP platform that provides in memory transactions backed up to a database. The in memory transactions appear to scale linearly across machines thus providing a distributed in-memory datastore that gets backed up to persistent storage. A lot of the docs reference Java, but the page returned by the aforementioned link states "…deploy applications that use Java, .Net, C++, or even scripting languages…" so after a cursory investigation it is not clear what aspects of their platform is only accessible via Java and which aspects are generally accessible. Bears more investigation.

    [See my comment to the post relating to the question about .Net and C++ support]

    Josh's post raises the question of what is Platform-as-a-Service (PaaS)?

    Platform-as-a-Service is a term used to describe a new set of development platforms that are typically accessible through the web. These platforms enable you to develop new applications easily, without the need to install any software or set up a development and deployment environment. A good example of this is Force.com from Salesforce.com. Other SaaS providers have similar platforms. It seems that the common motivation behind this trend is to give the SaaS provider a way to expose its internal framework to other partners and users and build an eco-system around its SaaS product. This led to the emergence of dedicated, proprietary platforms. Google App Engine is a similar initiative from Google to expose some of its own platform to external users.

    These platforms, as Josh experienced, were not designed as a general purpose enterprise platform, and therefore, it is not surprising that they lack many of the elements that you would normally expect from enterprise middleware, such as transaction support, security and standard APIs.

    Unlike such Internet based platforms, GigaSpaces XAP was primarily designed as an enterprise middleware platform. It is used in the most demanding mission-critical applications that require extreme scalability and low-latency. During the past year we have extended our middleware platform to the Internet cloud, starting with tight integration with Amazon EC2 and followed with partnerships and integration with leading players in the market, including GoGrid, Joyent, RightScale, CohesiveFT and others. We recently launched our cloud framework in private beta. It enables building enterprise applications on GigaSpaces XAP via the internet. In this way, you can run an application on a hosted GigaSpaces environment, without even downloading the GigaSpaces software.

    What makes GigaSpaces XAP an alternative to Google App Engine for the enterprise?

    • Support for existing enterprise applications:
    One of my previous posts discussed: Google App Engine - what about existing applications?

    In this post, I want to reiterate this point, which goes to the heart of one of the main differences between GigaSpaces XAP and most Internet based PaaS, including Google App Engine and Force.com. Many enterprise applications are already built in Java, .Net or C++. To support enterprise applications, you first need support for these core languages. GigaSpaces XAP not only supports these languages, but also enables efficient interoperability among them.

    • No vendor lock-in:
    While I would argue that lock-in is unavoidable at some level, and that every platform imposes some lock-in, I'd also argue that it is important to examine the nature of the lock-in, and how easy it is to migrate from one platform to another. With XAP, we invested heavily in making lock-in minimal through abstraction, aspects, support for standard APIs and more. As of 6.6, users can take existing web applications and deploy them on our platform without touching the application code. You can read more about this in: Can scaling be made seamless?
    To make things easy, we published a migration guide that shows step-by-step how you can take existing transactional JEE applications and deploy them on our platform (locally or on the public cloud). We measured the performance and scalability gain you get by running JEE applications on GigaSpaces XAP versus traditional JEE application servers. We ran the exact same application code on both platforms and measured at least 5 times better scaling efficiency, with the performance increase outlined here. We also published a new "pet clinic" demo that comes with source code, configuration and documentation, and can be used as a reference guide for running a standard JEE application with zero or minimal code changes. This reference implementation is available here.

    • Designed to support both local enterprise clouds and Internet clouds:
    Although GigaSpaces can now run entirely on a public cloud, such as EC2, it is clear that many enterprises are not ready to run on public clouds, but would rather run their apps on an on-premise, private cloud. Supporting this requires existing development tools used to develop enterprise applications. XAP enables use of common development frameworks (e.g., Eclipse, Maven, Ant in Java; and Visual Studio in .Net). You can write, test and debug your application locally and then deploy it on the cloud for testing or for production. You can decide at any point where you want to deploy your application, whether on a public Internet cloud, or on a private cloud in the corporate data center. You can also create a hybrid model that involves both a public and a private cloud simultaneously.

    • Enterprise-grade reliability and scalability:
    Most Internet-based PaaS impose a radical shift in the way applications are built, and specifically on scalability and reliability. Many of them leave you to deal with failure scenarios on your own, or alternatively, force you to accept the fact that you may lose data if you want to achieve scalability. They also require that you re-write your application if you want to make it scalable.
    Many of the assumptions the platforms operate under are not acceptable to enterprise-grade applications. GigaSpaces XAP was designed to meet the most demanding requirements for maintaining both scalability and 24/7 high-availability without losing any data and without compromising scalability or performance.

    This is only a partial list of differences, but I think that it makes clear how GigaSpaces is different than most Internet-based PaaS offerings, including Google App Engine.

    You can read more about our cloud offering on gigaspaces.com/cloud. If you are interested, try out our new Cloud Framework and see for yourself how easy it is to set up a production-ready cluster with load-balancing and scalability within minutes.

    EVE Online's server model - Massively

    An article about the server behind EVE, one of the world's largest MMO (Massively Multiplayer Online) games.
    EVE runs all of its programs and data as a single instance of the game universe, with no sharding at all, and the article explains how this has become the source of various problems.
    The article is heavy on game-specific jargon and a little hard to follow, but it is a useful reference when discussing scalability.


    Almost any time a discussion about EVE Online comes up, one way or another we end up talking about the server. EVE Online is unique among today's most popular MMOs for its single-server approach. While most MMOs deal with large number of users by starting up large numbers of separate servers with identical game universes, EVE maintains only a single copy of its game universe on a massive cluster of servers. CCP's decision to go with a server model that doesn't use any sharding or instancing whatsoever has had a major impact on in-game activities and how the game has developed.

    Server woes:
    Unfortunately for CCP, maintaining their vision of a single game universe has proven a lot more difficult and costly than anyone anticipated. Working with IBM, the EVE server cluster is maintained in London and is currently the largest supercomputer employed in the gaming industry. Even with this massive power behind the EVE universe, there are still problems as CCP tries to keep the server upgraded ahead of its ever-expanding playerbase.

    In this article, I discuss the unique gameplay that is possible thanks to EVE's server model, the problems the server currently faces and what CCP is planning to do about it.

    With a single game world, players are free to flock to whatever solar system they like even if the server that solar system is on can't handle the load. Popular trade hub Jita and several popular mission-running systems such as Dodixie suffer badly from lag at peak play times on the weekend. Massive fleet battles over important territory often suffer from the same crippling lag, turning what should be an amazing sci-fi space battle into a poor slideshow.

    Server structure:
    Each of EVE's 5000+ star systems is loaded as a separate process onto any one of hundreds of IBM blade servers, with some high-load systems being given a server all to themselves and many low-load systems being combined and run on servers together. These "SOL Servers" are tied into EVE's main database server where changes to the game take place (where the magic happens).

    Since players need to move between solar systems, they are connected to proxy servers which keep track of which SOL server the player is on. It's an ambitious system but has worked well for over five years with constant upgrades going on in the background to keep up with the increasing number of players entering EVE daily.
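    As a rough illustration of that structure, the sketch below models solar systems pinned to SOL nodes, with a proxy lookup that routes a player's traffic to the node hosting their current system. This is a heavily simplified toy for discussion, not CCP's actual architecture or code.

```python
# Toy model of the described topology: each solar system runs as a process
# on one SOL node; proxies track which node hosts the system a player is in.
from collections import defaultdict

class Cluster:
    def __init__(self):
        self.system_to_node = {}           # solar system -> SOL node
        self.node_load = defaultdict(int)  # crude load metric per node
        self.player_location = {}          # player -> solar system

    def place_system(self, system, shared_nodes, dedicated=False):
        """Give a high-load system its own node, or pack it onto the least loaded one."""
        if dedicated:
            node = f"sol-dedicated-{len(self.node_load)}"
        else:
            node = min(shared_nodes, key=lambda n: self.node_load[n])
        self.system_to_node[system] = node
        self.node_load[node] += 1
        return node

    def route(self, player):
        """Proxy lookup: which SOL node should this player's traffic go to?"""
        system = self.player_location[player]
        return self.system_to_node[system]

cluster = Cluster()
shared = ["sol-001", "sol-002"]
cluster.place_system("Jita", shared, dedicated=True)  # trade hub gets its own node
cluster.place_system("Dodixie", shared)
cluster.place_system("Hek", shared)
cluster.player_location["pilot42"] = "Jita"
print(cluster.route("pilot42"))
```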

    Effect on PvP:
    You could be forgiven for thinking that an MMO's server model doesn't affect its gameplay significantly but EVE Online has proven this wrong for five years running. Putting all players together in one server drastically increases the opportunity for PvP. Instead of the MMO norm of less than 5,000 potential players for you to interact with and barely 1000 online at peak hours, EVE's server houses over 300,000 players with a peak concurrent user record of over 40,000. Additionally, since there's only one server for all players, there's no option to sign up to a non-pvp version of the game. This puts all players in the same world with the same rules whether they like it or not. If all you want to do is trade, mine and run missions you're just as vulnerable to PvP as everyone else and that's a major factor in defining the harsh feel of the EVE universe.

    If EVE did offer a non-PvP server option, roles such as the pirate or corporate spy wouldn't really be possible any more because most potential targets would be playing on the non-pvp server. The players on the non-pvp server would also suffer from having a duller, less challenging game experience. We'd have one server full of hunters with no prey and one server full of prey with no excitement to their game.

    Ultima Online experienced this issue in the Renaissance expansion when they released Trammel, a server where non-consensual PvP was no longer possible. With all the cut-throat villains separated from the general population, the villains had nothing to do and the remaining players lost their opportunity to be heroes.

    Territorial conflict:
    The lack of instancing in EVE Online's game universe has had an even more profound impact on PvP than the lack of a non-PvP server. When a solar system is depleted of resources, is becoming overcrowded or is being camped by pirates, there is no second instance of the system to switch to. The ability to pursue attackers from system to system successfully or to lock down a star system and prevent your enemy from passing through allows for piracy and very real territorial control that just isn't possible with another server model. Conflict over resources and space arises as a natural consequence of gameplay and not from a developer-defined game mechanic. Real player alliances are forged and broken every week in EVE with complex politics behind them.

    Economic impact:
    EVE is often lauded for its realistic player-driven economy and real working markets, but neither would function well on a sharded server. Throwing all of the players together in one place forces the markets to behave largely according to the rules of supply and demand. Without enough players driving both sides of the supply and demand curve, a single player could distort the global market very easily and for a very long time. This has been done before in games like World of Warcraft to manipulate prices for a profit; it worked because, with so few players on each server, one particularly rich player's effect on the market can be proportionally massive.

    In EVE's safer systems, even major price manipulations tend to be balanced out by other players in a matter of hours, making price manipulation in trading hubs a very expensive and risky venture. It's said that the number of players in EVE caused the markets to hit critical mass long ago, reaching a point where high demand is almost always met by players with adequate supply within reasonable time-frames. As a result, the game's market hubs are always stocked full of whatever you might need.

    Upgrades:
    CCP recently encountered a problem they hadn't seen since late 2005: certain server nodes were running out of memory, filling up with legitimate user data and crashing. Their response was a controversial change introducing player limits for star systems under heavy load. Although this was scaled back to affect only the trade hub Jita for now, it highlights a hardware limitation that CCP is meeting head-on with another round of server upgrades. The current server hardware uses impressive processors and advanced solid-state RAMSAN disks with the fast access speeds and large storage capacity that EVE's servers require. The bottleneck at the moment is getting data from one processor or RAM disk to another, and this is where their latest project comes in.

    CCP aims to link the processors and RAM drives of every SOL server together with high-speed, low-latency InfiniBand technology, allowing data transfer at rates of several gigabytes per second. This will allow any process that can be threaded to be split off and run on a processor that isn't heavily used at runtime, which should massively improve the cluster's load-balancing ability. The InfiniBand project poses a huge task for EVE's programmers, who are in the unenviable position of having to rewrite large portions of the core server code. If all goes well with their internal InfiniBand tests, these major changes in server architecture could eventually spell the end of laggy fleet battles and node crashes.
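
    In spirit, the plan amounts to treating the whole cluster as one pool of processors and handing threadable work to whichever node currently has spare capacity. A toy sketch of that scheduling idea (illustrative only, not CCP's implementation):

        # Toy illustration of the load-balancing idea: split threadable work off a
        # busy node and run it wherever spare capacity exists. Real InfiniBand RDMA
        # transfers and CCP's server internals are far more involved than this.
        import heapq

        class Node:
            def __init__(self, name, load=0.0):
                self.name, self.load = name, load

            def __lt__(self, other):               # lets heapq order nodes by load
                return self.load < other.load

        def dispatch(tasks, nodes):
            """Greedily assign each task's cost to the currently least-loaded node."""
            heap = list(nodes)
            heapq.heapify(heap)
            placement = []
            for task, cost in tasks:
                node = heapq.heappop(heap)         # least-loaded node right now
                node.load += cost
                placement.append((task, node.name))
                heapq.heappush(heap, node)
            return placement

        nodes = [Node("sol-01", 0.9), Node("sol-02", 0.2), Node("sol-03", 0.4)]
        tasks = [("fleet-physics", 0.3), ("market-update", 0.1), ("npc-ai", 0.2)]
        for task, node_name in dispatch(tasks, nodes):
            print(f"{task} -> {node_name}")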

    Summary:
    Given the problems CCP constantly faces with its server and the cost of its upkeep, other developers seem reluctant to take EVE Online's server model on board. However, this model affects a lot more than running costs and complexity, and it may be practically required for any successful next-generation PvP-based MMO. It makes avenues of gameplay such as meaningful politics, piracy and real territorial warfare not just a possibility but an unavoidable consequence of group play. Could the single-server approach become commonplace in the next generation of PvP-based MMOs? I, for one, hope it does.

    Ingram Micro Seismic Shakes Hands With ConnectWise

    Ingram Micro is one of the largest distributors in the US, and cloud computing services are growing in this industry as well. Ingram Micro offers a service called Seismic, and this article reports that through Seismic it delivers Professional Services Automation (PSA) solutions such as Autotask and ConnectWise, which automate the management of systems-engineering projects, as a SaaS business.

    When it comes to partnering in the managed services market, the Ingram Micro Seismic team doesn't appear to be playing favorites. In addition to working closely with Autotask, Ingram is now reaching across the aisle to work with ConnectWise — one of Autotask's top professional services automation (PSA) software rivals.

    According to a ConnectWise press release:

    With ConnectWise, solution providers subscribing to Ingram Micro's Seismic Remote Management and Monitoring (RMM) and Help Desk now have the ability to leverage "two-way" RMM integration. Incidents identified through Seismic RMM will create service tickets in ConnectWise, where they can be reviewed, updated and closed by solution provider employees.

    Since MSPmentor doesn't test software platforms, I can't say whether Ingram is working more closely with Autotask or ConnectWise. However, it's clear that Ingram is committed to letting customers choose the professional services automation (PSA) software that best fits their businesses.
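
    The hand-off the press release describes is simple to sketch. The following is purely illustrative: the class and method names are invented and do not reflect the real Seismic or ConnectWise APIs.

        # Purely illustrative: an RMM incident creates a PSA ticket, and closing the
        # ticket flows status back. These classes do not reflect any real API.

        class PsaTicketSystem:
            def __init__(self):
                self.tickets = {}
                self.next_id = 1

            def create_ticket(self, summary, device):
                ticket_id = self.next_id
                self.tickets[ticket_id] = {"summary": summary, "device": device, "status": "open"}
                self.next_id += 1
                return ticket_id

            def close_ticket(self, ticket_id, resolution):
                self.tickets[ticket_id].update(status="closed", resolution=resolution)
                return self.tickets[ticket_id]

        class RmmMonitor:
            def __init__(self, psa):
                self.psa = psa
                self.open_incidents = {}           # incident id -> PSA ticket id

            def raise_incident(self, incident_id, device, message):
                self.open_incidents[incident_id] = self.psa.create_ticket(message, device)

            def ticket_closed(self, incident_id, resolution):
                ticket_id = self.open_incidents.pop(incident_id)
                return self.psa.close_ticket(ticket_id, resolution)

        psa = PsaTicketSystem()
        rmm = RmmMonitor(psa)
        rmm.raise_incident("inc-42", "server-07", "Disk usage above 90%")
        print(rmm.ticket_closed("inc-42", "Old log files purged"))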

    20 Cloud computing startups - analysis

    A comparison of 20 representative cloud computing companies.
    They are first classified by whether they run their own hosting environment or build their cloud computing environment on top of another company's hosting, and then further distinguished by whether they offer solutions for building a cloud computing environment inside the enterprise (an internal cloud).
    It is striking that internal cloud computing solution vendors are on the rise; this looks like an area poised for growth.

    I was pointed to John Foley's InformationWeek article from earlier this week, "20 Cloud Computing Startups You Should Know." Aside from the fact that I could only count 19, it was a great survey of what types of companies, ideas and ventures are getting on the bandwagon.

    The quick-and-dirty chart above is mine; what I found so interesting is that 8 of the players are building solutions on top of other clouds (like Amazon's EC2 and S3) while another 7 are investing in essentially building hosted services.

    However, only 4 (ok, maybe 4-1/2) are thinking/trying to bring "cloud" technologies and economics to the enterprise's own internal IT. This certainly attests to the difficulty in reworking IT's entrenched technologies, and building a newer abstracted model of how IT should operate.

    Even though Cassatt wasn't mentioned in the survey (maybe we were supposed to be #20) we also play in the "build-an-internal-cloud-with-what-you-have" space.

    This model -- that of an "internal cloud" architecture -- will ultimately result in more efficient data centers (these architectures are highly efficient) and ones that will be able to "reach out" for additional resources (if-and-when needed) in an easier manner than today's IT.

    I'd look to see more existing enterprises considering building their own cloud architectures (after all, they've already invested lots of $$ in infrastructure) while startups and smaller shops opt for the products that leverage existing (external) cloud resources.

    BTW, John also just posted a very nice blog of a "reality check" to curb some of the cloud computing hype.

    Wednesday, October 29, 2008

    Decades of experience with Clouds: Telcos

    A comment on a presentation at an SDForum event that introduced the telecom industry as an example of a business that has been running cloud computing for decades.
    The use case of building the in-house phone system on a PBX (an internal cloud) while using the public network for external communication (an external cloud) may be one model that hints at the direction cloud computing will take.
    It makes the need for internal clouds in particular easy to understand.
    At yesterday's SDForum meeting on cloud computing, a panelist pointed out that we've been living with (a form of) cloud computing for decades. It's called Telephony.

    On reflection, the telcos do give us an interesting model for what PaaS *could* be like, and a metaphor for types of cloud services. To wit:
    • As users, we don't know (or care) where the carrier's gear is, or what platform it's based on so long as our calls are connected and our voicemail is available.
    • There isn't technical "lock-in" as we think of it. Your address (phone number) is now portable between carriers, and the cloud "API" is the dial tone plus those DTMF touch-tones.
    • I can "mash-up" applications such as Voicemail from one company, conference calling from another, and FAX transmission from a third.
    • There are even forms of "internal clouds" in this model -- they're called PBXs (private branch exchanges), which are nothing more than "internal" phone switches for your business.
    This last point interests me the most - that enterprises have economic and operational needs (maybe even security needs!) to manage their own internal phone systems. But inevitably, workers may have to use the public phone system, too.

    Similarly, many enterprises will need to retain 100% control of certain computing processes and never outsource them to a cloud. They'll certainly be attracted to the economics that external computing resources offer, but will eventually build (or acquire) a similar *internal* capability. Just wait.

    Tuesday, October 28, 2008

    Would you buy an Amazon EC2 appliance?

    An interesting article about internal cloud computing.
    A strategy of cutting a company off from its existing resources and migrating its assets to cloud computing all at once is not realistic; the natural approach is to apply a cloud computing environment as added value while continuing to make use of existing assets.
    One form this takes is the concept of the internal cloud. This article speculates that Amazon Web Services may be planning an appliance that provides exactly that kind of capability.
    Whether or not that is Amazon's actual intent, several vendors have already appeared offering solutions that create a cloud computing layer inside the enterprise IT environment to raise the utilization of CPU, storage, network and other I/O resources.
    The rationale for deliberately choosing an appliance from Amazon Web Services is that it should be possible to build an appliance in which systems are basically constructed inside the enterprise IT environment, but whenever some resource exceeds its capacity, AWS's online services automatically step in to supplement it.
    The idea that this supplementation is not limited to storage but can dynamically scale CPU processing power and other components as well makes this a very interesting approach to applying cloud computing.
    Before you scream "a what?" I'm only posing this as a thought experiment...

    But the concept was recently put forth as an illustration at last week's SDForum by an attendee. I kind of thought about it for a few minutes, and realized that the concept isn't as crazy as it first sounds. In fact, it implies major changes for IT are on the way.

    First of all, the idea of a SaaS provider or web service provider creating a physical appliance for the enterprise is not new. There's the Google search appliance, but I also expect providers like Salesforce.com to do the same in the near future. (There are some very large enterprises that want to be 100% sure that their critical/sensitive data is resident behind their firewall, and they want to bring the value of their SaaS provider inside.)

    So I thought: what would I expect an Amazon EC2/S3 appliance to do? Similar to Google's appliance providing internal search, I'd expect an Amazon appliance to create an elastic, resilient set of compute and storage services inside a company, one that could support any and all applications no matter what the user demand. It would also have cost transparency, i.e. I'd know exactly what it costs to operate each CPU (or virtual CPU) on an hourly basis. The same goes for storage.

    This approach would have various advantages (plus a small limitation) over how IT is operated today. The limitation is that its "elasticity" would be bounded by the poolable compute horsepower within the enterprise. But the advantages would be huge -- who wouldn't like a cost basis of ~$0.10/CPU-hour from their existing resources? Who wouldn't like to shrug off traditional capacity planning? And so on. AND they'd be able to maintain all of their existing compliance and security architectures, since they'd still be using their own behind-the-firewall facilities.

    Does it still sound crazy so far?

    NOW what if Amazon were to take one little extra step. Remember that limitation above -- the what-if-I-run-out-of-compute-resources issue? What if Amazon allowed the appliance user to permit reaching-out to Amazon's public EC2/S3? Say you hit peak compute demand. Say you had a large power outage or a series of hardware failures. Say you were rolling-out a new app and you couldn't accurately forecast demand. This feature would be valuable to you because you'd have practically infinite "overflow" -- and it would be valuable to Amazon since it would drive incremental business to their public infrastructure.
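
    That overflow behaviour is easy to express in outline: serve work from the internal pool while capacity remains, and only then spill to the public cloud. The sketch below is hypothetical; the pool names, capacities and prices are made up, and no such Amazon appliance API exists.

        # Hypothetical sketch of the "overflow" idea: run work on the internal pool
        # until it is full, then burst the remainder to a public cloud. The names,
        # capacities and prices are made up; this is not a real Amazon API.

        class CapacityPool:
            def __init__(self, name, capacity, cost_per_cpu_hour):
                self.name = name
                self.capacity = capacity           # CPU-hours still available
                self.cost = cost_per_cpu_hour

            def take(self, cpu_hours):
                granted = min(cpu_hours, self.capacity)
                self.capacity -= granted
                return granted

        def place_workload(cpu_hours, internal, public):
            """Prefer the internal pool; burst whatever does not fit to the public cloud."""
            plan = []
            used_internal = internal.take(cpu_hours)
            if used_internal:
                plan.append((internal.name, used_internal, used_internal * internal.cost))
            overflow = cpu_hours - used_internal
            if overflow:
                used_public = public.take(overflow)
                plan.append((public.name, used_public, used_public * public.cost))
            return plan

        internal = CapacityPool("internal-appliance", capacity=800, cost_per_cpu_hour=0.10)
        public = CapacityPool("public-ec2", capacity=float("inf"), cost_per_cpu_hour=0.10)

        for pool, hours, cost in place_workload(1000, internal, public):
            print(f"{pool}: {hours} CPU-hours (~${cost:.2f})")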

    To be honest, I have no idea what Amazon is planning. But I DO know that the concept of commercially-available software/hardware to create internal "clouds" is happening today. And not just in the "special case" of VMware's "VDC-OS", but in a more generalized approach.

    Companies like Cassatt can -- today -- take an existing compute environment, and transform its operation so that it acts like an EC2 (an "internal cloud"). It responds to demand changes, it works around failures, and it optimizes how resources are pooled. You don't have to virtualize applications if you don't want to; and if you do, you can use whatever VM technology you prefer. It's all managed as an "elastic" pool for you. And metered, too.

    To be sure, others are developing similar approaches to transforming how *internal* IT is managed. But if you are one of those who believes in the value of a "cloud" but wouldn't use one, maybe you should think again.

    IT Cloud Services Forecast - 2008, 2012: A Key Driver of New Growth

    IDC's market forecast: what cloud computing will look like in 2012
     
    At present, cloud computing accounts for only about 4% of total IT budgets. It is forecast to keep growing at an average of 27% per year and reach 9% by 2012. A 9% share may not sound like much of the market, but it is worth noting that 27% annual growth is more than five times the growth rate of traditional IT.
     
    The ease with which these services can be introduced into enterprise infrastructure is cited as a major reason for this rapid growth.
     
    The forecast also says that business applications (SaaS) account for more than half of the cloud computing market and that this trend will continue, largely because applications are considered to deliver the greatest cost-effectiveness. In addition, the SMB market is expected to adopt SaaS in earnest and drive significant market growth.
     
     


    In our previous posts on the IT industry's shift to the Cloud Services era, we've provided definitions, market context, user adoption trends, and user views about cloud services benefits, challenges and suppliers.  

    In this post, we offer our initial forecast of IT cloud services delivery across five major IT product segments that, in aggregate, represent almost two-thirds of enterprise IT spending (excluding PCs).  This forecast sizes IT suppliers' opportunity to deliver their own IT offerings to customers via the cloud services model ("opportunity #1", as described in our recent post Framing the Cloud Opportunity for IT Suppliers).

    The development of this forecast involved a team of over 30 IDC analysts, led by Robert Mahowald (Business Applications/SaaS), Tim Grieser (Infrastructure Software), Steve Hendrick (Application Development & Deployment Software), Matt Eastwood (Servers) and Rick Villars (Storage), with additional contributions from David Tapper (Outsourcing/Hosted Services) and John Gantz (Global Research).  
     

    An Opportunity In Its Infancy - But, Even Conservatively, Poised to Drive Big Marginal Growth

    Of the $383 billion customers will spend this year within the five major IT segments noted above, $16.2 billion - or a mere 4% - will be consumed as cloud services.  By 2012 - based on a conservative forecasting approach (see "fine print" below) - customer spending on IT cloud services will grow almost threefold, to $42 billion, accounting for 9% of customer spending.
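
    As a rough sanity check, those figures hang together arithmetically. In the snippet below, the $16.2 billion base, the $383 billion total, the 27% CAGR and the 9% share come from the post; the implied 2012 segment total is simple arithmetic, not an IDC number.

        # Sanity check of the figures quoted above.
        cloud_2008 = 16.2      # $B consumed as cloud services in 2008 (from the post)
        total_2008 = 383.0     # $B total spend across the five segments (from the post)
        cagr = 0.27            # forecast compound annual growth rate (from the post)
        years = 4              # 2008 -> 2012

        print(f"2008 cloud share: {cloud_2008 / total_2008:.1%}")            # ~4%
        cloud_2012 = cloud_2008 * (1 + cagr) ** years
        print(f"2012 cloud spend: ${cloud_2012:.1f}B")                       # ~$42B
        print(f"implied 2012 total at a 9% share: ${cloud_2012 / 0.09:.0f}B")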

    What does that mean?  On one level, one could argue that - in spite of all the buzz about Cloud Computing and Cloud Services - this model will not even crack 10% of IT spending four years from now. And therefore, one could reasonably ask: why all the fuss?

    One reason IT suppliers are sharpening their focus on the "cloud" model is its growth trajectory, which - at 27% CAGR - is over five times the growth rate of the traditional, on-premise IT delivery/consumption model.  As noted in our recent user survey, this rapid growth is being driven by the ease and speed with which users can adopt these offerings, as well as the cloud model's economic benefits (for users and suppliers alike) - which will have even greater resonance in the current economic crisis.

    Even more striking than this high growth rate is the contribution cloud offerings' growth will soon make to the IT market's overall growth.  By 2012 - even at only 9% of user spending - cloud services growth will account for fully 25% of the industry's year-over-year growth in these five major segments.  In 2013, if the same growth trajectories continue, IT cloud services growth will generate about one-third of the industry's net new growth in these segments.

    The implication for IT suppliers is clear: during the next several years, IT suppliers must position themselves as leaders in IT cloud services or forfeit an ever-expanding portion of the industry's growth.  Cloud services' accelerating impact on IT industry growth is consistent with the key insight from our cloud services user survey data: that IT cloud services are at a "crossing the chasm" moment, the point at which suppliers must step up their commitment to the new technology or model, and the point at which failure to do so starts to exact harsher penalties on supplier performance.
     

    Applications Are Leading the Way - and Will Continue To Do So

    Among the five enterprise IT segments we analyzed, Business Applications dominate cloud services spending, both in 2008 (57%) and in 2012 (52%). 

    This should not be very surprising: Software-as-a-Service (SaaS) is the most mature and widely deployed form of IT cloud services, in contrast to the more nascent cloud infrastructure offerings.  And Business Applications - in which, for this forecast, we include Collaboration offerings - have consistently been the largest portion of the SaaS market.  

    Further, as we noted in IT Cloud Services User Survey, pt.1: Crossing the Chasm, Geoffrey Moore identifies applications (vs. component technologies) as the most successful offerings for crossing the chasm: they appeal to the line-of-business constituencies outside the IT department, who are most frustrated by the old model, and are most open to embracing new approaches.  

    Another reason for the dominance of applications in IT cloud services spending is the role that SMBs will play in this IT industry transformation.  As we've noted many times, the opportunity to open up under-served SMB segments, in both developed and emerging markets, is the primary motivation driving many IT suppliers toward the cloud model.  And SMBs' IT investments are driven - much more than large enterprise investments - by applications.

    The implication of the application-centricity of the current and near-term IT cloud services market is also clear: the most direct path to becoming a successful player in the cloud is to have strong links to the application world.  This means, for example, becoming a SaaS provider, becoming a SaaS platform provider, or - for those in non-application parts of the IT market - becoming a key partner of SaaS application or platform players.  (More on this in later posts.)

    One other item of note in the IT cloud services spending shown above is the rapid growth in cloud storage.  Our storage analysts believe - and I concur - that the explosive growth of information in the cloud (and outside it) will, more than in any other infrastructure category, drive direct end user demand for storage in the cloud.

     

    "Fine Print":  Important Notes About This Forecast

    Forecasts about emerging models and offerings are rarely perfect predictions of the future.  Here is some "under the covers" information that will be useful in thinking more deeply about this forecast and its implications:

    • An  "End-User-Centric" View: These figures represent enterprise end-user demand for IT products and solutions, through both on-premise and cloud services models. By "end users" we mean businesses that consume these IT products and solutions either for their internal use, or as an "under-the-covers" ingredient within their offerings to the marketplace. Excluded from this forecast is spending by cloud services providers who are simply reselling the product/solution, without value-add other than the delivery model transform; we consider such services providers as resellers - the true "end-users" are their customers. In contrast, cloud services providers who are not explicitly reselling the forecasted IT product/solution as a service, but are using it as a supporting ingredient within their offerings, are considered end-users (e.g., Salesforce.com, a cloud services provider of CRM software, is counted as an end-user within the storage, server, and other IT segments outside of its own primary product/service segments [business applications, application development/deployment]).
    • A Conservative Approach and Track Record: This forecast is on the conservative end of the spectrum. Our goal, as usual, is to be "anti-hype" - to recognize and highlight the disruptive trends in the market, but to avoid a forecast "bubble". That was our track record in forecasting Internet adoption in the late 1990s, and our Internet forecasts have held up extremely well - through, and beyond, the Internet Bubble period. We have also had a conservative track record in forecasting the SaaS market, for which we have traditionally underestimated growth, and increased our forecast significantly each of the past several years. If you have a more aggressive view of IT Cloud Services adoption, the other end of the spectrum - a more aggressive forecast - could well be 1.5-2 times the spending level in the forecast above.
    • Watch "Conditions On the Ground": The ramp-up scenario for IT cloud services is very fluid - the forecast will be greatly impacted by: 1) major vendors' degree of aggressiveness in developing and promoting cloud offerings, 2) the rate at which partner ecosystems morph to adapt to - and drive - the cloud model, and 3) macroeconomic factors - such as impact of the current global economic crisis.  In our view, while the economic crisis could negatively impact the growth of this market, it is more likely that it will accelerate the roll out and adoption of Cloud IT services, because of the model's greater affordability (vs. traditional IT offerings), and IT's critical role in supporting much-needed innovation and economic growth.
    • IT Cloud Services Adoption Will Drive (but Shift) On-Premise Demand: It is important to note that while end-users certainly consider "on-premise" vs. "cloud services" as alternative (and competitive) options for specific solutions, the cloud services delivery model for those solutions will not, for the most part, subtract from on-premise IT demand. In fact, end user IT cloud services demand will actually drive demand for on-premise IT products and solutions - but it will shift that demand to cloud services providers. This makes it extremely important for suppliers of IT products and solutions to develop detailed understanding of the changing routes to market, including the role of cloud services providers, both as end-users and as a new and growing channel.
    • Some Definitional Details:  Here are the submarkets we included in each of the five major IT segments in the forecast:
      • Business Applications:  includes Collaborative applications (such as Messaging, Conferencing and Team collaboration software), and Business applications (such as CRM, ERP, Financial, HCM, PLM and SCM).
      • Application Development & Deployment Software:  includes Application Development software, Application Lifecycle Management software, Enterprise Mashup & Portal software, Information Management & Data Integration software, and Middleware & Business Process Management software.
      • Systems Infrastructure Software: includes System and Network Management software, Security software, Storage Management software, and System software.
      • Storage: includes Disk Storage.
      • Servers: includes all classes of Servers.


    [The following IDC analysts contributed to this IT Cloud Services analysis and forecast: Michelle Bailey, Darren Bibby, Ray Boggs, Jean Bozman, Brian Burke, Chris Christiansen, Laura DuBois, Matt Eastwood, Mike Fauscette, John Gantz, Frank Gens, Al Gillen, Tim Grieser, Steve Hendrick, Martin Hingley, Mark Levitt, Robert Mahowald, Stephen Minton, Chris Morris, Henry Morris, Brad Nisbet, Melanie Posey, Dave Reinsel, Christina Richmond, Sandy Rogers, Jed Scaramella, Rona Shuchat, Will Stofega, David Tapper, Vernon Turner, Rick Villars, Janet Waxman, Melissa Webster.]

     

    IDC eXchange / Wed, 08 Oct 2008 19:54:49 GMT
