Tuesday, April 7, 2009

Facebook's growing infrastructure spend

Facebook is growing rapidly and appears set to spend 2009 scrambling to raise financing for infrastructure expansion.
  • User growth is striking not only in the US but also internationally.
  • Facebook already reaches 24% of Internet users across its top markets. Growth is especially fast in Europe, and construction of a European data center appears to be urgent.
  • As of August 2008, Facebook owned more than 10,000 servers. The breakdown is not clear, but the company has spent $68M on Rackable servers and has also purchased NetApp storage and Force10 networking gear.
 

Facebook's growing infrastructure spend


On Thursday BusinessWeek reported Facebook is seeking new financing for its data center operation growth in 2009. Facebook continues to add new members and their associated content at an extremely fast pace, with most new growth coming from international markets. Facebook needs to expand its abilities to serve these markets by bolstering current infrastructure offerings and cutting latency to its members through new international points of presence. In this post I will take a deeper look at Facebook's current computing infrastructure and related expenses and examine likely new areas of investment in 2009.

Facebook members

Facebook currently has over 160 million members in its top 30 markets. Facebook enjoys a 24% market penetration across all 30 countries, including complete domination in Chile and Turkey, where 76% and 66% of all Internet users are members of Facebook. Facebook member numbers are taken from Facebook.com; total Internet users for each country is as reported by the CIA World Factbook.

Country            Internet users    Facebook penetration
Chile              5,570,000         75.61%
Turkey             13,150,000        66.32%
Denmark            3,500,000         57.10%
Greece             2,540,000         50.69%
Australia          11,240,000        46.66%
Norway             3,800,000         45.52%
Venezuela          5,720,000         44.29%
United Kingdom     40,200,000        43.05%
Hong Kong          3,961,000         40.01%
Canada             28,000,000        39.71%
Belgium            5,220,000         39.10%
Argentina          9,309,000         36.40%
Colombia           12,100,000        36.40%
Singapore          3,105,000         32.26%
Switzerland        4,610,000         29.16%
France             31,295,000        28.40%
Sweden             7,000,000         27.68%
South Africa       5,100,000         27.26%
Italy              32,000,000        26.47%
United States      223,000,000       24.55%
Spain              19,690,000        20.95%
Philippines        5,300,000         19.08%
Indonesia          13,000,000        17.78%
Egypt              8,620,000         13.72%
Mexico             22,812,000        8.80%
Malaysia           15,868,000        7.21%
Germany            42,500,000        4.43%
India              80,000,000        1.95%
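
As a rough sanity check on the table above, member counts can be backed out by multiplying each country's Internet users by its penetration rate. A minimal sketch (the country selection is arbitrary and the arithmetic is mine, not the article's):

    # Back out approximate Facebook member counts from the table above:
    # members ~= internet_users * penetration.
    internet_users = {
        "Chile": 5_570_000,
        "United States": 223_000_000,
        "India": 80_000_000,
    }
    penetration = {"Chile": 0.7561, "United States": 0.2455, "India": 0.0195}

    for country, users in internet_users.items():
        members = users * penetration[country]
        print(f"{country}: roughly {members / 1e6:.1f} million members")
    # Chile: ~4.2M, United States: ~54.7M, India: ~1.6M

Summing the implied member counts across all of the rows lands a little under 160 million, consistent with the total quoted at the top of this section.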

Facebook's current infrastructure serves North America with minimal latency. Future expansion into Europe and Southeast Asia is likely as Facebook tries to grow its international audience.

Data centers

[Map: Facebook data center locations]

Facebook currently operates out of four data centers in the United States: three on the west coast and one on the east coast. Facebook leases at least 45,000 square feet of data center space.

Switch & Data's PAIX at 529 Bryant Street in Palo Alto is just around the corner from the Facebook offices and a long-time home to Facebook servers. It's unclear how much of the 100,000 square-foot, liquid-cooled data center is currently occupied by Facebook.

Facebook has been with Terremark's NAP West in Santa Clara since November 2005. Facebook originally leased 10,000 square feet but may have grown larger over the years. Facebook is still listed as a Terremark customer in Santa Clara but the company might be consolidating its operations into its new local data center.

Facebook geo-distributed its web operations in 2008 with DuPont Fabros' ACC4 in Ashburn, Virginia. Facebook leased 10,000 square feet in 2007 and occupied the space in 2008 after extensive reworking of the Facebook backend. Facebook shares ACC4 with MySpace, Google, and other competitors.

In January 2009 Facebook moved into its first exclusive data center, Digital Realty Trust's 1201 Comstock Street in Santa Clara. The 24,000 square feet of data center space operates at a PUE of 1.35, a respectable figure compared with the 1.22 reported by Google and Microsoft. Facebook leases the center from Digital Realty as its sole occupant.
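
For context, PUE (power usage effectiveness) is total facility power divided by the power delivered to IT equipment, so the gap between 1.35 and 1.22 shows up directly as overhead spent on cooling and power distribution. A quick sketch, assuming a hypothetical 1 MW IT load (the load figure is mine, not a reported number):

    # PUE = total facility power / IT equipment power.
    # The 1 MW IT load below is an assumed figure for illustration only.
    it_load_kw = 1_000

    for pue in (1.35, 1.22):
        total_kw = it_load_kw * pue
        overhead_kw = total_kw - it_load_kw
        print(f"PUE {pue}: {total_kw:.0f} kW drawn, {overhead_kw:.0f} kW of overhead per MW of servers")
    # PUE 1.35 -> 350 kW of overhead; PUE 1.22 -> 220 kW, per megawatt of IT load.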

Facebook is rumored to be adding an additional 20,000 square feet of data center space in Ashburn, Virginia in DuPont Fabros' ACC5. Facebook is expected to move into ACC5 in September 2009 and place new servers online by the end of the year.

Facebook recently announced an international headquarters in Dublin, Ireland that will include "operations support" across Europe, the Middle East, and Africa. A European data center is a likely expansion point for Facebook as they try to solidify their European offerings.

Server loans

Facebook loan growth

Facebook paid for part of its infrastructure expansion through specialized debt financing from TriplePoint Capital. Facebook drew down $30 million in 2007 followed by another $60 million in 2008. BusinessWeek reports Facebook is currently trying to secure as much as $100 million in debt financing for its next round of growth.

Debt financing against physical assets such as servers and office buildings offers lower rates than a traditional venture capital round. Facebook's server expenditures have a recoverable resale value mapped over a depreciating lifespan, unlike direct and unrecoverable payments to employees and service providers. Lenders such as TriplePoint are a specialized type of real estate investor, a market carrying huge risk premiums at the moment. Facebook's $100 million debt requirement is also bigger than TriplePoint's typical investments, placing Facebook's new expansion beyond that firm's investment strategy during a time of real-estate investment turmoil. Facebook needed to look to other financing operations for bigger infrastructure loans, an expected move for the growing company.
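
A minimal straight-line depreciation sketch illustrates why hardware works as collateral; the purchase price echoes the Rackable figure discussed below, while the lifespan and salvage value are assumptions of mine rather than Facebook's actual terms:

    # Straight-line depreciation of a server fleet used as loan collateral.
    # Lifespan and salvage value are illustrative assumptions, not Facebook's terms.
    purchase_price = 68_000_000   # the reported Rackable spend
    salvage_fraction = 0.10       # assumed resale value at end of life
    lifespan_years = 3            # assumed useful life

    annual_depreciation = purchase_price * (1 - salvage_fraction) / lifespan_years
    for year in range(lifespan_years + 1):
        book_value = purchase_price - annual_depreciation * year
        print(f"Year {year}: collateral worth ${book_value / 1e6:.1f}M")
    # The lender's exposure is always backed by hardware that retains some resale value.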

Facebook spent $68 million on Rackable servers in 2007 and early 2008, likely as a result of their Virginia data center build-out. Facebook is also rumored to be a large consumer of premium-priced proprietary hardware such as NetApp storage appliances and Force10 networking gear.

Facebook's debt financing agreement with TriplePoint Capital expired a few months ago, leading the company to seek new sources of financing for its new Santa Clara data center and other expansion plans. Facebook is in discussions with Bank of America for additional loans against this capital expenditure according to BusinessWeek.

How many servers?

Facebook had over 10,000 servers as of August 2008 according to Wall Street Journal coverage of a presentation by Jonathan Heiliger, Facebook's VP of Technical Operations. Facebook signed an infrastructure solutions agreement with Intel in July 2008 to optimally deploy "thousands" of servers based on Intel Xeon 5400 4-core processors in the next year.

  • memcached: ~800 memcached servers supplying over 28 TB of memory.
  • Hadoop: ~600 servers with 8 CPUs and 4 TB of storage per server. That's 4,800 cores and about 2 PB of raw storage!
  • Storage: Facebook adds more than 850 million photos and 7 million videos to its data store each month. That's a lot of Filers.
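
Taken together, those counts imply roughly the following per-server and aggregate figures (a back-of-the-envelope tally of the numbers above; the per-server memcached RAM is derived, not stated):

    # Back-of-the-envelope tally of the reported server counts.
    memcached_servers = 800
    memcached_total_tb = 28
    ram_per_server_gb = memcached_total_tb * 1024 / memcached_servers
    print(f"memcached: ~{ram_per_server_gb:.0f} GB of RAM per server")        # ~36 GB

    hadoop_servers = 600
    cores = hadoop_servers * 8                    # 8 CPUs per server
    raw_storage_pb = hadoop_servers * 4 / 1000    # 4 TB per server
    print(f"Hadoop: {cores} cores, ~{raw_storage_pb:.1f} PB of raw storage")  # 4,800 cores, ~2.4 PB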

Facebook uses Akamai and other CDN providers to serve static content to visitors around the world. It's an expensive service offering not covered by Facebook's server debt financing.

Summary

Facebook faces difficult infrastructure challenges as the company tries to keep up with explosive growth around the world. Current shocks in the real estate investment market have made property financing difficult for all companies, including Facebook. New infrastructure coming online this year should lower Facebook's total operating cost per server, thanks to efficiencies in the cost of power and a decline in leasing price per square foot as Facebook buys in bulk. I expect new deals with foreign governments such as Ireland's to lead to further expansion, heavily influenced by the ex-Googlers on staff who have paved this path before.

Facebook is a privately held company, offering limited insight into its expenses and other operations. The company seems to be repricing its server debt financing each year and has just crossed into the capital-lending realm of big banks, which are not easily able to take big risks in their property portfolios at the moment.

Sun Cloud Will Live at the Vegas SuperNAP

Sun Microsystems has not yet revealed the details of its new cloud computing strategy, but it has announced that the service will be hosted at Switch Communications' SuperNAP in Las Vegas.
 

  • Sun Cloud Will Live at the Vegas SuperNAP

    March 10th, 2009 : Rich Miller

    Sun Microsystems isn't yet saying what its new cloud computing service will look like, but at least we know where it's going to live. Sun will host its new cloud offering in the SuperNAP, Switch Communications' new mega-data center in Las Vegas, according to Sun CTO Greg Papadopoulos.

    "We now have thousands of cores at the SuperNAP," Papadopoulos said in this morning's keynote address at AFCOM's Data Center World conference at the Paris Hotel in Las Vegas. "It's a really fascinating facility."

    The hosting arrangement extends the relationship between Sun and Switch Communications, which already hosts Sun's Network.com operation in a high-density section of SwitchNAP 4 in Las Vegas known as a T-SCIF (short for Thermal Separate Compartment in Facility). The T-SCIF uses containment systems to fully separate the hot and cold aisles, allowing the Network.com racks to run at 1,500 watts per square foot. See this video for a closer look at the Network.com T-SCIF installation at Switch.

  • SAS schemes $70m biz analytics cloud

    SAS, the long-established maker of business analytics software, has announced that it will expand its own data centers, build a cloud computing environment, and formally launch a SaaS business.

    SAS has already been providing OnDemand application services to some customers out of its own data centers; with demand for those services growing at more than 30% a year, the company decided to make it an official line of business.

    Because these applications often handle sensitive corporate data, many customers have expressed concern about using a public cloud, and the fact that delivering the service on SAS's own infrastructure is effectively the only acceptable option was another factor behind the decision to run it in-house.
     

    SAS schemes $70m biz analytics cloud

    Business-analytics software developer SAS Institute is taking to the clouds. But rather than stake the future of its hosted-application business on existing clouds such as Amazon AWS, SAS has decided to shell out $70m (£48m) to build its own cloud-computing facility.

    The Cary, North Carolina software house has seen exponential growth since its founding in 1976. Although privately held and thus not obligated to provide any financials, the company posted sales of $2.26bn (£1.56bn) in 2008 and invested a staggering 22 per cent of revenue in research and development.

    The only times sales at SAS slowed were in the dot-bomb years of 2001 and 2002 - and even then the company managed to grow a tiny bit. With over 11,000 employees, SAS is the world's largest privately held packaged-software provider, known for being simultaneously conservative and forward-thinking.

    Keith Collins, who led the creation of the SAS 9 business-analytics suite and who runs the company's research and development operations, also serves as the company's chief technology officer. According to Collins, SAS has a data center packed with Sun Sparc/Solaris servers and NetApp and EMC storage arrays that is used to host versions of its applications for customers. Collins says that SAS doesn't even advertise this hosted-application business, but customers found out about it anyway and business is growing "in excess of 30 per cent per year."

    The SAS OnDemand apps include applications to detect money laundering, to drive drug discovery (in the lab, not at the airport), and to perform marketing analysis on various fronts.

    SAS thinks it's on to something, but it wants to build a more cloud-like facility than its current Solaris farm. Collins says that for the past four years the development side of SAS has been a heavy user of server-virtualization tools from VMware, using the ESX Server hypervisor and related staging tools for development and test environments.

    "We expect the hardware to shift significantly over the next few years," he says, adding that SAS hasn't yet decided on the iron and software it will use to build its cloud. Collins did, however, hint strongly that it will use x64 servers with fast local storage backed up by a storage area network, and will very likely use virtualization tools from VMware.

    The new cloud facility will be easily an order of magnitude or more larger - in terms of compute and storage capacity - than the current facilities used to host the SAS Solutions OnDemand apps.

    The 38,000 square-foot facility that SAS is building on its Cary campus will have two 10,000 square-foot server farms. The first farm is expected to be online in mid-2010 and is expected to support growth for hosted applications over the next three to five years. The second will be fitted out with servers and storage when the first farm hits 80 per cent capacity.

    The construction of the data-center facility and related office space will account for between $20m and $22m (£13.8m and £15.2m) of the $70m budgeted for the cloud, with the remainder going for servers, storage, and software. SAS is keeping 60 per cent of the construction and equipment spending in North Carolina and says that construction will provide about 1,000 jobs.

    SAS may not have a choice but to build its own cloud. Given the sensitive nature of the data its customers analyze, moving that data out to a public cloud such as the Amazon EC2 and S3 combo is just not going to happen.

    And even if rugged security could make customers comfortable with that idea, moving large data sets into clouds (as Sun Microsystems discovered with the Sun Grid) is problematic. Even if you can parallelize the uploads of large data sets, it takes time.

    But if you run the applications locally in the SAS cloud, then doing further analysis on that data is no big deal. It's all on the same SAN anyway, locked down locally just as you would do in your own data center.

    Microsoft preps massive cloud expansion

    Microsoft has announced that it will begin running Azure, its cloud computing platform, in several of its own data centers.
    Until now Azure has run only in the company's data center in Quincy, Washington; that footprint will now be expanded.
     
    In addition, enhancements around SQL are planned, and the following improvements have been announced:
    • SQL support provided through a feature called SQL Data Services
    • Support for TDS (Tabular Data Stream)
    • Interfaces that allow SQL to be used for business analytics and reporting
    • Support for languages such as PHP, Ruby, and Java via SQL drivers
    It has also been revealed that the 22-hour Azure outage a few weeks ago was caused by a failure of the Fabric Controller, the component that manages dynamic provisioning and system upgrades in the Azure environment.
     

    Microsoft preps massive cloud expansion

    Mix 09 Microsoft will soon start a significant expansion of the hosting capabilities available in its nascent cloud, Azure Services Platform.

    The company told Mix 09 it will start running Azure in multiple North American data centers in the "next few months," so early adopters can chose different locations for their cloud applications and storage. Azure currently runs in Microsoft's Quincy, Washington data center, which packs 1,300 Dell servers into 470,000 square feet.

    "As we move forward in the next couple of months, we will begin running Windows Azure in multiple data centers," Microsoft's James Conrad told Mix attendees. "You will have the ability to run ASP.NET and storage in specific data centers...North American data centers are the start, [and] as we move forward with Azure will make it available in other data centers around the world."

    The expansion is in addition to the inclusion of business application features from SQL Server plus support for non-Microsoft and .NET programming languages such as PHP on Azure.

    The planned roll-out comes as Microsoft tried to explain why Azure went offline for 22 hours at the weekend - Azure's first outage since pre-beta testing began last October. The problem centered on a part of Azure that's central to the system's entire operations: the Fabric Controller.

    The Fabric Controller responded to a network problem by moving users' applications to different servers, Microsoft said. Only, it seems rather like Hal in 2001, the machine, er, took its work a little too seriously. The Fabric Controller thought that all servers were failing and tried to spin up a new server instance for every single server, according to this post.

    The Azure team skipped over the precise details but said: "Because this serial process was taking much too long, we decided to pursue a parallel update process, which successfully restored all applications."
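
    A toy model makes the serial-versus-parallel point concrete; this is purely an illustration of the scaling argument, not Azure's actual recovery tooling:

        # Toy model: restoring N application instances serially vs. in parallel.
        # Purely illustrative; not Microsoft's actual Fabric Controller logic.
        import concurrent.futures
        import time

        def restore(app_id: int) -> int:
            time.sleep(0.01)   # stand-in for re-provisioning one instance
            return app_id

        apps = range(200)

        start = time.time()
        for app in apps:
            restore(app)                           # serial: ~200 * 10 ms
        print(f"serial restore:   {time.time() - start:.2f}s")

        start = time.time()
        with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
            list(pool.map(restore, apps))          # parallel: ~(200 / 50) * 10 ms
        print(f"parallel restore: {time.time() - start:.2f}s")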

    The Fabric Controller was built by Microsoft to manage resources and load balancing as defined by the user while managing upgrades and availability automatically.

    Microsoft partner and Most Valuable Professional (MVP) Benjamin Day of Benjamin Day Consulting, who's preparing a set of best practices for users of Azure, said it sounded as if those running only single instances of their applications had been most affected as they'd lost their applications entirely. Those running at least two instances only saw performance degraded.

    Day recommended running multiple instances of your application to protect against potential future failures.

    Overall, he called this a "learning opportunity" for Microsoft and said he was glad this had happened during the Community Test Preview (CTP) phase, when a limited number of users were affected. "I'm glad that they had a major failure during the beta rather than discovering this problem after going to production," Day said.

    Problems or not, Microsoft seems determined to press on with its Azure roll-out and will begin adding additional servers, systems, and capabilities for the Fabric Controller to manage while Azure remains in CTP mode. The service is due to go live this year.

    Additional North American data centers are the prelude to worldwide roll-out and will come as Microsoft adds relational functionality from SQL Server to the storage layer of Azure - SQL Data Services (SDS) - in the next CTP.

    Additionally, Microsoft told Mix it's designing the Azure database engine to be able to quickly provision server instances on commodity hardware - such as the Dell servers used in Quincy. The idea is to avoid the need to manually establish new clusters as applications grow.

    The co-located fabric is the first phase in an expansion that'll see Microsoft start to layer in features from its popular SQL Server database. Microsoft has already said relational capabilities would be added to SDS with the addition of Tabular Data Stream (TDS) support in the next Azure CTP.

    At Mix 09, though, the company said its long-term plan is to go beyond that with the addition to the cloud of SQL features for business analytics and reporting.

    As part of the drive to add more capabilities from SQL Server to Azure, Microsoft will also add interoperability with PHP, Ruby, and Java through SQL drivers. Microsoft announced the ability to run PHP applications on Azure with the addition of FastCGI support. Microsoft promised support for additional languages including Perl and Ruby, but did not provide additional details. The company is expected to demonstrate Python and Google App Engine on Azure at Mix. ®

    Salesforce.com: 1,000 Servers at Equinix

    According to Salesforce.com's annual report, the company runs its entire business on just 1,000 servers. Half of those 1,000 are set aside for redundancy, so in effect 55,000 companies, 1.5 million individual customers, 30 million lines of user application code, and hundreds of terabytes of data run on roughly 500 servers.

    These servers are hosted by Equinix, and this year Equinix is expected to host Salesforce's first overseas servers in Singapore.
     

  • Salesforce.com: 1,000 Servers at Equinix

    March 27th, 2009 : Rich Miller

    TechCrunch noted yesterday that Salesforce.com (CRM) says it runs its entire operation on just 1,000 servers, of which 500 are used to mirror data. "Think about that for a minute," Erick Schonfeld writes. "Salesforce has more than 55,000 enterprise customers, 1.5 million individual subscribers, 30 million lines of third-party code, and hundreds of terabytes of data all running on 1,000 machines. … All of Salesforce relies on data stored in only ten databases that run on about 50 servers."
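
    Dividing those figures by the roughly 500 servers that are not mirrors gives a sense of the density involved (a quick calculation from the numbers above; the split across web, application, and database tiers is not broken out):

        # Per-server density implied by the reported Salesforce.com figures.
        active_servers = 500              # 1,000 total, half used for mirroring
        enterprise_customers = 55_000
        individual_subscribers = 1_500_000

        print(f"~{enterprise_customers / active_servers:.0f} enterprise customers per active server")   # ~110
        print(f"~{individual_subscribers / active_servers:.0f} subscribers per active server")          # ~3,000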

    But where do these 1,000 servers live? Salesforce.com's operations are concentrated in a single West Coast data center, with customer data mirrored at an East Coast site. Salesforce.com describes its data center operations in a recent SEC filing.

  • Graphic in the Cloud 3: New OnLive service could turn the video game world upside down

    A new company called OnLive has appeared, announcing that it will launch a service that delivers games online.
    Major game publishers have said they intend to offer their titles on the service, which has the potential to radically disrupt the traditional, console-centric game market.

    Few startups have a chance to revolutionize an industry. But if entrepreneur Steve Perlman's OnLive lives up to its goals, the company will disrupt the entire video game industry — to the delight of both game publishers and gamers.

    Perlman (right), a serial entrepreneur whose startup credits include WebTV and Mova, says his Palo Alto, Calif.-based company has developed a data compression technology and an accompanying online game service that allows game computation to be done in distant servers, rather than on game consoles or high-end computers. So rather than buying games at stores, gamers could play them across the network — without downloading them.

    Perlman first told me about his plans two years ago. But he managed to keep the whole project secret until today. OnLive plans to show the technology live on Tuesday night at the SF Museum of Modern Art. Over time, Perlman says the company will unveil more interesting projects, features, partners and investors.

    "This is video gaming on demand, where we deliver the games as a service, not something on a disk or in hardware," Perlman said. "Hardware is no longer the defining factor of the game experience."

    [update: See reaction to OnLive from the GDC here.]

    A bunch of major game publishers are backing the idea, which is simple but hard to believe. If you compress game data so much that it can be sent instantaneously over the Internet, then you no longer have to compute that data in a game machine. You can compute the data in a very powerful Internet server and then send the results to be displayed in the home. That's a pretty big earthquake in a $46 billion worldwide industry ruled by three hardware makers who sell powerful consoles.

    The problem with this server-centric approach, which has been talked about for a long time, has always been that the computing power required to process a game has been growing by leaps and bounds, while the ability to compress data hasn't been growing at nearly the same rate. By vastly improving compression and reducing the computing power required to do compression, Perlman has turned the situation around.

    The concept was originally evangelized as the "telecosm" by George Gilder in the pre-bubble days of the 1990s. Gilder thought that the Internet would "hollow out" the PC, meaning that Intel and Microsoft would become less important because their products would become commoditized. If you could spread processing loads across broadband connections, that would obviate the need for a powerful PC in the home. That is, you could do a lot of computing in the centralized Internet server, pass that data over fast Internet pipes, and do very little processing in the client-side computer in the home. Back then, a lot of people felt Gilder was out of touch with reality. Larry Ellison, Oracle's chief, tried selling a Network Computer, but it never got off the ground. The idea has now evolved into cloud computing.

    As consumers began to demand data-heavy software such as video over the Internet or high-end games, people needed more powerful PCs. Fast computers in the home made up for relatively slow broadband connections. But Gilder's idea could now make a comeback, thanks to OnLive's ability to compress data 200-fold, as well as the fact that broadband is more pervasive and the demand for ever more powerful computers has stalled. (We know that last point is true in part because $400 Netbooks, which aren't full-fledged laptops, are selling fast).

    Last week, Perlman showed me a demo of the technology. He was playing Crysis, one of the most demanding 3-D shooting games ever made, running on a simple Mac laptop and also on a rudimentary game console, known as a micro-console, which does almost no computing but merely displays the images on a TV in either standard or 720p high-definition. The graphics ran smoothly.

    OnLive's technology has the potential to move beyond games to the broader level that Gilder was talking about. It could eventually sweep through all forms of entertainment and applications, providing the missing link in helping the Internet take over our living rooms.

    With OnLive, players can join each other in the same multiplayer game, regardless of whether they have a PC, Mac or OnLive's own micro-console (a simple box with minimal processing power) connected to a TV. Such cross-platform game play usually isn't possible.

    Big game publishers and developers — Electronic Arts, THQ, Take-Two Interactive, Codemasters, Eidos, Atari, Warner Bros., Epic Games and Ubisoft — have agreed to distribute their games through the OnLive network, bypassing traditional retail game sales in an effort to reach people who don't buy game consoles or expensive game computers.

    To address naysayers who think this can't be done, given all of the Internet's trade-offs, OnLive will show 16 games being played live on the floor of the Game Developers Conference this week in San Francisco. The game service is expected to be available before the end of the year. If this sounds to you like the interactive TV hogwash of the 1990s, like Time Warner's Full Service Network, it is indeed very similar. The difference this time is that this looks like the real thing.

    Cool game services

    In the live demo, Perlman showed me how the user interface is built around a grid of clickable windows, with any one of them running video clips as previews for what's behind the window. You can select a window to play a particular game or watch a demo. After playing a game, you can share a video of the most interesting sequence in your session with other players in a window dubbed "Brag clip." You can also be a spectator and watch how the most skillful players in the rankings play the game. Or you can chat via voice headset with your friends.

    Versions of these social features are available in a few games now. Halo 3, for instance, had the "Brag Up" feature built in. But with OnLive, every game can have these features. OnLive plans to charge users a monthly subscription fee, much like Microsoft charges for its Xbox Live online service. The company can also save publishers a lot of money and share in some of the extra profits.

    "OnLive . . . will be very well received in the marketplace and it is a good fit with our strategy of bringing our games to consumers on the format of their choice," said Kevin Tsujihara, President, Warner Bros. Home Entertainment Group, in a statement. The top executives of THQ, Ubisoft, and Take-Two Interactive offered similar praise for the service.

    Will you ever have to buy a new game console and computer upgrades?

    Mike McGarvey, (top photo, left) chief operating officer of OnLive, said the new technology "breaks the console cycle where a gamer has to buy a new machine every few years." If this happens, the obvious losers are Microsoft, Sony and Nintendo.

    Beyond console makers, makers of PCs and high-end chips will be affected. The micro-console is so simple it has a custom chip but very little else. It is a lightweight box that has a universal serial bus (USB) power connector, a high-definition multimedia interface (HDMI) connector to hook up with a TV, and an Ethernet jack for a broadband connection. You can plug a standard PC game controller or computer mouse into it.

    Losers would include the makers of high-end chips such as Intel, Advanced Micro Devices and Nvidia. However, Perlman noted that Nvidia will benefit to a degree because OnLive's data centers use high-end graphics chips in their servers. Nvidia has been a development partner in helping to create the server technology.

    How does it work?

    The secret sauce in OnLive's technology is its compression algorithms. On the home device, the only software component OnLive needs is a one-megabyte plug-in for a standard Internet browser. That code is enough to decompress the data and display it. Normally, decompression takes much more hardware.

    Perlman hasn't said much about exactly how it works. One clue: the algorithms change the structure and order of Internet data, or packets, so they can sail through the Internet. A packet can make an entire round trip in 80 milliseconds, a very short amount of time compared to other Internet traffic that travels through hardware that either compresses or decompresses the data.
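
    To put 80 milliseconds in context, here is a rough latency budget (my arithmetic, not OnLive's published figures): at 60 frames per second a new frame is drawn every ~16.7 ms, so an 80 ms round trip adds roughly five frames between a button press and the updated picture.

        # Rough latency budget for remotely rendered games (illustrative only).
        round_trip_ms = 80
        fps = 60
        frame_time_ms = 1000 / fps                    # ~16.7 ms per frame
        frames_of_lag = round_trip_ms / frame_time_ms
        print(f"{frame_time_ms:.1f} ms per frame -> ~{frames_of_lag:.0f} frames of added input lag")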

    A lot of people have chased after the Holy Grail of games delivered and played via the internet. But they have been stymied either by slow computing power or by broadband choke points. Infinium Labs promised something similar years ago with its Phantom gaming console, but the company failed. Trion World Network chief executive Lars Buttler has talked about doing server-based games, but it isn't clear if Trion — which hasn't described its technical details — has the same kind of technology OnLive has demonstrated. Trion is building its own games, while OnLive is building a platform for all game makers instead. On Monday, Denis Dyack predicted that cloud computing games are just around the corner, even though he didn't know about OnLive.

    Sony enabled the entire base of PlayStation 3s to be used in Stanford University's Folding@Home project, which takes the spare processing power of connected PS 3s and uses them collectively to solve tough scientific problems. But Sony didn't have the compression technology that OnLive uses to effectively send a lot of data over a broadband connection at very fast rates. So while Sony had the same idea, it didn't have the means to do what OnLive can do.

    To use OnLive, all you need is a broadband connection running at two megabits a second for standard graphics or five megabits a second for high-definition graphics. Those data rates are well within the speeds of most broadband connections. (One report said 71 percent of U.S. homes have two-megabit per second or faster Internet connections). The compression is so good that players can play games even if their homes are as much as 1,000 miles away from the server. For now, OnLive needs only five data center locations to be able to cover the entire country.

    Of course, there are some limitations. The technology isn't quite good enough to be able to do 1080p resolution, which is the highest available on game consoles and TVs. That's because it would require broadband speeds of up to 10 megabits per second. Countries such as Japan have those speeds, but you have to pay a premium for that kind of service in the U.S.
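
    The bandwidth jump tracks the jump in pixel count: 1080p carries 2.25 times as many pixels as 720p, so scaling the quoted 5 Mbps stream by that ratio lands right around the 10 Mbps figure (a rough check of mine that ignores codec differences):

        # Rough check: scale the 720p bitrate by the increase in pixel count.
        pixels_720p = 1280 * 720
        pixels_1080p = 1920 * 1080
        bitrate_720p_mbps = 5

        scale = pixels_1080p / pixels_720p            # 2.25x
        print(f"~{bitrate_720p_mbps * scale:.1f} Mbps needed for 1080p at comparable quality")   # ~11 Mbps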

    Fighting game piracy and enabling episodic updates

    One nice side-effect of OnLive's service for game publishers and developers is that it promises to cut down on piracy. Since there's no call for users to download games to their computer or console, there's nothing to copy or steal. Gamers simply buy a subscription or rent a game online. McGarvey estimated that currently for every $60 game sold, game piracy results in $12 in lost revenues for the publisher or developer.

    Game publishers could also frequently update their games on OnLive by changing the code running on the servers. If one part of a game is too hard, the publishers can simply patch that part and then everyone will play the new version the next time they log in. Publishers can also pull the plug on games that aren't selling well without taking a big inventory hit. And they can add new episodes of popular games quickly, much like the TV networks cancel unpopular shows and add episodes for successful pilots.

    Effects on retailers

    Retailers and purveyors of used games could be cut out of the picture entirely.

    Typically, a game publisher keeps only about $27 for every $60 game sold. Retailers keep $15 and then keep all of the revenues when a used game is resold. About $7 goes to the game console owner. Services such as Valve's Steam cut out the retailer via digital distribution, but that approach requires powerful computers, fast connections, and lots of download time. McGarvey said that OnLive will dis-intermediate retail. He said that while OnLive will take a cut of what users pay for games on the service, it saves publishers so much money that it can help the game creators raise their profits dramatically.
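
    Per $60 game, the split described above works out roughly as follows; the "other costs" line is my label for the unitemized remainder (manufacturing, distribution, markdowns), not a figure from the article:

        # Rough breakdown of a $60 retail game sale as described above.
        retail_price = 60
        publisher = 27
        retailer = 15
        console_maker = 7
        other_costs = retail_price - publisher - retailer - console_maker   # ~$11, unitemized
        print(f"publisher ${publisher}, retailer ${retailer}, console maker ${console_maker}, "
              f"other ~${other_costs} of each ${retail_price} sale")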

    OnLive's history

    OnLive has been in stealth mode for seven years. I first learned about it two years ago and have been anxiously waiting to see whether the company pulls off its mission. Perlman said it took a lot of work to get the technology done, but almost everyone he has shown it to has jumped aboard as a partner. The company has received funding from Warner Bros., Maverick Capital and Autodesk.

    Perlman scored big when he sold WebTV, an early Internet appliance, to Microsoft in 1997 for $425 million. He went on to found Rearden, a startup incubator (named after the Hank Rearden character in the Ayn Rand novel Atlas Shrugged) that did deep research and development. His goal was to do the kind of long-term research that venture capitalists and even big companies have shied away from.

    He worked on a set-top box dubbed Moxi and sold that to Paul Allen's Vulcan Ventures. Then he created Mova, which captures human faces with imaging technology so they can easily be turned into animated faces in movies or video games. Mova is now a subsidiary of OnLive, which spun out of Rearden in January, 2007. Mova's facial animation technology was behind the aging of Brad Pitt in the Oscar-winning film The Curious Case of Benjamin Button. At that point in the development, Perlman was convinced the technology would work.

    The company had to do a lot of fundamental work, resulting in more than 100 patents or patent applications. The patent filings amounted to more than 5,000 pages, Perlman said. The company also hired more than 100 people and acquired top talent, including Tom Paquin, executive vice president of engineering and the former engineering head at Netscape. McGarvey also spent 12 years in the video game industry, most recently as chief executive of Eidos, which he sold in 2005.

    It may seem hard to believe. But I've seen it work. Perlman's got a good track record. He has invested heavily, as have some very big media companies. The game publishers are behind him. And he's showing off 16 working games this week. Perlman has a grand plan. But the little pieces of it that he has already shown are going to turn the game industry, and perhaps everything else, upside down.