Google's vice-president of engineering was in London this week to talk to potential recruits about just what lies behind that search page.
Obviously it would be impractical to run the ranking algorithm over every page for every query, so Google breaks the problem down.
When a query comes into the system it is sent off to index servers, which contain an index of the Web. This index is a mapping of each word to every page that contains it. For instance, the word 'Imperial' will point to a list of documents containing that word, and similarly for 'College'. For a search on 'Imperial College', Google does a Boolean 'AND' operation on the two words to get a list of what Hölzle calls 'word pages'.
"We also consider additional data, such as where in the page the word occurs: in the title, the footnote, is it in bold or not, and so on."
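The inverted-index lookup described above can be sketched in a few lines. This is a toy illustration, not Google's actual data structures: the word-to-document mapping and the document IDs are invented for the example.

```python
# Toy inverted index: each word maps to the set of document IDs
# containing it (the per-word 'list of documents' from the article).
index = {
    "imperial": {1, 4, 7},
    "college":  {2, 4, 7, 9},
}

def and_query(*words):
    """Boolean 'AND' over the query words: intersect their posting sets."""
    postings = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*postings) if postings else set()

print(sorted(and_query("Imperial", "College")))  # [4, 7]
```

Real systems also keep per-occurrence metadata (position, title, bold) alongside each posting so results can be ranked, which is what the quote above alludes to.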
Each index server indexes only part of the Web, as the whole Web will not fit on a single machine -- certainly not the type of machines that Google uses. Google's index of the Web is distributed across many machines, and the query gets sent to many of them -- Google calls each one a shard (of the Web). Each one works on its part of the problem.
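The fan-out to shards can be sketched as follows. The hash-based assignment of documents to shards is an assumption for the example, not Google's scheme; the corpus and shard count are invented.

```python
# Sketch of sharding: each index server holds the postings for one
# slice of the document space, and a query is sent to every shard.
NUM_SHARDS = 4

def shard_for(doc_id):
    return doc_id % NUM_SHARDS  # illustrative assignment only

# Build per-shard inverted indexes from a toy corpus.
corpus = {1: "imperial college", 2: "college london", 5: "imperial palace"}
shards = [{} for _ in range(NUM_SHARDS)]
for doc_id, text in corpus.items():
    for word in text.split():
        shards[shard_for(doc_id)].setdefault(word, set()).add(doc_id)

def search(word):
    # Fan the query out; each shard answers from its part of the Web.
    hits = set()
    for shard in shards:
        hits |= shard.get(word, set())
    return hits

print(sorted(search("imperial")))  # [1, 5]
```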
Google computes the top 1,000 or so results, and those come back as document IDs rather than text. The next step is to use document servers, which contain a copy of the Web as crawled by Google's spiders. Again, the Web is essentially chopped up so that each machine holds one part of it. Once the matching documents have been retrieved, the results are combined with output from the ad server, which matches the ads and produces the familiar results page.
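The second phase can be sketched like this: the ranked document IDs are resolved against sharded document servers, which hold the crawled copies. The store contents and the title/snippet fields are invented for illustration.

```python
# Toy document-server shards: each holds crawled copies for part of
# the Web, keyed by document ID (titles/snippets are made up).
doc_store_shards = [
    {1: ("Imperial College", "Imperial College is a university in London...")},
    {4: ("Imperial College Union", "The students' union of Imperial...")},
]

def fetch_snippets(doc_ids):
    """Resolve ranked doc IDs into displayable (id, title, snippet) rows."""
    results = []
    for doc_id in doc_ids:
        for shard in doc_store_shards:  # each shard holds one slice
            if doc_id in shard:
                title, snippet = shard[doc_id]
                results.append((doc_id, title, snippet))
                break
    return results
```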
Google's business model works because all this is done on cheap hardware, which allows it to run the service free-of-charge to users, and charge only for advertising.
The hardware
"Even though it is a big problem", said Hölzle, "it is tractable, and not just technically but economically too. You can use very cheap hardware, but to do this you have to have the right software."
Google runs its systems on cheap, no-name 1U and 2U servers -- so cheap that Google refers to them as PCs. After all, each one has a standard x86 PC processor, standard IDE hard disk, and standard PC reliability -- which means it is expected to fail once in three years.
On a PC at home, that is acceptable for many people (if only because they're used to it), but on the scale that Google works at it becomes a real issue; in a cluster of 1,000 PCs you would expect, on average, one to fail every day. "On our scale you cannot deal with this failure by hand," said Hölzle. "We wrote our software to assume that the components will fail and we can just work around it. This software is what makes it work."
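The cluster-scale figure follows directly from the per-machine lifetime. A quick check of the arithmetic:

```python
# Expected failures per day for a fleet, given the article's figure of
# roughly one failure per machine every three years.
machines = 1000
mtbf_days = 3 * 365            # ~1,095 days between failures per machine
failures_per_day = machines / mtbf_days
print(round(failures_per_day, 2))  # ~0.91, i.e. about one a day
```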
One key idea is replication. "This server that contains this shard of the Web -- let's have two, or 10," said Hölzle. "This sounds expensive, but if you have a high-volume service you need that replication anyway. So you have replication and redundancy for free. If one fails you get a 10 percent reduction in service, not an outage, so long as the load balancer works. So failure becomes a manageable event."
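The replication idea can be sketched as a group of replicas behind a load balancer, where a dead machine just shrinks capacity. The class and naming are illustrative, not Google's infrastructure.

```python
import random

# Sketch of replication: several copies of one shard sit behind a
# load balancer; losing one replica reduces capacity, not service.
class ReplicaGroup:
    def __init__(self, replicas):
        self.alive = set(replicas)     # e.g. 10 copies of one shard

    def mark_failed(self, replica):
        self.alive.discard(replica)    # stop routing to a dead machine

    def route(self):
        if not self.alive:
            raise RuntimeError("all replicas down")
        return random.choice(sorted(self.alive))

group = ReplicaGroup([f"shard0-replica{i}" for i in range(10)])
group.mark_failed("shard0-replica3")       # one machine dies...
assert group.route() != "shard0-replica3"  # ...queries still get served
```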
In reality, he said, Google probably has "50 copies of every server". Google replicates servers, sets of servers and entire data centres, added Hölzle, and has not had a complete system failure since February 2000. Back then it had a single data centre, and the main switch failed, shutting the search engine down for an hour. Today the company mirrors everything across multiple independent data centres, and the fault tolerance works across sites, "so if we lose a data centre we can continue elsewhere -- and it happens more often than you would think. Stuff happens and you have to deal with it."
A new data centre can be up and running in under three days. "Our data centre now is like an iMac," said Schulz. "You have two cables, power and data. All you need is a truck to bring the servers in, and the whole burn-in, operating system install and configuration is automated."
Working around the failure of cheap hardware, said Hölzle, is fairly simple. If a connection breaks, that machine has crashed, so no more queries are sent to it. If a query gets no response, that again signals a problem, and the machine can be cut out of the loop.
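That failure-detection rule -- broken connection or missing response means the machine is cut out -- can be sketched as follows. The RPC layer is simulated; the timeout value and function names are assumptions for the example.

```python
import time

TIMEOUT_S = 0.5   # illustrative deadline for a reply
dead = set()      # machines cut out of the loop

def query_server(server, send_rpc):
    """send_rpc(server) should return a response or raise on error."""
    if server in dead:
        return None                       # never route to a known-dead machine
    start = time.monotonic()
    try:
        response = send_rpc(server)
    except ConnectionError:               # connection broke: machine crashed
        dead.add(server)
        return None
    if time.monotonic() - start > TIMEOUT_S:
        dead.add(server)                  # no timely response: cut it out
        return None
    return response
```

After either kind of failure, the machine simply receives no more traffic until it is repaired and put back into rotation.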
That is redundancy taken care of, but what about scaling? The Web grows every year, as does the number of people using it, and that means more strain on Google's servers.