I sat down today and read through the paper (Chang, et al., 2006) about Google's Bigtable system, which allows them to store lots and lots of data (for one table they claim 800 TB), and access it quickly with high availability.
Bigtable is an alternative to a standard relational database: it only holds uninterpreted byte strings indexed by a row and a column key. As far as that goes, it doesn't try to provide an interface a la relational algebra: the interface is just lookups and writes. Furthermore, and very importantly, it doesn't provide general transactions: only writes to a given row can be made atomic. Of course, this is all programmers need much of the time. The paper says:
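To make the data model concrete, here is a toy sketch of that kind of interface: uninterpreted bytes keyed by (row, column), with atomicity only within one row. The class and method names (`TinyTable`, `write_row`) are my own inventions for illustration, not Bigtable's actual API.

```python
from collections import defaultdict
from threading import Lock

class TinyTable:
    """Toy sketch of a Bigtable-style store: uninterpreted byte
    strings indexed by (row, column), atomic only per row."""

    def __init__(self):
        self._rows = defaultdict(dict)       # row key -> {column key: bytes}
        self._row_locks = defaultdict(Lock)  # one lock per row

    def read(self, row, column):
        # A simple lookup; returns None if the cell was never written.
        return self._rows[row].get(column)

    def write_row(self, row, mutations):
        # All mutations to a single row apply atomically; there is
        # deliberately no way to make a write spanning rows atomic.
        with self._row_locks[row]:
            self._rows[row].update(mutations)

t = TinyTable()
t.write_row("com.example/index.html",
            {"contents": b"<html>...</html>", "anchor:a.com": b"link text"})
print(t.read("com.example/index.html", "anchor:a.com"))  # b'link text'
```

The point of the sketch is what's missing: no query language, no joins, and no multi-row transactions, which is exactly the trade the paper describes.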
Another lesson we learned is that it is important to delay adding new features until it is clear how the new features will be used. For example, we initially planned to support general-purpose transactions in our API. Because we did not have an immediate use for them, however, we did not implement them. Now that we have many real applications running on Bigtable, we have been able to examine their actual needs, and have discovered that most applications require only single-row transactions. Where people have requested distributed transactions, the most important use is for maintaining secondary indices, and we plan to add a specialized mechanism to satisfy this need. The new mechanism will be less general than distributed transactions, but will be more efficient (especially for updates that span hundreds of rows or more) and will also interact better with our scheme for optimistic cross-data-center replication.
It doesn't surprise me that a system like the Google crawl doesn't use transactions. After all, Google is mostly a one-way service: they get millions of hits but not millions of people updating their shopping carts. What about a site like eBay, which is constantly taking bids and new listings? In that setting, there will be a lot of writes, and perhaps they have a much larger need for transactions. Here are the bullet points from the presentation "The eBay Architecture" (Shoup and Pritchett, 2006):
- Auto-commit for vast majority of DB writes
- Absolutely no client-side transactions
How do we pull it off?
- Careful ordering of DB operations
- (they also list some recovery techniques which I don't know anything about; elided)
- Avoid deadlocks
- Avoid coupling availability
- Update concurrency
- Seamless handling of splits [I assume they mean network splits]
In other words, they forgo transactions because it allows concurrent updates (which makes the site faster), because it avoids problems like deadlocks and network splits (which make the site go really slow), and because they can: they program around the lack of transactions.
One more point: LiveJournal (claiming 18,000 posts per hour on average) uses MySQL, and was built up to scale before MySQL added transaction support (Fitzpatrick, 2005).
Transactions do have their uses, of course, but I see a consistent pattern: for the biggest websites, transactions are slow and cause too many problems. In spite of the costs, engineers would rather solve their own specific concurrency-control problems than use this general tool.