David Rees wrote:
On Fri, Mar 27, 2009 at 5:14 PM, Jeff Garzik <jeff@xxxxxxxxxx> wrote:
Theodore Tso wrote:
OTOH, the really big databases will tend to use direct I/O, so they
won't be dirtying the page cache anyway. So maybe it's not worth the
Not necessarily... From what I understand, a lot of the individual
low-level components in cloud storage, such as GoogleFS's chunk server[1],
do not bypass the page cache, even though they do care about the details
of data caching and data consistency.
PostgreSQL does not use direct I/O, either (except for the
write-ahead logs, which are written sequentially and only get read
during database recovery). I'm sure that most of MySQL's database
engines don't, either.
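
To illustrate the pattern being described, here is a minimal sketch of a
buffered, WAL-style append followed by an explicit flush; the file name
and record are made up, and this is not PostgreSQL's actual
implementation:

	/* Sketch: WAL-style buffered append + fdatasync. The data goes
	 * through the page cache, but the flush forces it to stable
	 * storage before the commit is acknowledged. Illustrative only. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		const char record[] = "commit record\n";
		int fd = open("wal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, record, sizeof(record) - 1) < 0)
			perror("write");
		/* Don't acknowledge the commit until the record is durable. */
		if (fdatasync(fd) < 0)
			perror("fdatasync");
		close(fd);
		return 0;
	}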
-Dave
The high-end, traditional databases like DB2 and Oracle definitely do
tend to use direct I/O and manage cached vs. uncached pages carefully
on their own.
They also tend to use database "page sizes" larger than our VM page
size or FS block size and work hard to send large, aligned I/Os down to
storage in the correct order so they can be fully recoverable after a
crash (no partially updated DB pages, aka "torn pages").
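
At the syscall level, that combination is roughly the sketch below; the
8KB page size, 4KB alignment, and file name are illustrative values,
not how DB2 or Oracle actually structure their I/O:

	/* Sketch: write one database "page" larger than the VM page,
	 * bypassing the page cache with O_DIRECT, which requires an
	 * aligned buffer and an aligned offset. Values are illustrative. */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	#define DB_PAGE_SIZE 8192	/* larger than a 4KB VM page */

	int main(void)
	{
		void *buf;
		int fd;

		/* Buffer must be aligned for O_DIRECT. */
		if (posix_memalign(&buf, 4096, DB_PAGE_SIZE) != 0)
			return 1;
		memset(buf, 0, DB_PAGE_SIZE);

		fd = open("datafile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Send the whole DB page down in one large, aligned I/O. */
		if (pwrite(fd, buf, DB_PAGE_SIZE, 0) != DB_PAGE_SIZE)
			perror("pwrite");

		close(fd);
		free(buf);
		return 0;
	}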
A lot of the cloud storage people rely on whole files. For example, you
implement RAID at the file level by breaking your file down into K
chunks, each one sent over the network to a different machine. Each chunk
is really a whole file and is sent to disk (hopefully with an fsync()!)
before ack'ing the transaction. They don't worry about data integrity
for objects smaller than that chunk size.
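
Roughly, the write-whole-chunk-then-fsync-before-ack pattern looks like
this sketch (the chunk path, size, and ack step are hypothetical
placeholders, not any real chunk server's code):

	/* Sketch: persist one replicated chunk before acknowledging it.
	 * The chunk is a whole file; nothing is acked until fsync()
	 * returns. Names and sizes are illustrative. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Returns 0 on success; only then should the caller send an ack. */
	static int store_chunk(const char *path, const void *data, size_t len)
	{
		int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

		if (fd < 0)
			return -1;
		if (write(fd, data, len) != (ssize_t)len) {
			close(fd);
			return -1;
		}
		/* Without this fsync(), a crash could lose an "acked" chunk. */
		if (fsync(fd) < 0) {
			close(fd);
			return -1;
		}
		return close(fd);
	}

	int main(void)
	{
		char chunk[4096] = { 0 };	/* one chunk of a larger file */

		if (store_chunk("chunk-000", chunk, sizeof(chunk)) == 0)
			puts("ack");	/* safe to acknowledge the transaction */
		return 0;
	}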
At least, this is how we did it in Centera - without doing that, you are
definitely open to data loss.
Ric