"Scott Marlowe" <scott.marlowe@xxxxxxxxx> writes: > What is the torn page problem? Note I'm no big fan of compressed file > systems, but I can't imagine them not working with databases, as I've > seen them work quite reliably under exhange server running a db > oriented storage subsystem. And I can't imagine them not being > invisible to an application, otherwise you'd just be asking for > trouble. Invisible under normal operation sure, but when something fails the consequences will surely be different and I can't see how you could make a compressed filesystem safe without a huge performance hit. The torn page problem is what happens if the system loses power or crashes when only part of the data written has made it to disk. If you're compressing or encrypting data then you can't expect the old data portion and the new data portion to make sense together. So for example if Postgres sets a hint bit on one tuple in a block, then writes out that block and the filesystem recompresses it, the entire block will change. If the system crashes when only 4k of it has reached disk then when we read in that block it will fail decompression. And if the block size of the compressed filesystem is larger than the PostgreSQL block size your problems are even more severe. Even a regular WAL-logged write to a database block can cause the subsequent database block to become unreadable if power is lost before the entire set of database blocks within the filesystem block is written. The only way I could see this working is if you use a filesystem which logs data changes like ZFS or ext3 with data=journal. Even then you have to be very careful to make the filesystem block size that the journal treats as atomic match the Postgres block size or you'll still be in trouble. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! -- Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general