"Ram Ravichandran" <ramkaka@xxxxxxxxx> writes:

> The problem that I am facing is that EC2 has no persistent storage (at least
> currently). So, if the server restarts for some reason, all data on the
> local disks is gone. The idea was to store the tables on the non-persistent
> local disk, and write the WAL to an S3-mounted drive. If the server goes
> down for some reason, I was hoping to recover by replaying the WAL. I was
> hoping that by faking the fsyncs, I would not incur the actual charges from
> Amazon until the file system writes into S3.
>
> Also, since WAL is on a separate FS, it will not affect my disk-write
> rates.

Ahh. I think you can use this effectively, but not the way you're
describing. Instead of writing the WAL directly to persistentFS, I think
you're better off treating persistentFS as your backup storage. Use WAL
archiving, as described here, to archive the WAL files to persistentFS:

http://postgresql.com.cn/docs/8.3/static/runtime-config-wal.html#GUC-ARCHIVE-MODE

Then if your database goes down, you restore from the base backup (stored
in persistentFS), run recovery from the archived WAL files (also in
persistentFS), and you're back up.

You will lose any transactions which haven't been archived yet, but you
can control how many transactions you put at risk versus how much you pay
for all the "puts": the more "puts", the fewer transactions at risk, but
the more you pay. You can also trade off paying for more frequent "puts"
of hot backup images (make sure to read how to use pg_start_backup()
properly) against longer recovery times. TANSTAAFL :(

If you do this, then you may as well turn fsync off on the server, since
you're resigned to restoring from backup after a server crash anyway...

-- 
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
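
[The archiving setup described above can be sketched as a pair of config
fragments for 8.3. This is only a sketch: the mount point
/mnt/persistentfs and the archive directory name are assumptions, and a
plain `cp` archive_command is the simplest possible choice.]

```ini
# postgresql.conf (8.3) -- sketch; /mnt/persistentfs is an assumed S3 mount
archive_mode = on
archive_command = 'cp %p /mnt/persistentfs/wal_archive/%f'
archive_timeout = 60   # force a segment switch every 60s: bounds lost
                       # transactions, at the cost of one "put" per minute
fsync = off            # per the advice above: you're resigned to restoring
                       # from backup on a crash anyway

# recovery.conf -- placed in the restored data directory to replay the
# archived WAL during recovery
restore_command = 'cp /mnt/persistentfs/wal_archive/%f %p'
```

The base backup itself would be taken by calling pg_start_backup('label'),
copying the data directory to persistentFS, and then calling
pg_stop_backup(); how often you refresh that backup image is the other
knob in the "puts" vs. recovery-time trade-off.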