Are you sure this will work correctly for database use at all? The known issue listed at http://www.persistentfs.com/documentation/Release_Notes sounded like a much bigger consistency concern than the fsync trivia you're bringing up:
"In the current Technology Preview release, any changes to an open file's meta data are not saved to S3 until the file is closed. As a result, if PersistentFS or the system crashes while writing a file, it is possible for the file size in the file's directory entry to be greater than the actual number of file blocks written to S3..."
This sounds like you risk corrupting whatever database files happen to be open any time the database goes down unexpectedly.
Given the current state of EC2, I don't know why you'd take this approach instead of just creating an AMI to install the database into.
--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD
The problem I am facing is that EC2 has no persistent storage (at least currently), so if the server restarts for some reason, all data on the local disks is gone. The idea was to store the tables on the non-persistent local disk and write the WAL to an S3-mounted drive. If the server goes down for some reason, I was hoping to recover by replaying the WAL. I was also hoping that by faking the fsyncs, I would not incur the actual charges from Amazon until the file system writes out to S3.
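By "faking the fsyncs" I mean letting PostgreSQL skip the real syncs, i.e. something like this in postgresql.conf (a sketch only, with the usual caveat that it trades durability for speed):

```
fsync = off    # don't force WAL writes to stable storage; rely on the FS flushing to S3
```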
Also, since the WAL is on a separate file system, it will not affect my disk-write rates.
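To make the WAL placement concrete, the setup would look roughly like this (a sketch only; the paths are hypothetical stand-ins, and in practice the move is done with the server stopped):

```shell
# /tmp/local plays the role of the EC2 ephemeral disk,
# /tmp/s3 the role of the PersistentFS (S3-backed) mount.
mkdir -p /tmp/local/pgdata/pg_xlog /tmp/s3

# With the server stopped, move the WAL directory onto the S3-backed
# mount and leave a symlink so PostgreSQL still finds pg_xlog in $PGDATA:
mv /tmp/local/pgdata/pg_xlog /tmp/s3/pg_xlog
ln -s /tmp/s3/pg_xlog /tmp/local/pgdata/pg_xlog

ls -ld /tmp/local/pgdata/pg_xlog
```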
Ram