On 8 April 2014 15:56, Andrew W. Gibbs <awgibbs@xxxxxxxxxxx> wrote:
* I'm not even sure what a good solution is for guaranteeing, with
common tools, that transmitted files have been durably persisted.
Commonly available rsync implementations don't seem to support a
"please call fsync" option, though some Googling turns up discussion
of a patched version that someone created for exactly this purpose.
Maybe I could invoke the shell command "sync" as part of the dance,
but that doesn't seem great either: the first transfer happens on the
master database, and I don't want to issue a sync request against all
of its file systems, as that would kill database performance; the
second transfer happens via rsync, and you wouldn't be able to call
"sync" until rsync had already deleted the source files, thus
creating a race condition.
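To spell out that race (my illustration, not from the original mail;
"standby" and /wal_archive are made-up names), the remove-as-you-go
mode of rsync unlinks each source file as soon as it has been
transferred, before anything has been flushed on the far side:

rsync --remove-source-files -a "$path" standby:/wal_archive/
ssh standby sync  # too late: the source file is already gone, so if
                  # the standby lost power before this sync finished,
                  # the only copy of the file could be lost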
For copying on the same machine I'm using dd instead of cp, because dd can fsync just that one file. Something like this:
# conv=fsync flushes the file's data and metadata to disk before dd
# exits; conv=excl makes dd fail instead of clobbering an existing file
dd if="$path" of="$localstore/$file" bs=8192 conv=fsync,excl
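One caveat I'll add (me being paranoid, not something dd does for
you): conv=fsync makes the file's contents durable, but the new
directory entry isn't durable until the containing directory has been
fsynced as well. With coreutils 8.24 or newer, sync(1) accepts file
arguments, so this should close that hole:

sync "$localstore"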
I don't have a good solution for copying to the remote site. The only
thing that comes to mind that doesn't require breaking out a C
compiler is NFS. With a "sync" export, the server is supposed to
commit writes to stable storage before acknowledging them, so it
should do the trick. Most Linux distributions cheat and don't have
NFS configured this way out of the box, but you should be able to
configure it to be reliable.
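As a sketch of the server-side knob I mean (the export path and
subnet are placeholders; see exports(5) for the details on your
system):

# /etc/exports on the receiving server: "sync" forces the server to
# commit writes to stable storage before replying; "async" is the
# fast-but-unsafe variant
/wal_archive  192.168.1.0/24(rw,sync,no_subtree_check)

If I understand NFS close-to-open semantics correctly, the client
flushes its dirty pages when the file is closed, so combined with a
"sync" export a plain cp onto the mount should be on stable storage
by the time it returns.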
I've never tried this and don't recommend doing it if you don't know what you're doing.