On Tue, Sep 11, 2012 at 5:36 AM, Kevin Grittner
<Kevin.Grittner@xxxxxxxxxxxx> wrote:
> Jeff Janes wrote:
>> Kevin Grittner wrote:
>
>>> BTW, that's also why I wrote the pg_clearxlogtail utility (source
>>> code on pgfoundry). We pipe our archives through that and gzip
>>> which changes this to an endless stream of 16KB files. Those three
>>> orders of magnitude can make all the difference. :-)
>>
>> Thanks. Do you put the clearxlogtail and the gzip into the
>> archive_command, or just do a simple copy into the archive and then
>> have a cron job do the processing in the archives later? I'm not
>> really sure what the failure modes are for having pipelines built
>> into the archive_command.
>
> We pipe the file into pg_clearxlogtail | gzip and pipe it out to the
> archive directory (with a ".gz" suffix), rather than using cp and
> processing it later. Well, actually, we pipe it to a directory on
> the same mount point as the archive directory and mv it into place,
> as part of our scheme to avoid problems with partial files.

Do you have an example of that which you could share?

I've run into two problems I'm trying to overcome.

One is that pg_clearxlogtail fails on file formats it doesn't
recognize but is asked to archive anyway, such as
'000000010000000200000065.00000020.backup', for example. Perhaps it
could just issue a warning and then pass the unrecognized file
through unchanged, instead of bailing out with a fatal error.

The other is that a pipeline in bash reports success even if an
interior member of it failed. I think I know how to fix that under an
actual shell script (as opposed to a pipeline stuffed into
"archive_command"), but would like to see how other people have dealt
with it.

Thanks,

Jeff
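
P.S. For concreteness, here's the kind of wrapper script I'm imagining
(untested; the script name, the archive and staging paths, and the test
for plain WAL segment names are all my own guesses, not Kevin's actual
setup). It stages the compressed file on the same filesystem and only
mv's it into the archive once the whole pipeline has succeeded, and it
skips pg_clearxlogtail for .backup files like the one above (and
presumably .history files) that it wouldn't recognize:

#!/bin/bash
# wal-archive.sh -- sketch only, not tested
# archive_command = 'wal-archive.sh %p %f'
set -e -o pipefail   # any failure, including inside the pipeline, aborts the script

WAL_PATH="$1"        # %p: path of the file to archive, as passed by the server
WAL_NAME="$2"        # %f: file name only
ARCHIVE_DIR=/var/lib/pgsql/wal_archive        # placeholder path
STAGE_DIR="$ARCHIVE_DIR/.staging"             # same filesystem, so mv is atomic

mkdir -p "$STAGE_DIR"

# Never silently overwrite something already archived.
if [ -e "$ARCHIVE_DIR/$WAL_NAME.gz" ]; then
    echo "$WAL_NAME.gz already exists in the archive" >&2
    exit 1
fi

if [[ "$WAL_NAME" =~ ^[0-9A-F]{24}$ ]]; then
    # An ordinary WAL segment: safe to run through pg_clearxlogtail.
    pg_clearxlogtail < "$WAL_PATH" | gzip > "$STAGE_DIR/$WAL_NAME.gz"
else
    # .backup, .history, etc.: pg_clearxlogtail would reject these, so just compress.
    gzip < "$WAL_PATH" > "$STAGE_DIR/$WAL_NAME.gz"
fi

# The file only appears under its final name once it is complete.
mv "$STAGE_DIR/$WAL_NAME.gz" "$ARCHIVE_DIR/$WAL_NAME.gz"

The mv at the end is what I take Kevin to mean by avoiding partial
files: gzip writes into the staging directory, so anything that dies
mid-pipeline leaves junk there rather than a truncated .gz under a
real archive name.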
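
P.P.S. On the pipeline-exit-status problem, the two approaches I know
of (both bash-isms, I believe, not plain POSIX sh) are "set -o
pipefail", which makes the pipeline as a whole fail if any member
fails, and the PIPESTATUS array, which lets you see which member
failed. Using the same placeholder variables as the sketch above:

# With pipefail, a failure anywhere in the pipeline is visible to the caller:
set -o pipefail
if ! pg_clearxlogtail < "$WAL_PATH" | gzip > "$STAGE_DIR/$WAL_NAME.gz"; then
    echo "archiving pipeline failed for $WAL_NAME" >&2
    exit 1
fi

# Or, without pipefail, inspect each member's status afterwards:
pg_clearxlogtail < "$WAL_PATH" | gzip > "$STAGE_DIR/$WAL_NAME.gz"
if [ "${PIPESTATUS[0]}" -ne 0 ] || [ "${PIPESTATUS[1]}" -ne 0 ]; then
    echo "pg_clearxlogtail or gzip failed for $WAL_NAME" >&2
    exit 1
fi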