Jeff Janes wrote:
> Kevin Grittner wrote:
>> Jeff Janes wrote:
>>> So a server that is completely free of user activity will still
>>> generate an endless stream of WAL files, averaging one file per
>>> max(archive_timeout, checkpoint_timeout). That comes out to one
>>> 16MB file per hour (since it is not possible to set
>>> checkpoint_timeout > 1h), which seems a bit much when absolutely
>>> no user-data changes are occurring.
>>
>> BTW, that's also why I wrote the pg_clearxlogtail utility (source
>> code on pgfoundry). We pipe our archives through that and gzip,
>> which changes this to an endless stream of 16KB files. Those three
>> orders of magnitude can make all the difference. :-)
>
> Thanks. Do you put the clearxlogtail and the gzip into the
> archive_command, or just do a simple copy into the archive and then
> have a cron job do the processing in the archives later? I'm not
> really sure what the failure modes are for having pipelines built
> into the archive_command.

We pipe the file into pg_clearxlogtail | gzip and write the output to
the archive directory (with a ".gz" suffix), rather than using cp and
processing it later. Well, actually, we pipe it to a directory on the
same mount point as the archive directory and mv it into place, as
part of our scheme to avoid problems with partial files.

-Kevin
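
P.S. For anyone wanting to try something similar, here is a rough
sketch of the sort of wrapper script we mean. The script name, the
directory paths, and the layout are purely illustrative, not our
actual configuration; the points that matter are that pg_clearxlogtail
and gzip run inside the archive_command itself, the compressed file is
written to a staging directory on the same filesystem as the archive,
and only the final mv makes it visible under its real name.

  #!/bin/sh
  # Hypothetical wrapper, called from postgresql.conf as:
  #   archive_command = '/usr/local/bin/archive_wal.sh "%p" "%f"'
  set -e
  WAL_PATH="$1"                 # %p: segment path, relative to the data directory
  WAL_FILE="$2"                 # %f: segment file name
  STAGE_DIR=/archive/staging    # same mount point as ARCHIVE_DIR, so mv is atomic
  ARCHIVE_DIR=/archive/wal

  # Never overwrite a segment that has already been archived;
  # a nonzero exit makes PostgreSQL retry (or report) rather than lose data.
  test ! -f "$ARCHIVE_DIR/$WAL_FILE.gz" || exit 1

  # Zero the unused tail of the segment, then compress into the staging area.
  # Note: in plain sh the pipeline's exit status is gzip's; if you want a
  # pg_clearxlogtail failure to abort the command too, use bash with
  # "set -o pipefail".
  pg_clearxlogtail < "$WAL_PATH" | gzip > "$STAGE_DIR/$WAL_FILE.gz"

  # The rename is atomic within one filesystem, so the archive never
  # contains a partially written .gz file.
  mv "$STAGE_DIR/$WAL_FILE.gz" "$ARCHIVE_DIR/$WAL_FILE.gz"

If the script exits nonzero at any point, PostgreSQL keeps the segment
and retries later, so a half-written file in the staging directory is
harmless and simply gets overwritten on the next attempt.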