My question is for Francisco, who replied regarding xz. I was curious what options he used. Thanks.
On Fri, Oct 16, 2015 at 3:14 PM, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 10/16/2015 12:10 PM, anj patnaik wrote:
Thanks. What are the recommended command/options for backup, and how do I
restore?
I found the below online. Let me know if this is better and how to
restore it. Thank you
pg_dump -Fc '<Db-Name>' | xz -3 > dump.xz
Again, why would you compress an already compressed output?
Also online:
http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
http://www.postgresql.org/docs/9.4/interactive/app-pgrestore.html
They step you through the backup and restore process.
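For example (not from the docs themselves, just an illustration; "mydb" and the file path are placeholders), a minimal backup/restore pair looks like:

pg_dump -Fc mydb > /tmp/mydb.dump
pg_restore -d mydb /tmp/mydb.dump

pg_restore -d assumes the target database already exists; the custom format (-Fc) is compressed by default, so no extra gzip or xz step is needed.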
On Fri, Oct 16, 2015 at 4:05 AM, Francisco Olarte
<folarte@xxxxxxxxxxxxxx> wrote:
On Fri, Oct 16, 2015 at 8:27 AM, Guillaume Lelarge
<guillaume@xxxxxxxxxxxx> wrote:
> 2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian.klaver@xxxxxxxxxxx>:
>> On 10/15/2015 01:35 PM, anj patnaik wrote:
...
>>> ./pg_dump -t RECORDER -Fc postgres | gzip > /tmp/dump
>>> Are there any other options for large tables to run faster and occupy
>>> less disk space?
>> Yes, do not double compress. -Fc already compresses the file.
> Right. But I'd say "use custom format but do not compress with pg_dump". Use
> the -Z0 option to disable compression, and use an external multi-threaded
> tool such as pigz or pbzip2 to get faster and better compression.
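(For illustration, the pipeline being suggested might look something like this; pigz is just one choice of parallel compressor, and the output name is a placeholder:

pg_dump -Fc -Z0 '<Db-Name>' | pigz > dump.fc.gz
)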
Actually I would not recommend that, unless you are making a long-term
or offsite copy. Doing it means you need to decompress the dump before
restoring or even testing it (e.g., via pg_restore > /dev/null).
And if you are pressed for disk space, you may corner yourself into a
situation where you do NOT have enough disk space for an
uncompressed dump. Given that you are normally nervous enough when
restoring, for normal operations I think the built-in compression is
better.
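Concretely, with the built-in compression you can test a dump in place with something like:

pg_restore /tmp/mydb.dump > /dev/null

whereas an externally compressed dump has to be run through its decompressor first, and a real restore may need room for an uncompressed copy on disk. (The file name above is just a placeholder.)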
Also, I'm not current with the compressor -Fc uses; I think it is still
gzip, which is not that bad and is normally quite fast. (In fact I do
not use pbzip2, but I did some tests about a year ago and I found
bzip2 was beaten by xz quite easily; that is, for every level of bzip2
there was a level of xz that beat it in BOTH size and time. That was
for my data, YMMV.)
Francisco Olarte.
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx