dealing with file size when archiving databases

I've been backing up my databases by piping pg_dump into gzip and
burning the resulting files to a DVD-R.  Unfortunately, FreeBSD has
problems dealing with very large files (>1GB?) on DVD media.  One of my
compressed database backups is greater than 1GB, and the output of a
gzipped pg_dumpall is approximately 3.5GB.  The processes for creating
the ISO image and burning it to DVD-R finish without any problems, but
the resulting file is unreadable/unusable.
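For reference, each backup currently boils down to piping pg_dump
straight into gzip, one file per database.  In Python it is roughly the
following (the database name and output path here are made-up
placeholders):

import subprocess

# Equivalent of: pg_dump mydb | gzip > /backups/mydb.sql.gz
with open("/backups/mydb.sql.gz", "wb") as out:
    dump = subprocess.Popen(["pg_dump", "mydb"], stdout=subprocess.PIPE)
    subprocess.check_call(["gzip", "-c"], stdin=dump.stdout, stdout=out)
    dump.stdout.close()
    dump.wait()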

My proposed solution is to modify my Python script to:

1. use pg_dump to dump each database's tables individually, including
both the database and table name in the file name;
2. use 'pg_dumpall -g' to dump the global information; and
3. burn the backup directories, files and a recovery script to DVD-R.

The script will pipe pg_dump into gzip to compress the files.
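Roughly, the dump portion would look like the sketch below (untested;
the databases/tables dictionary, output directory and file naming are
placeholders, and the real script would presumably pull the table names
from the system catalogs rather than hard-coding them):

import gzip
import subprocess

BACKUP_DIR = "/backups/pgdump"      # placeholder output directory
DATABASES = {                       # placeholder: {database: [tables]}
    "sales": ["orders", "customers"],
    "inventory": ["parts"],
}

def gzip_command(cmd, outfile):
    """Run a dump command and gzip its stdout into outfile."""
    with gzip.open(outfile, "wb") as out:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        for chunk in iter(lambda: proc.stdout.read(65536), b""):
            out.write(chunk)
        proc.wait()

# One file per table, with database and table name in the file name.
for db, tables in DATABASES.items():
    for table in tables:
        gzip_command(["pg_dump", "--table", table, db],
                     "%s/%s.%s.sql.gz" % (BACKUP_DIR, db, table))

# Global information (roles, etc.) via pg_dumpall -g.
gzip_command(["pg_dumpall", "-g"], "%s/globals.sql.gz" % BACKUP_DIR)

Burning the resulting directory, together with a small recovery script,
to DVD-R would work the same way as it does now.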

My questions are:

1. Will 'pg_dumpall -g' dump everything not dumped by pg_dump?  Will I 
be missing anything?
2. Does anyone foresee any problems with the solution above?

Thanks,

Andrew Gould
