
Re: dealing with file size when archiving databases

On Monday 20 June 2005 09:53 pm, Tom Lane wrote:
> "Andrew L. Gould" <algould@xxxxxxxxxxx> writes:
> > I've been backing up my databases by piping pg_dump into gzip and
> > burning the resulting files to a DVD-R.  Unfortunately, FreeBSD has
> > problems dealing with very large files (>1GB?) on DVD media.  One
> > of my compressed database backups is greater than 1GB; and the
> > results of a gzipped pg_dumpall is approximately 3.5GB.  The
> > processes for creating the iso image and burning the image to DVD-R
> > finish without any problems; but the resulting file is
> > unreadable/unusable.
>
> Yech.  However, I think you are reinventing the wheel in your
> proposed solution.  Why not just use split(1) to divide the output of
> pg_dump or pg_dumpall into slices that the DVD software won't choke
> on?  See notes at
> http://developer.postgresql.org/docs/postgres/backup.html#BACKUP-DUMP-LARGE
>
> 			regards, tom lane

Thanks, Tom!  The split option actually fixes the problem, whereas my 
"solution" only delays it until a single table gets too large.  Of 
course, at that point, I should probably use something other than 
DVDs.

Andrew Gould
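
[For reference, the split-based approach Tom points to might look roughly
like the following.  This is only a sketch: the database name "mydb", the
output prefix, and the 500 MB chunk size are illustrative placeholders, not
values from the thread; pick whatever chunk size the DVD tooling handles
comfortably.]

    # Dump, compress, and split the stream into 500 MB pieces
    # (split reads from stdin when given "-"; pieces get .aa, .ab, ... suffixes).
    pg_dump mydb | gzip | split -b 500m - mydb.sql.gz.

    # Restore by concatenating the pieces back in order
    # (assumes the default plain-SQL dump format, so psql can replay it).
    cat mydb.sql.gz.* | gunzip | psql mydb
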

