
Re: dealing with file size when archiving databases

On Jun 20, 2005, at 10:28 PM, Andrew L. Gould wrote:

> compressed database backups is greater than 1GB; and the result of a
> gzipped pg_dumpall is approximately 3.5GB.  The processes for creating
> the ISO image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.

I ran into this as well. Apparently FreeBSD will not read a large file on an ISO file system even though on a standard UFS or UFS2 fs it will read files larger than you can make :-).

What I used to do was "split -b 1024m my.dump my.dump-split-" to create multiple smaller files and burn those to the DVD. To restore, you run "cat my.dump-split-?? | pg_restore" with appropriate options to pg_restore.
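
For example, a minimal sketch of the full round trip, assuming a custom-format pg_dump file named my.dump and a database named mydb (both names are just placeholders):

    # dump in custom format, then split into pieces that fit on a DVD
    pg_dump -Fc mydb > my.dump
    split -b 1024m my.dump my.dump-split-

    # later: reassemble the pieces on stdin and pipe them to pg_restore
    cat my.dump-split-?? | pg_restore -d mydb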

My ultimate fix was to start burning and reading the DVDs on my MacOS desktop instead, which can read and write these large files just fine :-)


Vivek Khera, Ph.D.
+1-301-869-4449 x806


