We just tar/gzip the entire data directory. It takes all of 20 seconds, and we've successfully restored from that as well. The machine you are restoring to *must* be running the same version of PostgreSQL you backed up from.
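For reference, a minimal sketch of that kind of cold backup, assuming the default data directory /var/lib/pgsql/data (adjust paths to your install). The server has to be stopped first; a raw tar of a running cluster is not guaranteed to be consistent without the online-backup procedure described below:

    # Stop the server so the on-disk files are consistent
    pg_ctl -D /var/lib/pgsql/data stop -m fast

    # Archive the whole cluster directory
    tar czf /backup/pgdata.tar.gz -C /var/lib/pgsql data

    # Restart the server
    pg_ctl -D /var/lib/pgsql/data start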
Matthew Engel
Jeff Davis <pgsql@xxxxxxxxxxx> wrote on 10/16/2006 02:35 PM:
On Mon, 2006-10-16 at 16:29 +0530, Gandalf wrote:
> I am looking for *fast* backup/restore tools for Postgres. I've
> found the currently used tools, pg_dump and pg_restore, to be very
> slow on large databases (~30-40 GB). A restore takes on the order of
> 6 hrs on a Linux machine with 4 processors and 32 GB RAM, which is
> not acceptable.
>
> I am using "pg_dump -Fc" to take the backup. I understand binary
> compression adds to the time, but other databases (like DB2) take
> much less time on similar data sizes.
>
> Are there faster tools available?
>
http://www.postgresql.org/docs/8.1/static/backup-online.html
With that backup method, you can back up with normal filesystem-level
tools (e.g. tar) while the database is online.
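A minimal sketch of such an online base backup, assuming the data
directory is /var/lib/pgsql/data (the label and paths are illustrative):

    # Tell the server a base backup is starting
    psql -c "SELECT pg_start_backup('nightly');"

    # Copy the cluster files while the database stays online
    tar czf /backup/base.tar.gz -C /var/lib/pgsql data

    # Mark the backup as finished
    psql -c "SELECT pg_stop_backup();"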
Make sure to back up the remaining active WAL segments as well; they
are necessary for the backup to be complete. This step will be done
automatically in 8.2.
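WAL archiving itself is driven by archive_command in postgresql.conf;
a minimal sketch, assuming /backup/wal exists and is writable by the
postgres user:

    # postgresql.conf
    # %p expands to the path of the segment, %f to its file name
    archive_command = 'cp %p /backup/wal/%f'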
If your filesystem has snapshot capability, you have nothing to worry
about: just snapshot the fs and back up the data directory plus any
WAL segments and tablespaces.
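For example, with LVM (the volume group vg0, logical volume pgdata,
and mount point below are assumptions; substitute your own layout):

    # Create a snapshot of the volume holding the data directory
    lvcreate --snapshot --size 1G --name pgsnap /dev/vg0/pgdata

    # Mount the frozen image and archive it at leisure
    mount /dev/vg0/pgsnap /mnt/pgsnap
    tar czf /backup/pgdata-snap.tar.gz -C /mnt/pgsnap .

    # Clean up the snapshot
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap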
Regards,
Jeff Davis