Hi!
I have a database "Customer" with about 60 GB of data.
I know I can back it up and restore it, but that seems too slow.
Is there any other option to duplicate this database as "CustomerTest" as fast as possible (even faster than backup/restore) - better if in one operation (something like "copy database A to B")?
I would like to run this every day, overnight, with minimal impact, to prepare a test environment based on production data.
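For illustration, the nearest one-shot operation I know of is template cloning, sketched below. It is a file-level copy, so it should beat dump/restore, but as far as I can tell it still rewrites the full 60 GB and needs the source free of connections:

    # A sketch of the one-shot copy I have in mind. TEMPLATE cloning is a
    # file-level copy, so it avoids the dump/restore overhead, but it still
    # rewrites all ~60 GB and fails if anyone is connected to "Customer".
    psql -c 'DROP DATABASE IF EXISTS "CustomerTest";'
    psql -c 'CREATE DATABASE "CustomerTest" TEMPLATE "Customer";'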
Hmm, I don't know exactly how to do it, but on Linux you could put the "Customer" database in a tablespace that resides on a BTRFS filesystem. BTRFS can take a quick "snapshot" of the filesystem, and you can then set things up for incremental backup, as discussed here: https://btrfs.wiki.kernel.org/index.php/Incremental_Backup . From what I have read, though, BTRFS is a performance dog compared to other filesystems.
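In rough shell terms (all paths and snapshot names below are made up), the snapshot-plus-incremental-backup idea looks something like this. Note that snapshotting a lone tablespace is not consistent with the rest of the cluster, so snapshotting the whole data directory is the safer variant:

    # Take a read-only snapshot of the subvolume holding the data.
    btrfs subvolume snapshot -r /data/pg /data/pg-snap-new
    # Ship only the blocks changed since the previous snapshot.
    btrfs send -p /data/pg-snap-old /data/pg-snap-new | btrfs receive /backup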
An interesting comparison of PostgreSQL on various filesystems: http://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrfs-and-zfs
Another on BTRFS + PostgreSQL: http://www.cybertec.at/2015/01/forking-databases-the-art-of-copying-without-copying/
<quote from above>
...
So we managed to fork a 15 GB database in 6 seconds, with only a small hiccup in performance. We are ready to start up the forked database.
...
</quote>
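In shell terms, that fork boils down to roughly the following (the subvolume paths and the port are my own assumptions, not from the article):

    # Atomic snapshot of the whole data directory, so the copy is
    # crash-consistent; PostgreSQL's normal recovery makes it usable.
    btrfs subvolume snapshot /data/pg_main /data/pg_fork
    rm -f /data/pg_fork/postmaster.pid            # drop the stale pid file
    pg_ctl -D /data/pg_fork -o "-p 5433" start    # second instance on its own port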
I got a number of hits searching for "postgresql btrfs" on Google.
Thanks,
--
Kind regards,
Edson Carlos Ericksson Richter
How many surrealists does it take to screw in a lightbulb? One to hold the griffon and one to fill the bathtub with brightly colored LEDs.
Maranatha! <><
John McKown