On 5/27/15 3:39 PM, Steve Atkins wrote:
On May 27, 2015, at 1:24 PM, Wes Vaske (wvaske) <wvaske@xxxxxxxxxx> wrote:
Hi,
I’m running performance tests against a PostgreSQL database (9.4) with various hardware configurations and a couple of different benchmarks (TPC-C & TPC-H).
I’m currently using pg_dump and pg_restore to refresh my dataset between runs but this process seems slower than it could be.
Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as a backup & restore method?
If not, is there another way to restore a dataset more quickly? The database is dedicated to the test dataset so trashing & rebuilding the entire application/OS/anything is no issue for me—there’s no data for me to lose.
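[For reference, a plain tar of the data directory does work as a restore method, but only if the copy is taken while the server is stopped (a tar of a running cluster is not consistent). A minimal sketch, assuming the data directory is /var/lib/pgsql/9.4/data and the service is named postgresql-9.4; both are placeholders for whatever the actual install uses:

    # one-time: save a baseline of the freshly loaded cluster, server stopped
    sudo service postgresql-9.4 stop
    sudo tar -C /var/lib/pgsql/9.4 -czf /backup/pgdata-baseline.tar.gz data
    sudo service postgresql-9.4 start

    # before each run: throw away the dirty data directory and unpack the baseline
    sudo service postgresql-9.4 stop
    sudo rm -rf /var/lib/pgsql/9.4/data
    sudo tar -C /var/lib/pgsql/9.4 -xzf /backup/pgdata-baseline.tar.gz
    sudo service postgresql-9.4 start
]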
Dropping the database and recreating it from a template database with "create database foo template foo_template" is about as fast as a file copy, and much faster than pg_restore tends to be.
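[A minimal sketch of that workflow, with tpcc and tpcc_template as placeholder database names:

    # one-time: keep a pristine copy of the loaded benchmark database
    psql -d postgres -c "CREATE DATABASE tpcc_template TEMPLATE tpcc;"

    # before each run: drop the dirty copy and clone the template again
    # (both databases must have no other active connections)
    psql -d postgres -c "DROP DATABASE tpcc;"
    psql -d postgres -c "CREATE DATABASE tpcc TEMPLATE tpcc_template;"
]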
Another possibility is filesystem snapshots, which could be even faster
than createdb --template.
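[A minimal sketch of the snapshot idea, assuming purely for illustration that the data directory lives on a ZFS dataset named tank/pgdata; LVM or btrfs snapshots work along the same lines:

    # one-time: snapshot the clean cluster, server stopped for a consistent image
    sudo service postgresql-9.4 stop
    sudo zfs snapshot tank/pgdata@baseline
    sudo service postgresql-9.4 start

    # before each run: roll the dataset back to the clean snapshot
    sudo service postgresql-9.4 stop
    sudo zfs rollback tank/pgdata@baseline
    sudo service postgresql-9.4 start
]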
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com