List,
I need to copy data from a production PostgreSQL server (~1 TB) to external storage (say, a USB hard drive) and set up a backup server with this data directory.
What is the simplest method to achieve this?
1. Is an SQL dump (pg_dump) an option on a production server? Will it affect performance and possibly slow down the production server? (This server has high IOPS.) Will pg_dump handle this size (1.2 TB)? Any bottlenecks?
Whether or not there will be bottlenecks depends on how busy (CPU and disk load) the current server is.
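If load is a concern, one common mitigation (my suggestion, not anything pg_dump requires) is to run the dump at reduced CPU and I/O priority on Linux; the paths and database name below are illustrative:

    # Run the dump with lowest CPU priority and idle I/O scheduling class (Linux).
    nice -n 19 ionice -c3 pg_dump --format=d --file=/media/mydb_backup/mydb.dump mydb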
2. Is copying the data directory from the production server to external storage, and then replacing the data directory of a backup server running the same PostgreSQL version with that copy, a viable option?
# cp -r ./data /media/mydb_backup (Does this affect production server performance, due to the copy command's overhead?)
OR is configuring WAL replication to a standby the right method to achieve this? The goal is to take the database backup outside the datacenter, and our DC policy won't allow us to establish a network connection from the DC to a remote location for WAL replication.
But you say you can't establish a network connection outside the DC.
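Note that a plain cp of a running cluster's data directory is generally not a consistent copy unless the server is stopped first. If a physical copy is what you want, pg_basebackup can write a consistent copy of the data directory to locally mounted storage over a local connection, with no streaming to any remote site. A minimal sketch, assuming the USB drive is mounted at /media/mydb_backup and local replication connections are allowed in pg_hba.conf:

    # Run as the postgres OS user; writes a consistent base backup locally.
    pg_basebackup --pgdata=/media/mydb_backup/data \
        --format=plain --wal-method=stream \
        --checkpoint=spread --progress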
If you're unsure what Linux distro & version and PostgreSQL version you'll be restoring the database to, then the solution is:
# The database to dump and the number of parallel dump jobs.
DB=the_database_you_want_to_backup
THREADS=<some_number>

# Save the server config files alongside the dump.
cd $PGDATA
cp -v pg_hba.conf postgresql.conf /media/mydb_backup

cd /media/mydb_backup
# Roles and tablespaces are not included in pg_dump output.
pg_dumpall --globals-only > globals.sql
# Directory format allows parallel jobs; it requires --file (a directory to create).
pg_dump --format=d --file=${DB}.dump --verbose --jobs=$THREADS $DB &> ${DB}.log
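For completeness, a sketch of the matching restore on the new server (assuming the ${DB}.dump directory produced above and a freshly initialized cluster; names are illustrative):

    # Restore roles and tablespaces first, then the database, in parallel.
    psql --file=globals.sql postgres
    createdb $DB
    pg_restore --dbname=$DB --jobs=$THREADS --verbose ${DB}.dump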
If you're 100% positive that the system you might someday restore to runs exactly the same distro & version and PostgreSQL major version, then I'd use pgBackRest.
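A minimal pgBackRest sketch, assuming a stanza named main, the repository on the USB drive, and an illustrative data directory path:

    # /etc/pgbackrest/pgbackrest.conf
    [global]
    repo1-path=/media/mydb_backup/pgbackrest

    [main]
    pg1-path=/var/lib/postgresql/data

    # postgresql.conf must also enable WAL archiving into the repository:
    #   archive_mode = on
    #   archive_command = 'pgbackrest --stanza=main archive-push %p'

    # Then, as the postgres user:
    pgbackrest --stanza=main stanza-create
    pgbackrest --stanza=main --type=full backup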
--
Born in Arizona, moved to Babylonia.