Re: Postgres storage migration

In --format=d mode, --file= is the directory where the data files are stored, not a regular file.

This works for me:
pg_dump -j${Threads} -Z${ZLvl} -v -C -Fd --file=${BackupDir}/$DB $DB 2> ${DB}_pgdump.log
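For the restore side, a matching parallel pg_restore (a rough sketch, assuming the same ${Threads} and ${BackupDir}/$DB variables, and using -C so the target database is created from the dump) would look something like:
pg_restore -j${Threads} -v -C -d postgres ${BackupDir}/$DB 2> ${DB}_pgrestore.log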

(Also, why are you dumping only the data if you need to migrate the whole database?)

On Fri, Dec 8, 2023 at 2:05 PM Rajesh Kumar <rajeshkumar.dba09@xxxxxxxxx> wrote:
Pg_dump -a -v -j2 -Z0 -Fd -d dbname -f dumpfilename

On Fri, 8 Dec, 2023, 5:04 PM Ron Johnson, <ronljohnsonjr@xxxxxxxxx> wrote:
On Fri, Dec 8, 2023 at 4:44 AM Rajesh Kumar <rajeshkumar.dba09@xxxxxxxxx> wrote:
Hi

We are using an OpenShift environment and Postgres version 15.2. We want to change storage from Ceph to local storage, so the storage team made local the default storage. Now I have created a new cluster with the same Postgres version and am planning to take a backup from the old cluster to the new cluster. Size is 100 GB, RAM 24 GB, 2 CPUs.

My question is, is there a combination of pg_dump and pg_restore that takes less time to complete the task?

Last time it took more than 8 hours. We were taking a schema-only dump using pg_dumpall, then a data-only backup using pg_dump in directory format.

8 hours is really slow for just a 100 GB database. What exact command are you using?

Paul Smith is right, though: just shut down the instance, copy the files, and start the instance up with the new "-D" location.
(You will need to edit postgresql.conf if it defines file locations.)
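A minimal sketch of that approach (the data-directory paths are placeholders, and a plain self-managed install is assumed; under an OpenShift operator the stop/start steps would go through the operator instead):
pg_ctl stop -D /old/datadir -m fast
rsync -a /old/datadir/ /new/datadir/
pg_ctl start -D /new/datadir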
