Re: Guidance Needed for PostgreSQL Upgrade from 12 to 15

pg_dump and pg_restore only work on a single database, but they contain EVERYTHING that database needs.  Thus, you'll have to do a pg_dump/pg_restore pair for each database.

If tasked with migrating multiple databases, and logical replication was not an option, I would:

On the new server:
1. "pg_dumpall -h $OldServer --globals-only > globals.sql" to get the roles, etc.  You'll need to remove the "postgres" role.
2. "pg_dump -h $OldServer -j$Threads -Fd $DB ..." for each database.
3. psql postgres -af globals.sql
4. "pg_restore -v -j $Threads --exit-on-error -cC -Fd --no-tablespaces -d postgres $DB" for each database.
5. vacuumdb --analyze --jobs=$(nproc) -d $DB

Do all the pg_dump commands and then all the pg_restore commands, or alternate pg_dump/pg_restore pairs, one database at a time.  That's up to you.
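Strung together as alternating pairs, a rough sketch (the host, thread count, database list, and dumps/ directory below are placeholders, not anything prescriptive):

#!/bin/bash
# Sketch of Steps 1-5, alternating dump/restore pairs, run on the new server.
# $OldServer, $Threads, and the DBS list are placeholders for your values.
set -euo pipefail

OldServer=old-pg12.example.com   # placeholder hostname
Threads=4                        # placeholder job count
DBS="appdb1 appdb2"              # placeholder database names
mkdir -p dumps

# Step 1: roles and other globals, minus the built-in postgres role.
pg_dumpall -h "$OldServer" --globals-only > globals.sql
grep -Ev '^(CREATE|ALTER) ROLE postgres' globals.sql > globals_clean.sql

# Step 3: load the globals before any restore needs the roles.
psql postgres -af globals_clean.sql

# Steps 2, 4, 5: one database at a time.
for DB in $DBS; do
    pg_dump -h "$OldServer" -j "$Threads" -Fd -f "dumps/$DB" "$DB"
    pg_restore -v -j "$Threads" --exit-on-error -cC -Fd --no-tablespaces \
        -d postgres "dumps/$DB"
    vacuumdb --analyze --jobs="$(nproc)" -d "$DB"
done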

I would set these config params before each Step 4:
pg_ctl restart -wt9999 -mfast \
        -o "-c hba_file=$PGDATA/pg_hba_maintmode.conf" \
        -o "-c fsync=off" \
        -o "-c log_statement=none" \
        -o "-c log_temp_files=100kB" \
        -o "-c log_checkpoints=on" \
        -o "-c log_min_duration_statement=120000" \
        -o "-c shared_buffers=${SharedBuffs}GB" \
        -o "-c maintenance_work_mem=${MaintMem}GB" \
        -o "-c synchronous_commit=off" \
        -o "-c archive_mode=off" \
        -o "-c full_page_writes=off" \
        -o "-c checkpoint_timeout=${CheckPoint}min" \
        -o "-c max_wal_size=${MaxWalSize}GB" \
        -o "-c wal_level=minimal" \
        -o "-c max_wal_senders=0" \
        -o "-c wal_buffers=${WalBuffs}MB" \
        -o "-c autovacuum=off" 

And then this after Step 5:
pg_ctl -wt9999 stop -mfast && pg_ctl -wt9999 start
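The pg_hba_maintmode.conf referenced above would be a locked-down HBA file so nothing but a local superuser can connect while the restore runs; a minimal sketch of that assumption:

# Minimal sketch: restrict access to the local superuser during maintenance.
cat > "$PGDATA/pg_hba_maintmode.conf" <<'EOF'
# TYPE  DATABASE  USER      METHOD
local   all       postgres  peer
EOF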

Be careful with what you set ${SharedBuffs} and ${MaintMem} to: with lots of threads, it's easy to run out of memory, and then the OOM killer will kill the pg_restore.
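As a rough back-of-the-envelope check before the restart (my assumption: each restore job can use up to maintenance_work_mem on top of shared_buffers, with headroom left for the OS):

# Rough sanity check; the 75% headroom figure is just a guess, not a rule.
TotalGB=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
SharedBuffs=8    # GB, placeholder
MaintMem=2       # GB per job, placeholder
Threads=4

Planned=$(( SharedBuffs + Threads * MaintMem ))
if [ "$Planned" -gt $(( TotalGB * 3 / 4 )) ]; then
    echo "WARNING: planning ${Planned}GB against ${TotalGB}GB of RAM; OOM risk" >&2
fi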

On Thu, Nov 21, 2024 at 9:55 PM Motog Plus <mplus7535@xxxxxxxxx> wrote:
Hi Ron,

Thanks for your response.
I can't use logical replication because of primary key issues.
I have multiple DBs, each with multiple schemas, and then multiple tables, functions, etc.
Will I be able to copy the whole cluster, including data, roles, and globals, using pg_dump?
Also, while restoring, can I restore everything, with the same structure, in a single go using pg_restore?


On Fri, Nov 22, 2024, 00:16 Ron Johnson <ronljohnsonjr@xxxxxxxxx> wrote:
pg_dumpall uses a single thread to generate one big .sql file.  If your database(s) are small enough that generating a .sql file (even a compressed one), copying it, and then importing it is Good Enough, then that's fine.

Otherwise, "logical replication" or "multithreaded pg_dump" is what you want.
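For illustration (server name, job count, and paths are placeholders), the two approaches look something like:

# Single-threaded: one big (optionally compressed) SQL file, restored with psql.
pg_dumpall -h old-server | gzip > cluster.sql.gz

# Multithreaded: directory-format dump per database, restored with pg_restore -j.
pg_dump -h old-server -Fd -j 4 -f /backups/mydb.dir mydb
pg_restore -j 4 -d postgres -C /backups/mydb.dir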

On Thu, Nov 21, 2024 at 1:17 PM Motog Plus <mplus7535@xxxxxxxxx> wrote:
Dear Team,
Thank you for your valuable inputs on the PostgreSQL upgrade.
Given the challenges encountered with pg_upgrade in a Kubernetes environment, we're considering a more traditional approach: take a backup with pg_dumpall and then restore the data using the psql utility.
Can you please advise if this approach will be fine, or if you see any issues with it?

A high level overview of the steps:

1. Backup:
   * Connect to the existing PostgreSQL 12 pod.
   * Execute pg_dumpall to create a complete database dump.

 2. New Deployment:
   * Create a new PostgreSQL 16 pod.
   I think there is no need to use initdb as it will be auto-initialized.

 3. Restore:
   * Transfer the backup file to the new pod.
   * Use the psql utility to restore the database from the dump (see the command sketch after this list).

 4. Verification:
   * Thoroughly test the restored database to ensure data integrity and functionality.

 5. Cutover:
   * Once verification is complete, switch over traffic to the new PostgreSQL 16 pod.
   * Delete the old PostgreSQL 12 pod.
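
Something like this is what I have in mind for the backup and restore pieces (pod names, paths, and the -U postgres user below are just placeholders):

# 1. Backup: dump the whole cluster from the PostgreSQL 12 pod.
kubectl exec pg12-pod -- pg_dumpall -U postgres -f /tmp/cluster.sql

# 3. Transfer the dump to the new PostgreSQL 16 pod and restore with psql.
kubectl cp pg12-pod:/tmp/cluster.sql ./cluster.sql
kubectl cp ./cluster.sql pg16-pod:/tmp/cluster.sql
kubectl exec pg16-pod -- psql -U postgres -f /tmp/cluster.sql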

Best Regards,
Ramzy

On Wed, Nov 20, 2024, 02:47 Scott Ribe <scott_ribe@xxxxxxxxxxxxxxxx> wrote:
> On Nov 19, 2024, at 1:40 PM, Kris Deugau <kdeugau@xxxxxxxxx> wrote:
>
> I stand corrected.  I hadn't read the docs on pg_upgrade for quite a while, but after reading the last section in https://www.postgresql.org/docs/current/pgupgrade.html:
>
> "If you did not start the new cluster, the old cluster was unmodified except that, when linking started, a .old suffix was appended to $PGDATA/global/pg_control. To reuse the old cluster, remove the .old suffix from $PGDATA/global/pg_control; you can then restart the old cluster."
>
> I see what you mean.
>
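Concretely, undoing that rename is just putting the control file back, e.g.:

# Restore the control file name so the old cluster can be started again.
mv "$PGDATA/global/pg_control.old" "$PGDATA/global/pg_control"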

There's nothing wrong per se about taking the snapshot before; I was just saving the potential time of re-running pg_upgrade. Heck, take a snapshot before *and* after ;-)


--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!
