On 8/13/24 11:47, Durumdara wrote:
> Hello!
>
> We have to upgrade our server. The PG upgrade is impossible because of
> too much data.

I'm not sure I understand. What exactly is the problem with the amount
of data? How much data are you dealing with, and is the problem the time
it takes to copy the data, or do you have a limited amount of disk
space, or what is the issue?

I'd say pg_upgrade (in link mode) is probably orders of magnitude faster
for upgrading large clusters - probably less than a minute even for huge
clusters (multiple TB of data). And it doesn't need more disk space,
because in link mode it does not copy most of the data.

Typically, you create a replica on the new system (or clone the
filesystem volume in some way), then switch over and run pg_upgrade.

pg_dump is pretty expensive - it exports and imports the database, has
to rebuild indexes, ... I'd be surprised if this wasn't more laborious
and time-consuming than the pg_upgrade approach.

> So we will create a new server with PG 16, and we have to migrate the
> databases one by one (from a PG 11 server).
>
> Is it possible?
>
> Does PG16 read PG11 dumps and restore them correctly on the PG16
> server?

Yes. The docs actually say this:

    Because pg_dump is used to transfer data to newer versions of
    PostgreSQL, the output of pg_dump can be expected to load into
    PostgreSQL server versions newer than pg_dump's version.

> Or is it unsupported now?

It's true PG11 is EOL already, but AFAIK dump/restore should still work;
we try to support even unsupported versions. I don't recall if the
policy is ~10 years, but the docs say:

    pg_dump can also dump from PostgreSQL servers older than its own
    version. (Currently, servers back to version 9.2 are supported.)

I'd probably use pg_dump from the target version (PG16), but it should
not matter.

regards

-- 
Tomas Vondra
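
PS: In case it helps, here's roughly what the link-mode upgrade looks
like. The paths below are made-up examples (a Debian-style layout) -
substitute whatever your packaging actually uses:

    # initialize the new PG16 cluster first (as the postgres user)
    /usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/main

    # dry run - checks only; the old server may even be running for this
    /usr/lib/postgresql/16/bin/pg_upgrade \
        --old-bindir=/usr/lib/postgresql/11/bin \
        --new-bindir=/usr/lib/postgresql/16/bin \
        --old-datadir=/var/lib/postgresql/11/main \
        --new-datadir=/var/lib/postgresql/16/main \
        --link --check

    # if that's clean, stop both servers and run the same command
    # again without --check

Note that --link hard-links the data files instead of copying them,
which is why it's fast and needs no extra space - but once you start the
new cluster, the old one is no longer usable.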
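
And if you do end up going the dump/restore route database by database,
I'd run the PG16 client tools against the old server. The host names and
the database name below are placeholders:

    # roles and tablespaces are not part of per-database dumps,
    # so copy those once, first
    pg_dumpall -h old-server --globals-only > globals.sql
    psql -h new-server -f globals.sql postgres

    # then, for each database: dump from PG11, restore into PG16
    pg_dump -h old-server -Fc -f mydb.dump mydb
    createdb -h new-server mydb
    pg_restore -h new-server -d mydb -j 4 mydb.dump

The custom format (-Fc) lets pg_restore run with parallel jobs (-j),
which helps a lot with the index rebuilds mentioned above.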