Re: pg_upgrade --jobs

On 4/7/19 12:05 PM, senor wrote:
> Thank you Adrian. I'm not sure I can provide as much as you'd need for a definite answer, but I'll give you what I have.

> The original scheduled downtime for one installation was 24 hours. After 21 hours the schema-only pg_dump had still not completed, so the cluster was returned to operation.

So this is more than one cluster?

I am assuming the below was repeated at different sites?
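
For what it's worth, a schema-only pg_dump takes only ACCESS SHARE locks, so its runtime can usually be measured against the live cluster ahead of the next window. A minimal sketch; the database name and output path below are just placeholders:

    # Time a schema-only dump against the running cluster.
    # pg_dump holds only ACCESS SHARE locks, so normal reads and
    # writes are not blocked while it runs.
    time pg_dump --schema-only --file=/tmp/schema.sql mydb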

> The amount of data per table varies widely. Some daily tables are 100-200 GB, while the thousands of stats report tables are much smaller; I'm not connected to check now, but I'd guess 1 GB at most. We chose the --link option partly because some servers do not have the disk space to copy, and the time needed to copy 1-2 TB was also going to be an issue.
> The vast majority of activity is current-day inserts and stats reports on that data. All previous days and existing reports are read-only.
> As is all too common, the DB usage grew with no redesign, so it is a single database on a single machine with a single schema.
> I get the impression there may be an option for getting the schema dump while in service, but possibly not in this scenario. Plan B is to drop a lot of tables and deal with imports later.

I take the above to mean that a lot of the tables are cruft, correct?
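
If so, here is a rough sketch for sizing up candidates before dropping anything; the database name is a placeholder and the LIMIT is arbitrary:

    # List the 20 largest ordinary tables with their total on-disk
    # size (heap plus indexes and TOAST).
    psql -d mydb -c "
      SELECT relname,
             pg_size_pretty(pg_total_relation_size(oid)) AS total_size
        FROM pg_class
       WHERE relkind = 'r'
       ORDER BY pg_total_relation_size(oid) DESC
       LIMIT 20;"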


> I appreciate the help.
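
For reference, the kind of invocation under discussion, a hard-link upgrade with parallel jobs, looks roughly like the sketch below; every path and the job count are placeholders for your layout:

    # --link hard-links data files instead of copying them, so little
    # extra disk space is needed; --jobs runs the per-database
    # dump/restore steps in parallel.
    pg_upgrade \
      --old-bindir=/usr/pgsql-old/bin \
      --new-bindir=/usr/pgsql-new/bin \
      --old-datadir=/data/pg-old \
      --new-datadir=/data/pg-new \
      --link --jobs=4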



--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx




