
Re: a back up question


 



On Tue, Dec 5, 2017 at 2:52 PM, Martin Mueller <martinmueller@xxxxxxxxxxxxxxxx> wrote:

Are there rules of thumb for deciding when you can dump a whole database and when you’d be better off dumping groups of tables? I have a database that has around 100 tables, some of them quite large, and right now the data directory is well over 100GB. My hunch is that I should divide and conquer, but I don’t have a clear sense of what counts as “too big” these days. Nor do I have a clear sense of whether the constraints have to do with overall size, the number of tables, or machine memory (my machine has 32GB of memory).

 

Is 10GB a good practical limit to keep in mind?



I'd say the rule of thumb is: if you have to "divide and conquer", you should use a non-pg_dump-based backup solution. Too big is usually measured in units of time, not memory.
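
[For illustration only, a minimal sketch of the two directions mentioned above; the database name "mydb" and the /backups paths are placeholders, not from the thread:]

    # Logical dump of the whole database in custom format:
    pg_dump -Fc -f /backups/mydb.dump mydb

    # Directory-format dump with parallel workers; parallelism mainly
    # helps total dump time, which is usually the real limit:
    pg_dump -Fd -j 4 -f /backups/mydb.dir mydb

    # A non-pg_dump alternative: a file-system-level base backup of the
    # whole cluster (typically combined with WAL archiving for
    # point-in-time recovery):
    pg_basebackup -D /backups/base -Ft -z -P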

Any ability to partition your backups into discrete chunks is going to be very specific to your personal setup.  Restoring such a monster without constraint violations is something I'd be VERY worried about.
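
[Again purely illustrative, showing the kind of per-table partitioning being warned about; table names are hypothetical:]

    # Dump one group of tables:
    pg_dump -Fc -t orders -t order_items -f /backups/orders.dump mydb

    # Restoring a chunk by itself can fail or leave gaps: pg_dump -t makes
    # no attempt to include objects the selected tables depend on, so
    # foreign keys referencing tables dumped in a different chunk may not
    # restore cleanly unless the restore order is managed by hand:
    pg_restore -d mydb /backups/orders.dump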

David J.


