
ANALYZE on partitioned tables vs on individual partitions

Hello Experts,

Our application has a database with a large number of partitioned tables used to store time series data.

It is partitioned by time: new data is populated into the current partition, old partitions eventually get dropped, and the partitions in between are largely static.

I had not realized until recently that autovacuum does not analyze the partitioned tables themselves, only their leaf partitions. I am now wondering whether we should be analyzing the parents manually.

The problem is that they are enormous and take a long time to analyze: I tested one yesterday and it took ~30 minutes, and we have hundreds of them, so regularly analyzing them all may not be practical.
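For reference, this is roughly what I tested (the table name here is just a placeholder). As I understand it, running ANALYZE on the parent gathers the parent-level ("inherited") statistics, but it also re-analyzes every leaf partition, which is presumably part of why it is so slow:

```sql
-- Gathers parent-level ("inherited") statistics for the partitioned
-- table; note this also recurses into each leaf partition.
ANALYZE my_partitioned_table;

-- Parent-level statistics, if present, show up in pg_stats
-- with inherited = true:
SELECT tablename, attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'my_partitioned_table'
  AND inherited;
```

The second query is how I have been checking whether parent-level statistics exist at all.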

My questions are: 

  1. What is the consequence of not having good statistics at the partitioned-table level, if you do have good statistics at the individual-partition level?

  2. Which planner operations rely on partitioned-table-level statistics?

My reason for asking is that I'm trying to understand the tradeoff between the I/O cost of frequent ANALYZE runs and the risk of the planner choosing suboptimal plans.

Thanks in advance, and sorry if this has all been covered before. I could not find anything definitive in the docs, and while I did turn up a few references by googling, none really answered these questions.

Regards
Mike

