On Sat, Dec 7, 2013 at 10:09 AM, desmodemone <desmodemone@xxxxxxxxx> wrote:
> Hi Dave,
> About the number of partitions, I haven't had much trouble with hundreds
> of partitions (e.g. 360, one per day for a year).
> Moreover, you can bypass the trigger overhead by inserting directly into
> the target partition, which also lets you load in parallel without firing
> the trigger for every row. Remember to keep the check constraints enabled.
> In my opinion it's better to aim for fewer rows per partition. What is the
> average row length in bytes? If you ever have to rebuild indexes and a
> partition is too big, maintenance_work_mem may not be enough and the sort
> will spill to disk.
> I'd also consider spreading the partitions across different tablespaces,
> so you can distribute the I/O over different storage types and devices,
> and manage the indexes with different strategies (searches on "historical"
> partitions will likely differ from those on "live" partitions).
> Another strategy is not just to partition, but to shard the data across
> more nodes.

I agree on the number of partitions. I've run a stats db with daily
partitions holding about two years of data, with no real problems due to
the high partition count. Somewhere around 1,000 partitions things start
to get slower.

I'll add that you can use asymmetric partitioning if you tend to run a lot
of fine-grained queries on recent data and big roll-ups on older data:
i.e. partition by month except for the last 30 days, which you partition
by day. Then at the end of the month, roll all the daily partitions into a
monthly partition and drop them.
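
For what it's worth, a minimal sketch of the direct-insert approach
described above, assuming inheritance-based partitioning with CHECK
constraints; the table and column names are made up for illustration:

    -- Hypothetical parent table (names are illustrative only).
    CREATE TABLE measurements (
        recorded_at timestamptz NOT NULL,
        device_id   integer     NOT NULL,
        reading     numeric
    );

    -- One child partition per day, attached via inheritance, with a CHECK
    -- constraint so constraint_exclusion can prune it at query time.
    CREATE TABLE measurements_2013_12_07 (
        CHECK (recorded_at >= '2013-12-07' AND recorded_at < '2013-12-08')
    ) INHERITS (measurements);

    -- Loading directly into the child skips any routing trigger on the
    -- parent; the CHECK constraint still rejects misplaced rows.
    INSERT INTO measurements_2013_12_07 (recorded_at, device_id, reading)
    VALUES ('2013-12-07 10:09:00+00', 42, 3.14);

Because the CHECK constraint is in place, queries against the parent can
still exclude this partition even though the trigger was never involved in
the load.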
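
And a rough sketch of the month-end roll-up, again with hypothetical
names: copy each closed daily partition into a monthly one, then drop the
daily child.

    -- Monthly partition that absorbs last month's daily partitions.
    CREATE TABLE measurements_2013_11 (
        CHECK (recorded_at >= '2013-11-01' AND recorded_at < '2013-12-01')
    ) INHERITS (measurements);

    -- Roll one finished day into the monthly partition, then drop the
    -- daily child; repeat for each day of the closed month.
    BEGIN;
    INSERT INTO measurements_2013_11
        SELECT * FROM measurements_2013_11_30;
    DROP TABLE measurements_2013_11_30;
    COMMIT;

Doing the copy and the drop in one transaction keeps the rows visible the
whole time; after the roll-up, the monthly CHECK constraint takes over for
constraint exclusion.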