Hi Dave,
About the number of partitions: I haven't had many problems with hundreds of partitions (e.g. 360 daily partitions in a year).
Moreover, you can bypass the trigger overhead by inserting directly into the target partition; that also lets you run parallel inserts without firing the trigger for every row. Remember to enable the check constraints, so constraint exclusion still works at query time.
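For instance, a minimal sketch of the direct-insert approach, with hypothetical table and column names (events, ts) and 9.x-style inheritance partitioning:

  -- Parent table; queries go through it
  CREATE TABLE events (
      id      bigserial,
      ts      timestamptz NOT NULL,
      payload text
  );

  -- One daily partition; the CHECK constraint is what lets the
  -- planner exclude this child when the WHERE clause rules it out
  CREATE TABLE events_2013_12_05 (
      CHECK (ts >= '2013-12-05' AND ts < '2013-12-06')
  ) INHERITS (events);

  -- The loader targets the child directly, so no routing trigger fires
  INSERT INTO events_2013_12_05 (ts, payload)
  VALUES ('2013-12-05 10:00:00', 'direct insert, no trigger');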
In my opinion it's better to aim for fewer rows per partition. What is the average row length in bytes? If you ever have to rebuild the indexes and the partition is too big, maintenance_work_mem may not be enough and the sort will spill to disk.
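A quick way to check, sketched with the same hypothetical names as above (the index name is made up too):

  -- Approximate average row length in bytes (table must be ANALYZEd)
  SELECT sum(avg_width) AS avg_row_bytes
  FROM pg_stats
  WHERE schemaname = 'public' AND tablename = 'events_2013_12_05';

  -- Or derive it from the on-disk size and row count
  SELECT pg_relation_size('events_2013_12_05') / greatest(reltuples, 1)
         AS bytes_per_row
  FROM pg_class
  WHERE relname = 'events_2013_12_05' AND relkind = 'r';

  -- Give the rebuild more memory for this session only,
  -- so the sort stays in RAM instead of spilling to disk
  SET maintenance_work_mem = '1GB';
  REINDEX INDEX events_2013_12_05_ts_idx;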
I think you should also consider spreading the partitions across different tablespaces, so you can distribute the I/O over storage of different types and spindle counts (and so on) and manage the indexes with different strategies (it's likely that searches on "historical" partitions will differ from searches on "live" partitions).
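Something along these lines, assuming tablespaces named fast_ssd and archive_sata already exist and point at different storage (both partitions here are hypothetical):

  -- Keep the live partition and its time index on fast storage
  ALTER TABLE events_2013_12_05 SET TABLESPACE fast_ssd;
  CREATE INDEX events_2013_12_05_ts_idx
      ON events_2013_12_05 (ts) TABLESPACE fast_ssd;

  -- Move an older partition to cheaper disks and index it for
  -- whatever the historical queries actually look up
  ALTER TABLE events_2013_01_15 SET TABLESPACE archive_sata;
  CREATE INDEX events_2013_01_15_payload_idx
      ON events_2013_01_15 (payload) TABLESPACE archive_sata;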
Bye
Mat
2013/12/5 Dave Johansen <davejohansen@xxxxxxxxx>
I'm managing a database that is adding about 10-20M records per day to a table, and time is a core part of most queries, so I've been looking into whether I need to start partitioning based on the time column. I've found these general guidelines:
- Don't use more than about 50 partitions ( http://www.postgresql.org/message-id/17147.1271696670@xxxxxxxxxxxxx )
- Use triggers to make the interface easier ( https://wiki.postgresql.org/wiki/Table_partitioning#Trigger-based and http://stackoverflow.com/questions/16049396/postgres-partition-by-week ); a sketch follows this list
- The only data I found fell in line with what you'd expect, i.e. partitioning speeds up selects but slows down inserts/updates ( http://www.if-not-true-then-false.com/2009/performance-testing-between-partitioned-and-non-partitioned-postgresql-tables-part-3/ )
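A sketch of what that trigger-based routing looks like, in the style of the wiki page above, using illustrative monthly partitions of a hypothetical events table with a ts timestamp column:

  -- Routing function: sends each row to the matching monthly child
  CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
  BEGIN
      IF NEW.ts >= '2013-12-01' AND NEW.ts < '2014-01-01' THEN
          INSERT INTO events_2013_12 VALUES (NEW.*);
      ELSIF NEW.ts >= '2013-11-01' AND NEW.ts < '2013-12-01' THEN
          INSERT INTO events_2013_11 VALUES (NEW.*);
      ELSE
          RAISE EXCEPTION 'no partition for timestamp %', NEW.ts;
      END IF;
      RETURN NULL;  -- the row is already in a child; skip the parent
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER events_insert_trigger
      BEFORE INSERT ON events
      FOR EACH ROW EXECUTE PROCEDURE events_insert_router();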
So I was thinking of partitioning by month to keep the number of partitions low, which would mean about 0.5G records in each table. Does that seem like a reasonable number of records per partition? Is there anything else I should consider or be aware of?
Thanks,
Dave