On 25 Oct, 17:36, tfinn...@xxxxxxxxxxxxxxxxxxxxx wrote:

> The design is based on access patterns, i.e. one partition represents a
> group of data along a discrete axis, so the partitions are perfect for
> modeling that. Only the last partition will be used in normal cases. The
> previous partitions only need to exist until the operator deletes them,
> which will be sometime between 1 and 6 weeks.

This has been interesting reading, because I'm working on a system that
takes a more batch-oriented approach to loading data. There I've found
partitions useful both from a performance perspective (my indexes would
otherwise be inconveniently large for the total volume of data) and from
an administrative perspective (it's convenient to control the constraints
for discrete subsets of my data).

However, if all but the most recent data remains relatively stable, why
not maintain your own statistics for each partition or, as someone else
suggested, use the pg_class statistics?

I'd just be interested to hear what the best practices are when tables
get big and the access patterns favour the most recently loaded data
and/or reliably identifiable subsets of the data, as they seem to both
in this case and in my own. The various tuning guides out there have been
very useful, but isn't there a point at which partitioning is inevitable?

Paul

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match
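
[Editorial sketch of the pg_class suggestion above. The parent table name
is hypothetical, and this assumes partitions implemented as child tables
via inheritance, the usual approach in the 8.x series.]

```sql
-- Read the planner's row and page estimates for every child table of a
-- partitioned parent. reltuples and relpages are only as fresh as the
-- last ANALYZE/VACUUM on each child, which is fine here because all but
-- the most recent partition is stable.
SELECT c.relname,
       c.reltuples AS estimated_rows,
       c.relpages  AS pages_on_disk
FROM   pg_class    c
JOIN   pg_inherits i ON i.inhrelid  = c.oid
JOIN   pg_class    p ON p.oid       = i.inhparent
WHERE  p.relname = 'measurements'   -- hypothetical parent table name
ORDER BY c.relname;
```

Because the older partitions change rarely, these catalog figures stay
accurate between batch loads, so only the newest child needs a regular
ANALYZE.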