We have a by-our-standards large table (about 40e6 rows). Since it is the bottleneck in some places, I thought I'd experiment with partitioning. I'm following the instructions here: http://www.postgresql.org/docs/current/static/ddl-partitioning.html

The table holds data about certain objects, each of which has an object number and some number of historical entries (like account activity at a bank, say). The typical usage pattern is: relatively rare inserts that happen in the background via an automated process (meaning I don't care if they take a little longer), and frequent querying, including some where a human is sitting in front of it (i.e. I'd like it to be a lot faster). Our most frequent queries either select "all history for object N" or "most recent item for some subset of objects".

Because object numbers figure so prominently, I thought I'd partition on that. To me, it makes the most sense from a load-balancing perspective to partition on the mod of the object number (for this test, evens vs. odds, but I'm planning to go up to mod 10 or even mod 100). Lower numbers are going to be queried much less often than higher numbers. This scheme also means I never have to add partitions in the future.

I set up my check constraints ((objnum % 2) = 0 and (objnum % 2) = 1 on the relevant tables) and set constraint_exclusion to 'partition' in postgresql.conf. I also set it to 'on' in my psql session. However, when I run an EXPLAIN or EXPLAIN ANALYZE, I still see it checking both partitions.

Is this because the query planner doesn't want to do a mod? Should I go with simple ranges, even though this adds a maintenance task?
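In case it helps, here is a minimal sketch of the kind of setup I'm describing, using the old-style inheritance partitioning from the docs page above. The table and column names (objhistory, entry_ts, details) are made up for the example; the real table has more columns, and I've left out the insert trigger/rule plumbing:

    -- Parent table (illustrative schema)
    CREATE TABLE objhistory (
        objnum   integer     NOT NULL,
        entry_ts timestamptz NOT NULL,
        details  text
    );

    -- Child partitions with mod-based CHECK constraints
    CREATE TABLE objhistory_even (
        CHECK ((objnum % 2) = 0)
    ) INHERITS (objhistory);

    CREATE TABLE objhistory_odd (
        CHECK ((objnum % 2) = 1)
    ) INHERITS (objhistory);

    -- In postgresql.conf:
    --   constraint_exclusion = partition

    -- A typical "all history for object N" query; I expected this
    -- to touch only one partition:
    EXPLAIN ANALYZE SELECT * FROM objhistory WHERE objnum = 12345;

Even with that setting, the plan shows scans on both objhistory_even and objhistory_odd.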