On 10/25/07, tfinneid@xxxxxxxxxxxxxxxxxxxxx <tfinneid@xxxxxxxxxxxxxxxxxxxxx> wrote:
> > Alvaro Herrera <alvherre@xxxxxxxxxxxxxxxxx> writes:
> >> tfinneid@xxxxxxxxxxxxxxxxxxxxx wrote:
> >>> I did a test previously, where I created 1 million partitions (without
> >>> data) and I checked the limits of pg, so I think it should be ok.
> >
> >> Clearly it's not.
> >
> > You couldn't have tested it too much --- even planning a query over so
> > many tables would take forever, and actually executing it would surely
> > have run the system out of locktable space before it even started
> > scanning.
>
> And this is the testing, so you're right....
>
> Its only the select on the root table that fails. Operations on a single
> partitions is no problem.

Not sure I understand exactly what you're saying. Are you selecting
directly from the child table, or from the parent table with
constraint_exclusion turned on?

If you're hitting the child table directly, you aren't actually using
partitioning. It's a wholly independent table at that point.

If you're hitting a single child table through the parent table via
constraint_exclusion, then you are using partitioning, but only hitting
one physical table.

But hitting the parent table with no constraining WHERE clause is a
recipe for disaster. The very reason to use partitioning is so that you
never have to scan through a single giant table.

Anyway, you're heading off into new territory with 55,000 partitions.
What is the average size, in MB, of one of your partitions?

I found in my test that there was a point of diminishing returns after
400 or so partitions, at which point indexes were no longer needed,
because the average query just seq scanned the partitions it needed,
and they were all ~16 or 32 MB.
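
To make the distinction concrete, here's a minimal sketch of the
inheritance-style setup I'm describing. The table and partition names
are made up for illustration; this assumes the usual CHECK-constraint
partitioning scheme:

-- Hypothetical parent table; names here are illustrative only.
CREATE TABLE measurements (
    probe_id  int       NOT NULL,
    reading   float8,
    taken_at  timestamp NOT NULL
);

-- One child per day, with a CHECK constraint the planner can use
-- to exclude it from queries that can't match it.
CREATE TABLE measurements_20071025 (
    CHECK ( taken_at >= '2007-10-25' AND taken_at < '2007-10-26' )
) INHERITS (measurements);

-- Let the planner prune children based on their CHECK constraints.
SET constraint_exclusion = on;

-- Hits one child through the parent: partitioning is doing its job.
SELECT * FROM measurements
 WHERE taken_at >= '2007-10-25' AND taken_at < '2007-10-26';

-- Hits the child directly: just an ordinary standalone table.
SELECT * FROM measurements_20071025;

-- No constraining WHERE clause: the planner must consider every
-- child, which is what falls over with 55,000 of them.
SELECT * FROM measurements;

Running EXPLAIN on each of those will show you which children the
planner actually touches in each case.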