Ravi Krishna <sravikrishna3@xxxxxxxxx> writes:
> So cost wise they both look same, still when i run the sql in a loop
> in large numbers, it takes rougly 1.8 to 2 times more than non
> partitioned table.

If you're testing cases that only involve fetching a single row, the
discrepancy could well be down to extra planning time.  Proving that the
other partitions don't need to be scanned is far from free.

It's also worth realizing that for queries that fetch just one or a few
rows, it's very unlikely that partitioning can beat an unpartitioned
table, period.  Basically, partitioning replaces a run-time search of
the top levels of a large index with a plan-time proof that other
partitions need not be visited.  That is not going to be cheaper, and
could well be a lot more expensive.  The situations where partitioning
is useful boil down to:

1. You have repetitive, stylized requirements such as "every month,
delete all data older than X months" that can be mapped to "drop the
oldest partition" instead of doing an expensive table scan.

2. You can arrange things so that certain partitions are accessed far
more often than others, thus directing most disk traffic to specific
child tables that will remain in RAM cache most of the time.  (In
principle, you could get similar cache-friendly behavior from a
clustered unpartitioned table, but it's usually too hard to ensure that
such a table stays clustered.)

It does not sound like your test case is exercising either of those win
scenarios, and all you're measuring is the overhead of partitioning,
which as I said is substantial.

			regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
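
[Editor's note: a minimal sketch of win scenario 1 above, using PostgreSQL
declarative range partitioning. The table and partition names are
hypothetical, chosen only for illustration.]

```sql
-- Hypothetical range-partitioned table, one partition per month.
CREATE TABLE measurements (
    logged_at  timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (logged_at);

CREATE TABLE measurements_2018_01 PARTITION OF measurements
    FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');
CREATE TABLE measurements_2018_02 PARTITION OF measurements
    FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');

-- Monthly retention: dropping the oldest partition is cheap catalog
-- work, versus a DELETE that must scan the table and later be vacuumed.
DROP TABLE measurements_2018_01;
```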