On Sat, Oct 17, 2009 at 1:02 PM, Vikul Khosla <vkhosla@xxxxxxxxxxxx> wrote:
> Thanks, Greg!
>
> Yes, we do need to query on all 3000 values ... potentially. Considering
> that when we changed the B-Tree indexes to Bitmap indexes in Oracle
> we saw a huge performance boost ... doesn't that suggest that the absence
> of this feature in PG is a constraint?

Maybe, but it's hard to speculate since you haven't provided any data. :-)

Are you running PG on the same hardware you used for Oracle? Have you
tuned postgresql.conf? What is the actual runtime of your query under
Oracle with a btree index, Oracle with a bitmap index, and PostgreSQL
with a btree index?

It's not immediately obvious to me why a bitmap index would be better
for a case with so many distinct values; it seems like the bitmaps would
tend to be sparse. But I'm just hand-waving here, since we have no
actual performance data to look at. Keep in mind that PostgreSQL will
construct an in-memory bitmap from a B-tree index in some situations,
which can be quite fast. That raises the question of what the planner
is deciding to do now - it would be really helpful if you could post
some EXPLAIN ANALYZE results.

> Are there any other clever workarounds for boosting performance of slow
> queries on low-cardinality columns? i.e. avoiding a full table scan?

Here again, if you post the EXPLAIN ANALYZE results from your queries,
it might be possible for folks on this list to offer some more specific
suggestions. If you query mostly on this column, you could try
clustering the table on that column and then re-analyzing (see the
sketch below).

...Robert
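
To make the two suggestions above concrete, here is a minimal SQL sketch.
The table name "events", column name "status_code", index name
"events_status_code_idx", and the literal being searched for are all
placeholders invented for the example; substitute your own object names.
Also note that CLUSTER takes an exclusive lock and rewrites the table,
and the physical ordering is not maintained for later inserts and updates.

    -- Placeholder object names throughout; adjust to your schema.
    -- Show the plan the planner actually chooses, with real row counts
    -- and timings, for one of the slow queries:
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM events
    WHERE status_code = 'ABC';

    -- Physically reorder the table by the btree index on the
    -- low-cardinality column, then refresh the planner statistics:
    CLUSTER events USING events_status_code_idx;
    ANALYZE events;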