On Mon, 1 Jun 2009, Shaul Dar wrote:
Our typical query selects 300 "random" rows (which could be located in different blocks) from the table based on another column+index, and then filters them down to ~50 based on this bit field.
So it seems that you're already using an index to fetch 300 rows from a big table, and then filtering that down to ~50 based on the über-complex stuff.
That's the right way to do it. There isn't really an appropriate place to add another index into this query plan. Filtering 300 rows is peanuts for Postgres.
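To make the shape of that plan concrete, here is a rough sketch, with entirely hypothetical table and column names (say, a table "items" with an indexed "category_id" and an integer "flags" column holding the bit field):

```sql
-- Hypothetical example of the query shape described above.
SELECT *
FROM items
WHERE category_id = 42       -- index scan fetches ~300 rows
  AND (flags & 5) = 5;       -- bitwise filter trims them to ~50
```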
You quite probably won't get any benefit from a bitwise index, unless you can build a multi-column index with the existing indexed column first and the bitwise expression as a second column. However, that sounds like more effort than benefit.
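Such a multi-column index might look like this, reusing the same hypothetical names; PostgreSQL does allow an expression as an index column, as long as it is parenthesised:

```sql
-- Composite index: the already-indexed column first, the bitwise
-- expression second (names and mask value are hypothetical).
CREATE INDEX items_category_flag_idx
    ON items (category_id, (flags & 5));
```

The query would then have to repeat the exact expression `(flags & 5)` for the planner to consider the second column.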
If I have my analysis wrong, perhaps you could post your EXPLAIN ANALYSE results so we can see what you mean.
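That is, run the query prefixed like so and post the full output:

```sql
EXPLAIN ANALYSE
SELECT *
FROM items                   -- your real query here, not this sketch
WHERE category_id = 42
  AND (flags & 5) = 5;
```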
Matthew

--
What goes up must come down. Ask any system administrator.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance