Shaul Dar wrote:
Hi,
I have a column that is a bit array of 16 bits, each bit specifying whether
a certain property, out of 16, is present or not. Our typical query selects
300 "random" rows (which could be located in different blocks) from the
table based on another column+index, and then filters them down to ~50
based on this bit field.
[snip]
> W/o an index this might be overly expensive,
> even as a filter (on the selected 300 rows).
Have you _tried_ just not having an index at all? Since you are only
accessing a relatively small number of rows to start with, even an
infinitely efficient index isn't going to make that much difference.
Combine that with the fact that any extra index will be competing with the
table for cache space, and I'd first see how much difference it makes just
not having it.
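Something like this, perhaps - I'm only guessing at names here (a table
"items", your existing indexed column "grp", and the bit field stored as an
integer column "props"), so adjust to suit your schema:

  -- Time the bit-field test as a plain filter, with no index on it.
  -- (Table/column names are guesses; "props" assumed to be an integer.)
  EXPLAIN ANALYZE
  SELECT *
  FROM   items
  WHERE  grp = 42                -- the existing column+index that finds ~300 rows
  AND    (props & 8) <> 0;       -- the bit-field filter, here checking bit 3

If "props" is actually a bit(16) column rather than an integer, you'd
compare against a bit-string constant instead of 8, but the idea is the
same - look at what EXPLAIN ANALYZE says the filter step costs you.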
Failing that, perhaps have an index on a single bit if there is one you
always/mostly check against.
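A partial index would be one way to do that - again with my guessed names,
and taking "bit 3" (value 8) as the one you usually test:

  -- Partial index covering only the rows where bit 3 is set.
  -- (Hypothetical names; adjust predicate to the bit you actually use.)
  CREATE INDEX items_prop3_idx ON items (grp) WHERE (props & 8) <> 0;

  -- Queries that repeat the same predicate can then use it:
  SELECT *
  FROM   items
  WHERE  grp = 42
  AND    (props & 8) <> 0;

The planner will only consider the partial index when the query's WHERE
clause implies its predicate, so repeat the bit test verbatim.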
The relational way to do this would be one or more property tables
joined to your main table. If the majority of your properties are not
set then this could be faster too.
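Roughly like this - the names, and the "id" primary key on the main table,
are just my guesses:

  -- Hypothetical normalised layout: one row per (item, property) that is set.
  CREATE TABLE item_properties (
      item_id     integer  NOT NULL REFERENCES items (id),
      property_id smallint NOT NULL,          -- 0..15
      PRIMARY KEY (item_id, property_id)
  );

  -- "has property 3" becomes an EXISTS test against the property table:
  SELECT i.*
  FROM   items i
  WHERE  i.grp = 42
  AND    EXISTS (SELECT 1
                 FROM   item_properties p
                 WHERE  p.item_id = i.id
                 AND    p.property_id = 3);

With the primary key on (item_id, property_id) that EXISTS is a single
index probe per candidate row, and unset properties simply have no row at
all.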
--
Richard Huxton
Archonet Ltd