On Thu, Aug 16, 2012 at 2:54 PM, Wells Oliver <wellsoliver@xxxxxxxxx> wrote:
> Hey folks, a question. We have a table that's getting large (6 million rows
> right now, but hey, no end in sight). It's wide-ish, too, 98 columns.
>
> The problem is that each of these columns needs to be searchable quickly at
> an application level, and I'm far too responsible an individual to put 98
> indexes on a table. Wondering what you folks have come across in terms of
> creative solutions that might be native to postgres. I can build something
> that indexes the data and caches it and runs separately from PG, but I
> wanted to exhaust all native options first.

I submit that you're far better off working with the users to see which fields they really need indexed, and especially which combinations of fields would be best served by functional and/or partial indexes.

To start with, you can create indexes willy-nilly if you want, then use the pg_stat_*_indexes views to see which are or are not getting used and prune the unused ones as time goes by. But keep an eye out for long-running queries in your logs and investigate which specialized indexes might help those queries the most. Often a simple partial index on (a, b) WHERE x IS NOT NULL can deliver improvements that bitmap scans combining multiple single-column indexes never could, especially on large data sets.
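
For example, something along these lines (just a sketch; the table, column, and index names are placeholders for illustration):

-- Find indexes that are rarely or never scanned, along with their size;
-- ones whose idx_scan stays at or near zero over a representative
-- workload period are candidates for dropping.
SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
  FROM pg_stat_user_indexes
 ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;

-- A partial composite index tailored to one common query shape,
-- e.g. WHERE a = ? AND b = ? AND x IS NOT NULL:
CREATE INDEX widgets_a_b_x_not_null_idx
    ON widgets (a, b)
    WHERE x IS NOT NULL;

-- A functional (expression) index for case-insensitive lookups on "name":
CREATE INDEX widgets_lower_name_idx
    ON widgets (lower(name));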