Shaul Dar <shauldar@xxxxxxxxx> writes:
> I have a column that is a bit array of 16, each bit specifying whether a
> certain property, out of 16, is present or not. Our typical query selects
> 300 "random" rows (which could be located in different blocks) from the
> table based on another column+index, and then filters them down to ~50
> based on this bit field. Currently we have 16 separate indexes, one built
> on each bit, and on our 25M-row table each index takes about 880MB, for a
> total of 14GB!

Ouch. One possibility is to replace the bit array with an integer array
(k is in the int[] array iff bit k was set in the bit array) and then use
the GiST or GIN indexing capabilities of contrib/intarray. I also seem to
recall having seen a module that provides GiST indexing for bit operations
on plain integers --- have you looked on pgfoundry?

This isn't necessarily better than what you're doing, since btree indexes
are a lot better optimized than GiST/GIN, but it would be worth looking
into.

			regards, tom lane
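
[For illustration, a minimal sketch of the intarray approach described
above. The table and column names (items, category, props) are
hypothetical, and it assumes a PostgreSQL version where intarray can be
installed as an extension:]

    -- contrib/intarray; on 9.1+ this is available via CREATE EXTENSION
    CREATE EXTENSION intarray;

    -- Properties held as int[] instead of bit(16):
    -- k appears in props iff bit k was set in the old bit array
    CREATE TABLE items (
        id       serial PRIMARY KEY,
        category integer,    -- the "other column" used for the first filter
        props    integer[]
    );

    -- One GIN index over the whole array replaces the 16 per-bit btrees
    CREATE INDEX items_props_gin ON items USING gin (props gin__int_ops);

    -- Rows in a category that have both property 3 and property 7 set
    SELECT *
    FROM items
    WHERE category = 42
      AND props @> ARRAY[3, 7];

    -- "Any of these properties" uses the overlap operator instead
    SELECT *
    FROM items
    WHERE category = 42
      AND props && ARRAY[3, 7];

[A single GIN index on the array can then serve arbitrary combinations of
property tests, and the planner can still combine it with the index on the
other column via a bitmap AND.]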