2011/2/11 Vitalii Tymchyshyn <tivv00@xxxxxxxxx>:
>> My idea as well, though it looks ugly and it would be a maintenance
>> head-ache (upgrading the index as new transaction types are added
>> would mean "costly" write locks on the table,
>
> Create new one concurrently.

Concurrently?  Are there any ways to add large indexes without blocking
inserts to the table for the time it takes to create the index?

> Yep. Another option could be to add query rewrite as
>
> select * from (
>   select * from account_transaction where trans_type_id = type1 and
>   account_id = ? order by created desc limit 25
>   union all
>   select * from account_transaction where trans_type_id = type2 and
>   account_id = ? order by created desc limit 25
>   union all
>   ...
>   union all
>   select * from account_transaction where trans_type_id = typeN and
>   account_id = ? order by created desc limit 25
> ) a
> order by created desc limit 25

I actually considered that.  For the test case given, it works very fast.
I'm not sure it would work universally: it scales well when there is an
extreme number of transactions outside the given transaction-type list
(the case we have problems with now), but it wouldn't scale if some user
had an extreme number of transactions within the list.  However, I think
our "extreme amount of transactions" problem is mostly limited to the
transaction types outside the list.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
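For the archives: the "create new one concurrently" suggestion refers to
PostgreSQL's CREATE INDEX CONCURRENTLY, which builds an index without taking
a lock that blocks concurrent writes, at the cost of a slower build (it scans
the table twice) and not being runnable inside a transaction block.  A sketch
of one of the per-type partial indexes discussed above; the index name and
the literal type value are illustrative, not from the thread:

```sql
-- Builds without blocking concurrent INSERT/UPDATE/DELETE on the table.
-- Cannot be run inside a transaction block.  Index name and the
-- trans_type_id value below are hypothetical.
CREATE INDEX CONCURRENTLY account_transaction_type1_idx
    ON account_transaction (account_id, created DESC)
    WHERE trans_type_id = 'type1';
```

The maintenance concern from the thread still applies: one such statement
would be needed per transaction type, but each build avoids the "costly"
write lock.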
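The UNION ALL rewrite quoted above can be sanity-checked outside PostgreSQL.
A minimal sketch using Python's sqlite3 with synthetic data (table and column
names follow the thread; the per-type SELECTs are wrapped in subqueries
because SQLite, unlike PostgreSQL, does not allow ORDER BY/LIMIT directly on
the arms of a compound SELECT):

```python
import sqlite3

# Synthetic data: one account, 200 transactions spread over 3 types,
# strictly increasing "created" timestamps.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE account_transaction (
    id INTEGER PRIMARY KEY, account_id INTEGER,
    trans_type_id INTEGER, created INTEGER)""")
conn.executemany(
    "INSERT INTO account_transaction VALUES (?,?,?,?)",
    [(i, 1, i % 3, 1000 + i) for i in range(200)])

# Direct form: filter on the whole type list, then sort and limit.
direct = conn.execute("""
    SELECT id FROM account_transaction
    WHERE trans_type_id IN (0, 1, 2) AND account_id = 1
    ORDER BY created DESC LIMIT 25""").fetchall()

# Rewritten form from the thread: newest 25 rows per type,
# UNION ALL, then re-sort and re-limit the combined result.
rewritten = conn.execute("""
    SELECT id FROM (
        SELECT * FROM (SELECT * FROM account_transaction
            WHERE trans_type_id = 0 AND account_id = 1
            ORDER BY created DESC LIMIT 25)
        UNION ALL
        SELECT * FROM (SELECT * FROM account_transaction
            WHERE trans_type_id = 1 AND account_id = 1
            ORDER BY created DESC LIMIT 25)
        UNION ALL
        SELECT * FROM (SELECT * FROM account_transaction
            WHERE trans_type_id = 2 AND account_id = 1
            ORDER BY created DESC LIMIT 25)
    ) AS a ORDER BY created DESC LIMIT 25""").fetchall()

# Both forms return the same 25 newest rows.
assert direct == rewritten
assert len(direct) == 25
```

This only checks result equivalence, of course; the performance argument in
the thread (one small ordered index scan per type instead of one large
filtered scan) is specific to the PostgreSQL planner and the indexes present.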