tuanhoanganh <hatuan05@xxxxxxxxx> wrote:

> I have postgresql 9.0.1

Minor releases carry bug and security fixes; see:

http://www.postgresql.org/support/versioning

> 6GB ram
> work_mem = 2097151

I think that has the potential to push you into swapping:

cc=> set work_mem = 2097151;
SET
cc=> show work_mem;
 work_mem
-----------
 2097151kB
(1 row)

That's 2GB, and that much can be allocated, potentially several
times, per connection.

> ->  Index Scan using sym_data_pkey on sym_data d
>       (cost=0.00..637148.72 rows=3129103 width=1403)
>       (actual time=71.989..55643.665 rows=3124631 loops=1)
>     Filter: ((channel_id)::text = 'sale_transaction'::text)

This index scan is going to randomly access all tuples in the
table's heap. That is probably going to be much slower than a
sequential scan. It is apparently choosing this index to avoid a
sort, because of the mis-estimation of the number of rows. Is it
critical that the rows be returned in that order? If not, you
might see much faster performance by leaving off the ORDER BY
clause so that it can use the seqscan.

You could potentially make queries like this much faster by
indexing on channel_id, or by indexing on data_id WHERE
channel_id = 'sale_transaction' (both sketched in the P.S.
below).

You could also set up optimization barriers with clever use of a
CTE or an OFFSET 0 to force it to use a seqscan followed by a
sort, but I would look at the other options first.

-Kevin
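
P.S. To make the indexing suggestion concrete, something like the
sketch below is what I mean. Untested, and I'm guessing that your
ORDER BY is on data_id, based on the pkey scan in your plan:

-- A plain index on the filter column:
CREATE INDEX sym_data_channel_id
    ON sym_data (channel_id);

-- Or a partial index: it only contains rows for the one channel,
-- so it is much smaller, and it can satisfy both the filter and
-- an ORDER BY data_id without a separate sort:
CREATE INDEX sym_data_sale_transaction
    ON sym_data (data_id)
    WHERE channel_id = 'sale_transaction';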
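
And if you do go the optimization barrier route, these are the
two forms I mean (same caveat about the ORDER BY column):

-- A CTE is an optimization fence in 9.0, so the planner is free
-- to use a seqscan for the filter and sort its result afterwards:
WITH d AS (
    SELECT * FROM sym_data
    WHERE channel_id = 'sale_transaction'
)
SELECT * FROM d ORDER BY data_id;

-- OFFSET 0 keeps the subquery from being flattened into the
-- outer query, which has the same effect:
SELECT *
FROM (SELECT * FROM sym_data
      WHERE channel_id = 'sale_transaction'
      OFFSET 0) AS d
ORDER BY data_id;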