On Thu, Dec 19, 2013 at 12:54 PM, Sev Zaslavsky <sevzas@xxxxxxxxx> wrote:
> On 12/19/2013 3:34 PM, Sergey Konoplev wrote:
>> On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky <sevzas@xxxxxxxxx> wrote:
>>> Table rt_h_nbbo contains several hundred million rows. All rows for a
>>> given entry_date are appended to this table in an overnight process
>>> every night - on the order of several million rows per day.
>>
>> Do you perform a regular cleaning of the table with DELETEs, or maybe
>> you use UPDATEs for some other reason?
>
> At this point we're neither deleting nor updating the data once written
> to the db.

Then I can see two possible causes of the problem:

1. The indexed data is too big, and index searches are getting worse day
by day.

Here I would try creating a partial index for a single day and repeating
the EXPLAIN ANALYZE for that day. If it shows a significant improvement,
I would start creating a partial index for each new day before it starts
and dropping old ones once they become obsolete.

2. You are limited by IO.

In that case I would suggest upgrading your storage.

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@xxxxxxxxx
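
For illustration, a minimal sketch of the per-day partial index idea
above. Only rt_h_nbbo and entry_date come from the thread; the symbol
column, the index name, and the literal date are hypothetical
placeholders:

    -- Build a partial index covering a single day's rows
    -- (the indexed column "symbol" is assumed, not from the thread).
    CREATE INDEX rt_h_nbbo_2013_12_19_idx
        ON rt_h_nbbo (symbol)
        WHERE entry_date = '2013-12-19';

    -- Re-run the day's query; its predicate must match the index's
    -- WHERE clause for the planner to consider the partial index.
    EXPLAIN ANALYZE
    SELECT *
    FROM rt_h_nbbo
    WHERE entry_date = '2013-12-19'
      AND symbol = 'ABC';

    -- Drop the index once that day is no longer queried.
    DROP INDEX rt_h_nbbo_2013_12_19_idx;

Each per-day index stays small relative to a whole-table index, which is
what addresses the "indexed data is too big" concern.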