On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky <sevzas@xxxxxxxxx> wrote:
[...]
> Table rt_h_nbbo contains several hundred million rows. All rows for a given
> entry_date are appended to this table in an overnight process every night -
> on the order of several million rows per day.
[...]
> I perceive an inefficiency here and I'd like your input as to how to deal
> with it: The end result of the query is 1631 rows, which is on the order of
> a couple hundred Kb of data. Compare that to the amount of I/O that
> was done: 1634 buffers were loaded, 16Mb per page - that's about 24 Gb of
> data! The query completed in 21 sec. I'd like to be able to physically
> re-organize the data on disk so that the data for a given product_id on an
> entry_date is concentrated on a few pages instead of being scattered like I
> see here.

Do you perform regular cleaning of the table with DELETEs, or maybe you use
UPDATEs for some other reason?

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray.ru@xxxxxxxxx

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
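
[The physical re-organization the original poster asks about is what PostgreSQL's CLUSTER command provides: it rewrites the table in the order of a chosen index, so rows sharing a (product_id, entry_date) key end up on neighboring pages. A minimal sketch, assuming a composite index on those two columns - the index name here is hypothetical, not from the thread:

```sql
-- Hypothetical index covering the access pattern discussed above.
CREATE INDEX rt_h_nbbo_prod_date_idx
    ON rt_h_nbbo (product_id, entry_date);

-- Rewrite the table in index order. Note: CLUSTER takes an
-- ACCESS EXCLUSIVE lock for the duration of the rewrite, and the
-- ordering is not maintained for rows appended later, so on a table
-- loaded nightly it would have to be re-run periodically.
CLUSTER rt_h_nbbo USING rt_h_nbbo_prod_date_idx;

-- Refresh planner statistics after the rewrite.
ANALYZE rt_h_nbbo;
```

Because the nightly load appends one entry_date at a time, an alternative that avoids locking the whole table is to sort each batch by product_id before inserting it, which keeps new data clustered without any rewrite.]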