> INFO:  "repcopy": scanned 3000 of 4652 pages, containing 128964 live rows
> and 0 dead rows; 3000 rows in sample, 199980 estimated total rows
> VACUUM
> controlsmartdb=# select distinct report_status from repcopy ;

According to the vacuum output, there are about 200000 rows in the
"repcopy" table, occupying roughly 40MB. And according to the explain
plan you posted earlier, there's a seq scan for each row - that gives
200000 sequential scans of the table, which is about 8TB of data. Sure,
most of it will be read from the disk cache / shared buffers etc., but
it's still a lot of data to process - that's why it takes so long.

I'd recommend creating an index on the (dm_user, dm_ip) columns, but it
depends on how many different values those columns contain (the more,
the better).

To give better recommendations, we need the following information:

1) the structure of the "repcopy" table (column data types, indexes)

2) the data distribution (how many different values there are in those
   columns)

3) what the system does while the query runs (use 'top' or 'dstat' to
   get iowait / CPU / disk / memory stats etc.)

regards
Tomas
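
A minimal sketch of the suggested index, assuming the dm_user and dm_ip
columns mentioned above (the index name is only illustrative):

    CREATE INDEX repcopy_user_ip_idx ON repcopy (dm_user, dm_ip);
    -- refresh planner statistics so the new index gets considered
    ANALYZE repcopy;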
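
For points (1) and (2), something along these lines from psql should be
enough (the distinct-count query is a sketch; adjust the column names if
they differ):

    \d repcopy

    SELECT count(DISTINCT dm_user) AS distinct_users,
           count(DISTINCT dm_ip)   AS distinct_ips,
           count(*)                AS total_rows
      FROM repcopy;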