On 08/26/2015 10:26 PM, Tory M Blue wrote:
the table is 90 GB without indexes, 285 GB with indexes and bloat. The row count query isn't actually completing. At 125 million rows over 13 months, this table is probably close to 600 million rows.
You don't need to do SELECT COUNT(*) if you only need an approximate number. You can look at pg_class.reltuples:

    SELECT reltuples FROM pg_class WHERE relname = 'impressions';

That should be a sufficiently accurate estimate.
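If the table hasn't been analyzed recently, reltuples can be stale. A common refinement (sketched here, assuming the table name 'impressions' from above and the default 8 kB block size) is to scale the planner's rows-per-page ratio by the table's current size:

```sql
-- Scale reltuples by the actual on-disk size, which tracks growth
-- between ANALYZE runs better than reltuples alone.
SELECT (reltuples / relpages) * (pg_relation_size('impressions') / 8192) AS estimated_rows
FROM pg_class
WHERE relname = 'impressions'
  AND relpages > 0;  -- guard against division by zero on never-analyzed tables
```

Both variants are estimates; run ANALYZE on the table first if you want them to be close.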
The above is from when it had finished copying the table and started on the index. As I said, I'm running out of storage because the index build is writing a serious amount of data to the filesystem. I'll have to kill it, massage the data a bit, and increase maintenance_work_mem to use some of my 256 GB of RAM to get through this. Right now the 100%-CPU process building this index is only using 3.5 GB, and has been for the last 15 hours.
Please post details on the configuration (shared_buffers, work_mem, maintenance_work_mem and such).
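One way to pull exactly those settings, with their units, is to query pg_settings (a sketch; add any other GUCs you care about to the IN list):

```sql
-- Show the memory-related settings relevant to a large index build.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');
```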
BTW, while the CREATE INDEX process is reporting 3.5 GB, it most likely wrote a lot of data into on-disk chunks when sorting the data. So it's actually using the memory through the page cache (i.e. don't increase maintenance_work_mem too much; you don't want to force the data to disk needlessly).
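To confirm that the sort is in fact spilling to disk, one option (a sketch, assuming PostgreSQL 9.2 or later, where these counters exist) is to watch the temporary-file statistics for the database while the build runs:

```sql
-- temp_files / temp_bytes count temporary files written by sorts and
-- hashes that exceeded work_mem / maintenance_work_mem; if these grow
-- during the CREATE INDEX, the sort is spilling to disk.
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();
```

Setting trace_sort = on (and log_temp_files) for the session will also make the server log the external-sort phases of the index build.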
regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance