Re: Massive table (500M rows) update nightmare

> Got an explain analyze of the delete query?

UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = ''

Index Scan using audit_impt_pkey on audit_impt  (cost=0.00..92.63 rows=1 width=608) (actual time=0.081..0.244 rows=10 loops=1)
  Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <= 319400010))
  Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)
Total runtime: 372.141 ms

Hard to tell how reliable these numbers are, because the relevant pages are almost certainly already cached: SELECT queries with the same WHERE clause were run beforehand to test whether the rows actually qualify for the update.
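The BETWEEN-style id predicate above suggests the update is being driven in fixed-size primary-key slices, committing each slice separately so no single transaction touches 500M rows. A minimal sketch of that driver loop, using Python's sqlite3 as a stand-in for the real PostgreSQL table (the schema and data here are made up for illustration):

```python
import sqlite3

# Hypothetical stand-in for mdx_core.audit_impt; the real table is in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE audit_impt ("
    " audit_impt_id INTEGER PRIMARY KEY,"
    " impt_name TEXT,"
    " source_table TEXT)")
conn.executemany(
    "INSERT INTO audit_impt VALUES (?, ?, ?)",
    [(i, f"tbl_{i}", None) for i in range(1, 101)])

def update_in_batches(conn, lo, hi, batch=10):
    """Walk the primary-key range in fixed-size slices, committing per slice
    so each transaction (and its lock/WAL footprint) stays small."""
    for start in range(lo, hi + 1, batch):
        end = min(start + batch - 1, hi)
        conn.execute(
            "UPDATE audit_impt "
            "SET source_table = 'mdx_import.' || impt_name "
            "WHERE audit_impt_id BETWEEN ? AND ? "
            "AND COALESCE(source_table, '') = ''",
            (start, end))
        conn.commit()

update_in_batches(conn, 1, 100)
```

The same loop shape works against PostgreSQL through any driver; only the connection setup and placeholder style change.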

The coalesce may be slowing things down slightly, but it is a necessary evil given that "not yet updated" means either NULL or an empty string.
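If most rows eventually get a non-empty source_table, one way to keep the cost of that filter down is a partial index over only the still-pending rows; PostgreSQL supports exactly this with CREATE INDEX ... WHERE. A small sketch, again using SQLite (which accepts the same partial-index syntax) as a stand-in, with a made-up index name audit_impt_pending:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE audit_impt ("
    " audit_impt_id INTEGER PRIMARY KEY,"
    " impt_name TEXT,"
    " source_table TEXT)")
conn.executemany(
    "INSERT INTO audit_impt VALUES (?, ?, ?)",
    [(1, "a", None),              # pending: NULL
     (2, "b", ""),                # pending: empty string
     (3, "c", "mdx_import.c")])   # already updated

# Partial index covering only rows awaiting the update.  As rows are
# updated they drop out of the index, so it stays small.
conn.execute(
    "CREATE INDEX audit_impt_pending ON audit_impt (audit_impt_id) "
    "WHERE COALESCE(source_table, '') = ''")

pending = conn.execute(
    "SELECT audit_impt_id FROM audit_impt "
    "WHERE COALESCE(source_table, '') = '' "
    "ORDER BY audit_impt_id").fetchall()
```

For the planner to use such an index, the query's WHERE clause must imply the index's WHERE clause, so repeating the coalesce expression verbatim in the UPDATE is the safe pattern.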

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
