Willy-Bas Loos <willybas@xxxxxxxxx> writes:
> So what i don't get is, -if the above is the case- If pg_dump expects to
> find an index, it already knows about its existence. Then why does it need
> to look for it again?

Because what it does is:

    BEGIN ISOLATION LEVEL REPEATABLE READ;   -- run in a single transaction
    SELECT ... FROM pg_class;                -- find out what all the tables are
    LOCK TABLE foo IN ACCESS SHARE MODE;     -- repeat for each table to be dumped

after which it runs around and collects subsidiary data such as which
indexes exist for each table.  But the transaction's view of the catalogs
was frozen at the start of the first SELECT.  So it can still see entries
for an index in pg_class and pg_index even if that index was dropped
between the start of the transaction and the point where pg_dump managed
to lock the index's table.  pg_dump can't tell that the index is no longer
there --- but some of the backend functions it calls can tell, and they
throw errors.

There are various ways this might be rejiggered, but none of them would
entirely remove the risk of failure in the presence of concurrent DDL.
Personally I'd recommend just retrying the pg_dump until it succeeds.

            regards, tom lane
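
For the curious, here is a rough two-session sketch of that race.  The
table t and its index t_i are hypothetical, and the exact error text
depends on the server version:

    -- session A: mimic pg_dump's snapshot
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT oid, relname FROM pg_class WHERE relname = 't';  -- snapshot frozen here

    -- session B: concurrent DDL slips in
    DROP INDEX t_i;

    -- session A: locking the table now doesn't help; the index is already gone
    LOCK TABLE t IN ACCESS SHARE MODE;
    -- the frozen snapshot still shows a pg_index row for the dropped index
    SELECT indexrelid FROM pg_index WHERE indrelid = 't'::regclass;
    -- but a function that consults the current catalog state may fail, e.g.
    SELECT pg_get_indexdef(indexrelid) FROM pg_index WHERE indrelid = 't'::regclass;
    -- ERROR:  cache lookup failed for index ...
    COMMIT;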