* Tom Lane (tgl@xxxxxxxxxxxxx) wrote:
> Stephen Frost <sfrost@xxxxxxxxxxx> writes:
> > The issue is that pg_dump wants to lock the table against changes,
> > which is really to prevent the table from changing between "we got
> > the definition of the table" and "pulling the records out of the
> > table." It's not immediately obvious, to me at least, that there's
> > really any need to lock the tables when doing a schema-only dump.
> > Accesses to the catalogs should be consistent across the lifetime of
> > the transaction which pg_dump is operating in, and a schema-only dump
> > isn't doing anything else.
>
> This is the standard mistake about pg_dump, which is to imagine that it
> depends only on userspace operations while inspecting schema info. It
> doesn't; it makes use of things like ruleutils.c, which operate on
> "latest available data" rules. Accordingly, no, we're not going to skip
> taking the table locks. At least not without a ground-up rewrite of
> that whole mess, which as you know has been discussed multiple times
> without anything useful happening.

There are two different points here. The first is the broader discussion
of why pg_dump depends on the backend for some bits and pieces but not
everything. The second is: aren't the accesses from ruleutils.c now done
under an MVCC snapshot? There is certainly a comment to that effect in
pg_get_constraintdef_worker(), and other parts appear to go through SPI,
but not everything does. Trigger and index handling appear to be of
particular relevance here; given that the only things pg_dump locks are
relations anyway, much of the rest isn't relevant.

Thanks,

Stephen
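(For readers outside the thread: the snapshot-vs-"latest available data"
distinction being argued about can be sketched as a toy model. This is
not PostgreSQL code; the Catalog and DumpTransaction names are invented
for illustration, standing in for the system catalogs and for pg_dump's
repeatable-read transaction respectively.)

```python
import copy

class Catalog:
    """Stand-in for the system catalogs: a mutable table definition."""
    def __init__(self):
        self.tables = {"t1": {"columns": ["a", "b"]}}

    def alter(self, table, columns):
        # Stand-in for a concurrent ALTER TABLE committing mid-dump.
        self.tables[table]["columns"] = columns

class DumpTransaction:
    """Stand-in for a dump transaction that took a snapshot at start."""
    def __init__(self, catalog):
        self.catalog = catalog                          # live catalogs
        self.snapshot = copy.deepcopy(catalog.tables)   # MVCC-style snapshot

    def snapshot_read(self, table):
        # A pure userspace/MVCC read: sees state as of transaction start.
        return self.snapshot[table]["columns"]

    def latest_read(self, table):
        # A ruleutils.c-style "latest available data" read: sees changes
        # committed after the dump transaction began.
        return self.catalog.tables[table]["columns"]

catalog = Catalog()
txn = DumpTransaction(catalog)
catalog.alter("t1", ["a", "b", "c"])   # concurrent DDL after txn start

print(txn.snapshot_read("t1"))   # ['a', 'b']
print(txn.latest_read("t1"))     # ['a', 'b', 'c']
```

The point of contention maps onto the two read paths: if every catalog
access behaved like snapshot_read(), the table locks would arguably be
unnecessary for a schema-only dump; because some paths behave like
latest_read(), the locks are still needed.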