> > would be nice to have an "exclude-table" option on it. I actually
> > started working on a patch to allow that, I will make it just good
> > enough for my purpose (very poor C skills here). Would that be
> > interesting for others?
>
> Well, being able to have finer control over what you're dumping is on
> the TODO list, and I think there was even consensus reached on -hackers
> as to how the syntax should work.

I actually managed to do that: I have a pg_dump which accepts "-e
table_name" multiple times, so I can exclude tables from the dump. It's
nice to have the source at hand :-)

It works fine for the tables I exclude, but it might have problems in
the generic case: if an excluded table references large objects, the
referred large objects will still be dumped, and I don't know what
happens when other tables depend on the excluded ones. Neither applies
in my case.

> But that's only a partial fix, because generally you'd want a complete
> dump of your database anyway. What would be better is if pg_dump could
> release locks as it no longer needs them, namely as it dumps each table.
> Though this might require pg_dump remembering some state information
> about each object since certain things are dumped after all the COPY
> commands, such as RI.

For my purpose that wouldn't help much: the queue tables are useless in
the dump anyway (their jobs might already have been executed, with
external side effects such as emails, which can't be rolled back and
shouldn't be executed again either). So the queue tables are out of
context by the time I load the dump; I can dump/reload them separately
if needed, and I have to filter their content anyway.

> This also doesn't address the issue of long-running transactions
> preventing dead rows from being vacuumed.

But it works fine with CLUSTER, at least as currently implemented.

Cheers,
Csaba.
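
PS: the "-e" handling is nothing fancy. Roughly it boils down to the
standalone sketch below (simplified, made-up names, not the actual
pg_dump code): collect every name given with -e, and skip any table
whose name matches one of them.

/*
 * Simplified, standalone sketch of the "-e table_name" idea -- not the
 * actual pg_dump patch, just the bookkeeping: remember every name given
 * with -e, and skip any table whose name matches one of them.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static char **exclude_tables = NULL;
static int  n_excluded = 0;

/* remember one more table name passed with -e */
static void
add_exclude(const char *name)
{
    exclude_tables = realloc(exclude_tables,
                             (n_excluded + 1) * sizeof(char *));
    exclude_tables[n_excluded++] = strdup(name);
}

/* true if the table was listed with -e and should not be dumped */
static int
table_is_excluded(const char *name)
{
    int i;

    for (i = 0; i < n_excluded; i++)
        if (strcmp(exclude_tables[i], name) == 0)
            return 1;
    return 0;
}

int
main(int argc, char **argv)
{
    int c;

    while ((c = getopt(argc, argv, "e:")) != -1)
        if (c == 'e')
            add_exclude(optarg);

    /* stand-in for pg_dump's table loop: only dump what is not excluded */
    printf("queue_table would be %s\n",
           table_is_excluded("queue_table") ? "skipped" : "dumped");
    return 0;
}

Invoked as e.g. "./a.out -e queue_table -e log_table" it would report
queue_table as skipped; in the real thing the check of course sits where
pg_dump decides which tables to dump.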