On Mon, Mar 7, 2011 at 8:52 AM, Merlin Moncure <mmoncure@xxxxxxxxx> wrote:
> Well, that's a pretty telling case, although I'd venture to say not
> typical.  In average databases, I'd expect improvement in the 10-50%
> range going from text->binary, which is often not enough to justify
> the compatibility issues.  Does it justify a 'binary' switch to
> pg_dump?  I'd say so -- as long as the changes required aren't too
> extensive (although you can expect disagreement on that point).  hm.
> i'll take a look...

The changes don't look too bad, but they are not trivial.

On the backup side, it's just a text/binary-agnostic copy direct to
stdout.  You'd need to create a switch of course, and I'm assuming add
an isbinary flag to ArchiveHandle and possibly a stream length to the
tocEntry for each table (or should that just be a header on the binary
stream?).

On the restore side it's a bit more complicated -- the current code is
a complete text-parsing monster, grepping each line for unquoted
newlines, assuming ascii '0' is the end of the data, etc.  You would
need a completely separate code path for binary, but it would be much
smaller and simpler (and faster!) -- there's a rough sketch of what
that path boils down to below the signature.

There might be some other issues too; I just did a cursory scan of the
code.

merlin
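
For illustration only -- this is not pg_dump's code, just a minimal
standalone libpq sketch of the data path being described, with a
placeholder table name ("mytable"), an empty conninfo string, and a
made-up output file name.  The point is that with COPY ... (FORMAT
binary) the dump side just streams whatever the server hands back, and
the restore side pushes raw bytes back through COPY FROM STDIN with no
line-oriented parsing at all:

/*
 * Hypothetical sketch, not pg_dump itself.  dump_table_binary()
 * streams "COPY ... TO STDOUT (FORMAT binary)" to a file;
 * restore_table_binary() feeds those bytes back through
 * "COPY ... FROM STDIN (FORMAT binary)" on the target database.
 */
#include <stdio.h>
#include <libpq-fe.h>

static int
dump_table_binary(PGconn *conn, FILE *out)
{
    PGresult   *res = PQexec(conn, "COPY mytable TO STDOUT (FORMAT binary)");
    char       *buf;
    int         len;

    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY OUT failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    /* each call returns one chunk of the binary stream; -1 means done */
    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
    {
        fwrite(buf, 1, (size_t) len, out);
        PQfreemem(buf);
    }
    res = PQgetResult(conn);
    PQclear(res);
    return (len == -1) ? 0 : -1;
}

static int
restore_table_binary(PGconn *conn, FILE *in)
{
    PGresult   *res = PQexec(conn, "COPY mytable FROM STDIN (FORMAT binary)");
    char        buf[65536];
    size_t      n;

    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY IN failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    /* no scanning for unquoted newlines or end-of-data markers: just bytes */
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
        PQputCopyData(conn, buf, (int) n);

    PQputCopyEnd(conn, NULL);
    res = PQgetResult(conn);
    PQclear(res);
    return 0;
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("");     /* placeholder conninfo */
    FILE       *f;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    f = fopen("mytable.bin", "wb");         /* made-up file name */
    dump_table_binary(conn, f);
    fclose(f);

    /* restore_table_binary() would be run against the target database */

    PQfinish(conn);
    return 0;
}

Builds with the usual libpq include/library paths (e.g. cc -lpq);
nothing above is specific to pg_dump's archive format, it just shows
why the binary restore path can skip the text parser entirely.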