Stephen,

On 12/2/05 1:19 PM, "Stephen Frost" <sfrost@xxxxxxxxxxx> wrote:
>
>> I've used the binary mode stuff before, sure, Postgres may have to
>> convert some things but I have a hard time believing it'd be more
>> expensive to do a network_encoding -> host_encoding (or toasting, or
>> whatever) than to do the ascii -> binary change.
>
> From a performance standpoint no argument, although you're betting that you
> can do parsing / conversion faster than the COPY core in the backend can (I
> know *we* can :-). It's a matter of safety and generality - in general you
> can't be sure that client machines / OS'es will render the same conversions
> that the backend does in all cases IMO.

One more thing - this is really about the lack of a cross-platform binary
input standard for Postgres IMO. If there were such a thing, it *would* be
safe to do this.

The current Binary spec is not cross-platform AFAICS; it embeds native
representations of the DATUMs and does not specify a universal binary
representation of them. For instance, when representing a float, is it an
IEEE 32-bit floating point number in little-endian byte ordering? Or is it
IEEE 64-bit? With libpq, we could do something like an XDR implementation,
but the machinery isn't there AFAICS.

- Luke
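To illustrate the XDR idea: an XDR-style encoding pins every value to one fixed, architecture-independent form (big-endian byte order, IEEE 754 floats), so any client decodes it identically regardless of host. A minimal sketch in Python (purely illustrative, not Postgres's actual wire format):

```python
import struct

def encode_float64(value: float) -> bytes:
    # ">d" = big-endian IEEE 754 double, regardless of host endianness.
    return struct.pack(">d", value)

def decode_float64(data: bytes) -> float:
    return struct.unpack(">d", data)[0]

encoded = encode_float64(1.5)
# 1.5 is 0x3FF8000000000000 as an IEEE 754 double, most significant byte first.
assert encoded == b"\x3f\xf8\x00\x00\x00\x00\x00\x00"
assert decode_float64(encoded) == 1.5
```

The point is simply that the spec, not the producing machine, fixes the byte layout - a round trip through any host gives the same eight bytes.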