My first ever newsgroup PostgreSQL question...

I want to move data between some very large databases (100+ GB) with different schemas at our customer sites. I cannot count on there being much free partition space, so the source and destination databases cannot exist simultaneously. I am also restricted to hours, not days, to move the data.

I felt that pg_dump/pg_restore with the compressed format would do the job for me, and I was able to build a modified pg_dump program without any difficulty. But I need to customize the speedy COPY FROM and COPY TO commands to perform the necessary schema and data content changes. I tried copying the backend's copy.c and copy.h into my customized pg_dump project, renaming the files and their DoCopy function, and adding them to my makefile. This created conflicts between libpq-fe.h and libpq.h, for example:

postgresql-7.4.13/src/interfaces/libpq/libpq-fe.h:191: error: conflicting types for `PQArgBlock'
postgresql-7.4.13/src/include/libpq/libpq.h:39: error: previous declaration of `PQArgBlock'

Is it possible to compile and link frontend pg_dump code together with backend code from copy.c? If not, how can I go about customizing pg_dump to get low-level control over the speedy but inflexible COPY TO/FROM commands? I already tried all of this with regular SQL calls, and it was unacceptably slow.

thanks
-Lynn
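
P.S. To make "low-level control over COPY" concrete, here is a minimal sketch of the frontend-only route I have been considering instead of linking backend code: driving COPY through libpq's copy functions (PQgetCopyData and friends, new in 7.4) and rewriting each row on the client side. The table name, the output file, and the transform_row() hook are placeholders, not code from my actual project; it is only meant to illustrate the idea, not a working converter.

/* Sketch: stream a table out with COPY ... TO STDOUT via libpq,
 * apply a per-row rewrite in the client, and write the result to a file.
 * transform_row() and the table/file names are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Hypothetical hook where the schema/content changes would be applied. */
static void transform_row(const char *row, int len, FILE *out)
{
    fwrite(row, 1, (size_t) len, out);   /* identity transform for now */
}

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=source");
    PGresult *res;
    char     *row;
    int       len;
    FILE     *out = fopen("table.copy", "w");

    if (PQstatus(conn) != CONNECTION_OK || out == NULL)
    {
        fprintf(stderr, "connection or file open failed\n");
        return 1;
    }

    /* Ask the backend to start a COPY OUT; libpq then hands us raw rows. */
    res = PQexec(conn, "COPY my_table TO STDOUT");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(res);

    /* PQgetCopyData returns one data row per call;
     * -1 means COPY finished, -2 means an error occurred. */
    while ((len = PQgetCopyData(conn, &row, 0)) > 0)
    {
        transform_row(row, len, out);
        PQfreemem(row);
    }

    res = PQgetResult(conn);        /* collect the final command status */
    PQclear(res);
    fclose(out);
    PQfinish(conn);
    return 0;
}

This would be built against libpq only (cc ... -lpq), with no backend sources involved, which is why I am wondering whether it is the intended way to get at COPY from a customized pg_dump.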