Hi, I'm working on an extraction tool that unloads timestamp-ranged data from every table of a selected schema in a PostgreSQL database and stores the result in a file, so it can be loaded into any other database. I don't know in advance which tables I'm going to dump, but I do know which fields should be taken into account when constraining the selection by timestamp range. So I:

- Create a temporary schema ('temp') with the exact structure of the desired schema ('public' in my case) but no data (like pg_dump with the -s flag).
- Query the database for every table listed in pg_tables under the 'public' schema.
- For each retrieved table, build up a SELECT statement. For example: "SELECT * FROM table1".
- Then, for each table, compare each of its fields against the ones I know I have to filter by. If there is a match, append " WHERE " followed by one "fieldX BETWEEN 'date1' AND 'date2'" condition per matching field, joined with OR.
- Finally, terminate the statement with ";".

The generated SELECTs are then run against the 'public' schema, and their results are inserted into the matching tables of the temporary schema, which are, of course, already defined there. After this, all the data should be sitting nicely in the 'temp' schema, so I can dump it to a file using pg_dump. (A sketch of these steps in Perl is in the edit below.)

Now comes the problem: I want the data to be loadable into any database that has the same tables defined, so I was thinking about using pg_dump with the '-d' (--inserts) flag, which dumps the data as INSERT SQL commands instead of the PostgreSQL-specific COPY command. The downside is that this takes much more space than COPY. So I'm trying to find another way, which would be generating a CSV file with the values. But since pg_dump doesn't have a CSV export option, I don't know how to do it, unless I roll it myself (see the second sketch below for what I have in mind).

The related questions would be:

- What format is the most generic one?
- How can I generate that file format?

Any ideas? I'm developing this in Perl using DBI.

I hope you find this as interesting as I do.

Daniel
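Edit: To make the query-building and temp-schema loading steps concrete, here is a simplified sketch of what I described above. The connection parameters, the list of filter fields, and the date range are hard-coded placeholders; in the real tool they come from configuration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder values for illustration only.
my @filter_fields = ('created_at', 'updated_at');
my ($date1, $date2) = ('2009-01-01', '2009-02-01');
my %is_filter_field = map { $_ => 1 } @filter_fields;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# Every table defined under the 'public' schema.
my $tables = $dbh->selectcol_arrayref(
    q{SELECT tablename FROM pg_tables WHERE schemaname = 'public'});

for my $table (@$tables) {
    # Column names of this table, taken from the information schema.
    my $cols = $dbh->selectcol_arrayref(
        q{SELECT column_name FROM information_schema.columns
          WHERE table_schema = 'public' AND table_name = ?},
        undef, $table);

    # One BETWEEN condition per field that matches the filter list.
    my @conds = map  { "$_ BETWEEN " . $dbh->quote($date1)
                        . " AND "    . $dbh->quote($date2) }
                grep { $is_filter_field{$_} } @$cols;

    my $select = "SELECT * FROM public.$table";
    $select .= ' WHERE ' . join(' OR ', @conds) if @conds;

    # Copy the selected rows into the same table of the 'temp' schema.
    $dbh->do("INSERT INTO temp.$table $select");
}

$dbh->disconnect;
```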
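And here is roughly what the do-it-yourself CSV route would look like, assuming Text::CSV (or Text::CSV_XS, same interface) is installed; dump_query_csv is just a hypothetical helper name:

```perl
use strict;
use warnings;
use DBI;
use Text::CSV;

# Hypothetical helper: write the result of an arbitrary SELECT
# to a CSV file, with a header row of column names.
sub dump_query_csv {
    my ($dbh, $sql, $outfile) = @_;

    my $csv = Text::CSV->new({ binary => 1, eol => "\n" });
    open my $fh, '>', $outfile or die "Cannot open $outfile: $!";

    my $sth = $dbh->prepare($sql);
    $sth->execute;

    $csv->print($fh, $sth->{NAME});     # header row of column names
    while (my $row = $sth->fetchrow_arrayref) {
        $csv->print($fh, $row);         # one CSV row per record
    }
    close $fh or die "Cannot close $outfile: $!";
}

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                       { RaiseError => 1 });
dump_query_csv($dbh, 'SELECT * FROM temp.table1', 'table1.csv');
$dbh->disconnect;
```

I also noticed that, if I read the docs right, COPY itself has a CSV mode ("COPY tablename TO STDOUT WITH CSV"), so psql's \copy could write CSV client-side without any Perl; though CSV has no standard way to represent NULLs, which worries me for the "most generic format" question.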