I'm running Postgres on a Red Hat Linux 9 server to keep track of test cases we run in our lab. The interface for adding new test cases to the DB is cumbersome for large groups; actually, it isn't really possible. So on the development Linux machine, when someone writes a new test case and adds it, it's a very manual process. Creating a suite of test cases to run is manual as well: they have to select every test case that belongs in the suite.

We have a "production" Linux server in our lab that runs the completed test cases and suites. It has an entirely separate Postgres database from the development server, but both have the same schema. Obviously the two have very different data in the results tables, because people run tests at different times, on different test cases and suites, on the "production" box vs. the "development" box. Since adding test cases is such a manual process, it's painful whenever new test cases are written to get them into the "production" tables that hold the test case names, IDs, etc.

What I have been doing is a pg_dump of all the data in the dev database, then simply reloading it into the production database. That's bad, though, because all the results on the production box are lost, along with any configuration settings we have. Essentially it makes production a snapshot of development's current state.

Does anyone have suggestions for replicating the test case IDs and suites to the production database without touching the other tables? I would just pg_dump the selected tables, but there are sequences in the DB for adding new suites and test cases, and I don't want those getting out of sync. Or can I copy the sequence values as well, so they are updated on the production DB?

Thanks
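P.P.S. For the sequence side of it, I assume the fix after a data-only load would be something like this in psql on the production DB, so the sequences match the copied rows. Again, the sequence, table, and column names are placeholders:

```sql
-- Re-sync each sequence to the highest ID actually present in its table,
-- so new inserts on production don't collide with the copied rows.
SELECT setval('testcases_id_seq', (SELECT max(id) FROM testcases));
SELECT setval('suites_id_seq',   (SELECT max(id) FROM suites));
```

Is that the right approach, or is there a cleaner way to carry the sequence state over with the dump itself?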
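P.S. To make the question concrete, here is roughly the selective dump I have in mind. The database, table, and sequence names (labdb, testcases, suites, testcases_id_seq, suites_id_seq) are just placeholders for ours, and I'm assuming pg_dump's -t switch can name a sequence as well as a table; older pg_dump versions may only honor a single -t per run, in which case it would be one dump per object:

```shell
# Data-only dump of just the test case and suite tables from the dev DB.
# All names below are placeholders for our actual schema.
pg_dump -a -t testcases -t suites \
        -t testcases_id_seq -t suites_id_seq \
        labdb > testcase_tables.sql

# Load into the production DB (after clearing the old rows from
# those two tables there, so the copied rows don't conflict).
psql -f testcase_tables.sql labdb
```

The -a switch is what keeps this to data only, so the existing schema, results tables, and configuration on production would be left alone.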