"Tsing, Vadim" wrote: > We are trying to select a database engine for a new product we plan > to start developing. One of the requirements is to be able to ship > the data on the media and install it on the user's hard-drive. > > One of the issues we've run into is that pg_restore takes a lot of > time to restore large database. > > Also, moving entire data directory is not an option since we will > have multiple databases that user can choose to install. > > We've tried to copy the database directory and it seems to work. > Unfortunately we are not sure if there is any, not visible, damage > to the data. > > So, here is my question: Is it possible to: > > 1. Create a database > > 2. Stop postgres > > 3. Empty out the directory created by the new database > > 4. Copy files from another database (created on a different > server) into that directory > > 5. Start postgres and use the database > > Once this is completed we may need to repeat the process for > another database. This is not supported; you are playing with fire. > what are other options? A couple with come to mind: (1) Create a separate PostgreSQL cluster for each database, so that each has its own data directory -- this is safe at that level, provided the runtime environments are compatible. (2) Again, assuming a compatible runtime environment, copy in the data directory for all of them, but with tablespaces pointing back to the removable medium. Of course, this likely only works if the large part of the data is read-only and fits on one something which can be mounted at run time. The other thing to look at is whether there is some way to adjust your schema to allow faster restore. Which phase is causing you a problem? Have you profiled the restore while it is running? -Kevin -- Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin