On Feb 14, 2:14 am, "filippo" <filippo2...@xxxxxxxxxxx> wrote:
> On 13 Feb, 14:54, "filippo" <filippo2...@xxxxxxxxxxx> wrote:
>
> > My target is to have the backup operation not affect the users, so
> > I want to be able to copy a database even if the database is being used
> > by someone.
>
> I could use pg_dump/pg_restore. pg_dump doesn't have to have exclusive
> access to the database to perform the operation. My only problem is that
> pg_dump creates a backup in a file; the best thing for me would be to have
> a perfect clone (users, data, etc.) of the original database, ready to be
> used right after the cloning. Is that possible?
>
> Thanks,
>
> Filippo

Well, I could see you writing a client application that creates a clone by
first recreating all the schemas in your database and then copying the data
over (and probably quite a bit more). Since you have absolute control over
your client code, you can do anything you want there. I am not sure, though,
that that is the best use of your time and hardware resources, especially if
all you're after is a backup. Just think of all the overhead involved in
creating a new clone, and everything that implies, every hour.

But why not further explore your backup options if all you're concerned
about is a reliable backup? You may find "23.3. On-line backup and
point-in-time recovery (PITR)" in the PostgreSQL documentation useful. You
haven't given any information about why it might not be appropriate in your
situation. If you're really doing what it looks to me like you're doing,
then you may be in the beginning stages of reinventing PostgreSQL's PITR
capability.

The built-in support for PITR in PostgreSQL strikes me as sufficient for
what you say you need. If you require more, which would imply you want more
than the simple backup you say you're after, then defining a suitable suite
of triggers and audit tables may serve. Neither should adversely affect your
users, especially if your "database is not very big".

HTH

Ted
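On the triggers-and-audit-tables suggestion, a minimal sketch of the general pattern might look like the following. Everything here is hypothetical for illustration (the "accounts" table, its columns, and all the audit names are invented); it simply records each row change into a side table via a PL/pgSQL trigger function:

```sql
-- Hypothetical audit table capturing changes to an "accounts" table.
CREATE TABLE accounts_audit (
    audit_id    serial PRIMARY KEY,
    changed_at  timestamptz NOT NULL DEFAULT now(),
    operation   text NOT NULL,          -- 'INSERT', 'UPDATE', or 'DELETE'
    account_id  integer,
    old_balance numeric,
    new_balance numeric
);

CREATE OR REPLACE FUNCTION accounts_audit_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO accounts_audit (operation, account_id, old_balance)
        VALUES (TG_OP, OLD.id, OLD.balance);
        RETURN OLD;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO accounts_audit (operation, account_id, old_balance, new_balance)
        VALUES (TG_OP, OLD.id, OLD.balance, NEW.balance);
        RETURN NEW;
    ELSE  -- INSERT
        INSERT INTO accounts_audit (operation, account_id, new_balance)
        VALUES (TG_OP, NEW.id, NEW.balance);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE accounts_audit_fn();
```

The audit writes happen in the same transaction as the user's change, so the overhead per statement is one extra row insert; on a small database, as Ted says, users are unlikely to notice.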
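For what it's worth, the "clone without an intermediate file" part can be had by piping: pg_dump writes plain SQL to stdout by default, and that stream can go straight into psql against a freshly created target database, all while the source stays in use. A minimal sketch, assuming a local server you can connect to with sufficient privileges; the database names here ("mydb", "mydb_clone") are invented for illustration:

```shell
# Create an empty target database, then stream the source's schema and
# data into it without ever touching a dump file on disk.
createdb mydb_clone
pg_dump mydb | psql -q mydb_clone

# Note: roles/users are cluster-wide, not stored per-database. Within the
# same cluster the clone sees them already; to carry them to a different
# cluster you would also need the globals dump:
#   pg_dumpall --globals-only | psql -h other_host postgres
```

This avoids the temporary file, but note it is not instantaneous: the copy reflects the source as of the moment pg_dump's transaction started, and a large database will take correspondingly long to stream.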