On Friday, 25 October 2013 at 04:50 +0200, Andreas wrote:
>
> well, not quite.
>
> We are not talking about files but databases within the db server.
> Let's keep 3 copies total.
>
> The idea is to start with the database db_test today (2013/10/24).
> 2013/10/25: rename db_test to db_test_131025 and import the latest
> dump into a new db_test
> 2013/10/26: rename db_test to db_test_131026 ... import
> 2013/10/27: rename db_test to db_test_131027 ... import
> 2013/10/28: rename db_test to db_test_131028 ... import
> Now we've got db_test and 4 older copies.
> Find the oldest copy and drop it. --> drop db_test_131025
>
> Or better: every day, drop every copy but the 3 newest.
>
> And so on.
>
> This needs to be done by an external cron script, or probably by a
> function within the postgres database or any other administrative database.
>
> The point is to give the assistant a test-db where he could mess things up.
> In the event he works longer than a day on a task, his work shouldn't be
> dropped completely when the test-db gets automatically replaced.

I assume db_test is created from a dump file? If that's the case, and if
your system allows it, using logrotate on the dump is very straightforward;
e.g. to rotate the archive weekly, keeping 52 weeks of history, simply
create the file /etc/logrotate.d/myapp :

# Create rotation for myapp's backups
/var/backups/myapp/myapp.gz {
	weekly
	missingok
	rotate 52
	notifempty
}

-- 
Regards,

Vincent Veyron

http://marica.fr/site/demonstration
Legal dispute, contract and insurance claim management

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
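
For the "external cron script" Andreas mentions in the quoted part above, a
minimal sketch could look like the following. Everything beyond what the
thread states is an assumption: the dump path /var/backups/myapp/db_test.dump,
a custom-format dump (hence pg_restore), and passwordless local access as the
postgres user. Note also that ALTER DATABASE ... RENAME fails while anyone is
still connected to db_test.

#!/bin/sh
# Hypothetical daily rotation script for db_test (names/paths are assumptions).
set -e

TODAY=$(date +%y%m%d)          # e.g. 131025

# Move yesterday's test database out of the way.
# The RENAME fails if any session is still connected to db_test.
psql -U postgres -d postgres -c "ALTER DATABASE db_test RENAME TO db_test_${TODAY}"

# Recreate db_test and load the latest dump (custom-format dump assumed).
createdb -U postgres db_test
pg_restore -U postgres -d db_test /var/backups/myapp/db_test.dump

# Drop every copy but the 3 newest; the YYMMDD suffixes sort chronologically.
psql -U postgres -d postgres -At -c "
    SELECT datname FROM pg_database
     WHERE datname ~ '^db_test_[0-9]{6}\$'
     ORDER BY datname DESC OFFSET 3" |
while read -r old; do
    dropdb -U postgres "$old"
done

Run from cron shortly after the nightly dump finishes, e.g. a crontab entry
like "15 3 * * * /usr/local/bin/rotate_db_test.sh" for the postgres user (the
script name and time are, again, only placeholders).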