Is it possible to segregate the PITR data by database at any stage?

We are:
 - taking regular (daily) snapshots straight from the disk
 - storing WALs
 - restoring the snapshot
 - replaying the WALs

My guess is that at snapshot time I could use oid2name to focus on the
database I'm interested in plus the core Pg data structures, but then the
WAL replay later will fail for transactions that affect other DBs. Is there
a better way? (Rough sketches of the snapshot-selection and replay steps
are appended below.)

Background: the goal here is to have a "web application rewind" facility,
based on PITR, that allows a webapp administrator to say "please rewind my
Moodle/Drupal/whatever" to an arbitrary point in time X, where X is within
the last Y days, via a web UI. It is normally configured to rewind not the
master but a secondary install of your web app, although there's some
interest in rewinding the master too, for training and software-testing
purposes.

Making good progress so far. It is using git as the storage mechanism for
the Pg data and uploaded files, which results in a very tight disk
footprint. But it does require a single Pg instance for each web app
'master', plus a temporary Pg instance for the rewound install. On shared
hardware there's a big downside in memory footprint and disk IO to running
many Pg instances.

cheers,
martin

--
-----------------------------------------------------------------------
Martin @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
WEB: http://catalyst.net.nz/          PHYS: Level 2, 150-154 Willis St
NZ: +64(4)916-7224   MOB: +64(21)364-017   UK: 0845 868 5733 ext 7224
     Make things as simple as possible, but no simpler - Einstein
-----------------------------------------------------------------------
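
Sketch 1: for concreteness, a minimal sketch of the "oid2name plus core Pg
data structures" guess, i.e. which parts of the data directory a snapshot
focused on one database might include. Everything in it is an assumption
rather than anything from the post: the psql call, the
/var/lib/postgresql/data path, the example database name 'moodle', and the
pre-10 directory names (pg_xlog, pg_clog) are placeholders. It does not
address the harder problem noted above, that WAL replay is cluster-wide
and will expect the other databases' files to be present.

#!/usr/bin/env python
"""Sketch: which on-disk paths a single-database snapshot might cover.

Assumptions, not from the post above: psql is on PATH and can reach the
cluster, the data directory is /var/lib/postgresql/data, no custom
tablespaces, and a pre-10 layout (pg_xlog / pg_clog directory names).
"""
import os
import subprocess

PGDATA = "/var/lib/postgresql/data"   # placeholder; use the real data dir


def database_oid(dbname):
    """Map a database name to its OID (the same mapping oid2name reports)."""
    sql = "SELECT oid FROM pg_database WHERE datname = '%s'" % dbname
    out = subprocess.check_output(["psql", "-At", "-c", sql])
    return out.decode().strip()


def snapshot_paths(dbname):
    """Paths for the target database plus the cluster-wide structures."""
    oid = database_oid(dbname)
    # Deliberately ignored: template0/template1, pg_tblspc, pg_multixact,
    # pg_subtrans, postgresql.conf/pg_hba.conf, ... a real snapshot would
    # have to account for all of these.
    return [
        os.path.join(PGDATA, "base", oid),  # this database's heap/index files
        os.path.join(PGDATA, "global"),     # shared catalogs (pg_database, ...)
        os.path.join(PGDATA, "pg_clog"),    # transaction status
        os.path.join(PGDATA, "pg_xlog"),    # WAL, if not archived elsewhere
    ]


if __name__ == "__main__":
    for path in snapshot_paths("moodle"):  # 'moodle' is just an example name
        print(path)

The shared catalogs are listed alongside base/<oid> because pg_database
itself lives under global/, so the cluster cannot start without them.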
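
Sketch 2: an equally rough sketch of the "restore the snapshot, replay the
WALs up to time X" step as it looks on a recovery.conf-based release
(anything before PostgreSQL 12, which fits the era of this thread). The
archive path, restored data directory, and timestamp are made-up
placeholders; restore_command and recovery_target_time are the standard
PITR parameters.

#!/usr/bin/env python
"""Sketch: point a restored snapshot at the WAL archive and stop replay at X.

Assumptions, not from the post above: a recovery.conf-based release
(pre-12), WAL segments archived to /mnt/wal_archive, and a snapshot
already unpacked into the target data directory.
"""
import os


def write_recovery_conf(restored_pgdata, target_time):
    """Write a minimal recovery.conf so startup replays WAL up to target_time."""
    path = os.path.join(restored_pgdata, "recovery.conf")
    with open(path, "w") as f:
        # Fetch archived WAL segments on demand during replay.
        f.write("restore_command = 'cp /mnt/wal_archive/%f \"%p\"'\n")
        # Stop replaying once the administrator's chosen point in time is reached.
        f.write("recovery_target_time = '%s'\n" % target_time)


if __name__ == "__main__":
    # Hypothetical paths and timestamp, purely for illustration.
    write_recovery_conf("/srv/rewind/moodle-copy/data", "2008-03-01 12:00:00")

Starting the temporary Pg instance on that data directory should then
replay WAL up to X and come up as the rewound copy.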