Dear all,

I'd really appreciate a little advice here. I'm designing a PostgreSQL database to manage a scientific dataset, and I have three fairly clear requirements:

1. Multiple users of varying skill will input data.
2. Newly inserted data will be audited and marked good or bad.
3. We must have a frozen, "known good" dataset to feed into various models.

As far as I can see, this leaves me with three options:

A. Two databases, one for transaction processing and one for modelling. At arbitrary intervals (days/weeks/months) all "good" data is moved to the modelling database.

B. One database, where every record is marked either "in" or "out". The application layer has to exclude all data that is out.

C. Sandbox tables shadowing every table the application updates.

I prefer option A, since it gives me the flexibility to run heavy modelling queries on a separate server, but I'm not sure how best to handle the replication issues when moving data to the modelling DB. Option B makes me worry about hard-to-diagnose bugs, for example queries silently looking at different datasets. With option C, if tables tX and tY have sandbox tables sX and sY, there could be problems where sX needs to reference data in sY but has a foreign key referencing tY.

What would you do? Have I missed a better option here? (Rough sketches of what I mean by each option are below my sig.)

Thanks,
Will T
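P.S. Sketches follow; all table and column names (measurements, status, and so on) are placeholders, not my real schema.

For option A, I assume something like postgres_fdw on the transactional server could do the periodic move, though I may well be missing a better mechanism:

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER modelling_db
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'modelling-host', dbname 'modelling');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER modelling_db
        OPTIONS (user 'loader', password '...');

    -- Foreign table pointing at the modelling database's copy.
    CREATE FOREIGN TABLE modelling_measurements (
        id          integer,
        recorded_at timestamptz,
        value       numeric
    ) SERVER modelling_db
      OPTIONS (schema_name 'public', table_name 'measurements');

    -- The periodic "freeze": push everything audited good since the last run.
    INSERT INTO modelling_measurements
    SELECT id, recorded_at, value
    FROM measurements
    WHERE status = 'good'
      AND id > (SELECT coalesce(max(id), 0) FROM modelling_measurements);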
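For option B, the in/out flag would be a status column, with a view so the filter lives in one place rather than in every application query:

    CREATE TABLE measurements (
        id          serial PRIMARY KEY,
        recorded_at timestamptz NOT NULL DEFAULT now(),
        value       numeric NOT NULL,
        status      text NOT NULL DEFAULT 'pending'
                    CHECK (status IN ('pending', 'good', 'bad'))
    );

    -- Models read only through this view; anything not audited good is invisible.
    CREATE VIEW measurements_good AS
        SELECT id, recorded_at, value
        FROM measurements
        WHERE status = 'good';

    -- Partial index so modelling queries don't pay for pending/bad rows.
    CREATE INDEX measurements_good_idx
        ON measurements (recorded_at)
        WHERE status = 'good';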
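And the foreign-key problem I'm worried about with option C, boiled down:

    CREATE TABLE tY (id serial PRIMARY KEY, label text NOT NULL);
    CREATE TABLE sY (id serial PRIMARY KEY, label text NOT NULL);

    -- sX can reference tY or sY, but not "whichever currently holds the row":
    CREATE TABLE sX (
        id   serial PRIMARY KEY,
        y_id integer NOT NULL REFERENCES tY (id)
        -- fails for any sX row whose parent is still sitting in sY
    );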