William Temperley wrote:
> Dear all
>
> I'd really appreciate a little advice here - I'm designing a PG
> database to manage a scientific dataset.
> I've these fairly clear requirements:
>
> 1. Multiple users of varying skill will input data.
> 2. Newly inserted data will be audited and marked good / bad
> 3. We must have a dataset that is frozen or "known good" to feed into
> various models.
>
> This, as far as I can see, leaves me with three options:
> A. Two databases, one for transaction processing and one for
> modelling. At arbitrary intervals (days/weeks/months) all "good" data
> will be moved to the modelling database.
> B. One database, where all records will either be marked "in" or
> "out". The application layer has to exclude all data that is out.

You could also exclude "out" data at the database level with
appropriate use of (possibly updatable) views.

If you put your raw tables in one schema and put your valid-data-only
query views in another schema, you can set your schema search path so
applications cannot see the raw tables containing not-yet-validated
data.

You also have the option of using materialized views, where a trigger
maintains the "good" tables by pushing data over from the raw tables
when it's approved.

That gives you something between your options "A" and "B" to consider,
at least.

--
Craig Ringer
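The raw-schema / view-schema separation could be sketched roughly like
this (names such as `raw`, `clean`, `observations`, `status` and
`model_user` are illustrative assumptions, not anything from the
original post):

```sql
-- Raw, not-yet-validated data lives in one schema; applications are
-- pointed at a second schema that exposes only audited-good rows.
CREATE SCHEMA raw;
CREATE SCHEMA clean;

-- All user input lands here, whatever its quality.
CREATE TABLE raw.observations (
    id      serial PRIMARY KEY,
    payload text NOT NULL,
    status  text NOT NULL DEFAULT 'pending'  -- 'pending', 'good' or 'bad'
);

-- Applications and models query a view that filters to approved rows.
CREATE VIEW clean.observations AS
    SELECT id, payload
    FROM raw.observations
    WHERE status = 'good';

-- Make the clean schema the only thing the modelling role can see.
ALTER ROLE model_user SET search_path = clean;
REVOKE ALL ON SCHEMA raw FROM model_user;
```

With permissions and search_path set this way, code run as `model_user`
simply cannot reference the unvalidated tables, so the "exclude bad
data" rule is enforced in the database rather than in every client.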
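The trigger-maintained "materialized" variant might look something like
the following sketch (again, all names are assumptions; this hand-rolls
the behaviour with PL/pgSQL, since it predates built-in materialized
views):

```sql
-- A physical table of approved rows, kept in step with raw.observations.
CREATE TABLE clean.observations_good (
    id      integer PRIMARY KEY,
    payload text NOT NULL
);

-- Push a row over when it is approved; pull it back out if approval
-- is later revoked.
CREATE FUNCTION raw.push_approved() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        IF NEW.status = 'good' THEN
            INSERT INTO clean.observations_good (id, payload)
            VALUES (NEW.id, NEW.payload);
        END IF;
    ELSIF TG_OP = 'UPDATE' THEN
        IF NEW.status = 'good' AND OLD.status <> 'good' THEN
            INSERT INTO clean.observations_good (id, payload)
            VALUES (NEW.id, NEW.payload);
        ELSIF OLD.status = 'good' AND NEW.status <> 'good' THEN
            DELETE FROM clean.observations_good WHERE id = OLD.id;
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER observations_push
AFTER INSERT OR UPDATE ON raw.observations
FOR EACH ROW EXECUTE PROCEDURE raw.push_approved();
```

Models then read `clean.observations_good` directly, which behaves like
the separate "modelling database" of option A but stays continuously up
to date inside the one database.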