Hello everybody!

I am coming from the (expensive) "Oracle World" and I am a newbie in PG administration. I am currently working on backup concerns. I am using pg_dump and have not encountered any problems, but I have some questions about how the PG server manages data consistency internally. I have read some articles about the MVCC mechanism, but I cannot see how it provides a consistent "snapshot" of the database for the whole duration of the export.

If I have understood correctly, the default transaction isolation level in PG is "read committed". If that is the isolation level used by pg_dump, how can I be sure that tables accessed at the end of my export are consistent with those accessed at the beginning? Does pg_dump use a serializable isolation level instead?

We have the same kind of concern with Oracle, where a CONSISTENT flag can be set in the exp utility to export from a single consistent snapshot of the database from the beginning to the end of the process. Unfortunately, that mode uses rollback segments intensively and can fail on obsolete data (the well-known "snapshot too old" error). Is there an equivalent of rollback segments in PG? Are there similar "snapshot too old" issues on busy multi-user, transactional databases?

I do not have a good knowledge of PG internals; I hope my questions are clear enough.

Florian
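
P.S. To make my concern concrete, here is what I imagine a consistent export would have to look like at the SQL level (just a sketch on my part; the table names t1 and t2 are made up):

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- from its first query onward, this transaction should see
    -- one single snapshot of the database:
    COPY t1 TO STDOUT;
    COPY t2 TO STDOUT;  -- consistent with t1, even if t2 was modified
                        -- and committed by another session in between
    COMMIT;

Is this roughly what pg_dump does under the hood?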