Hi,

On Fri, 2003-10-10 at 17:17, Vijayan Prabhakaran wrote:

> In ordered data journaling mode, if the system crashes after writing
> the blocks of B but before updating the metadata of A (that is,
> between steps 4 and 5), then A might see spurious data. However this
> problem cannot happen if we have a full journaling mode.
>
> Is the above example correct ?

No.  As long as the delete of file A has not been committed to disk,
we won't allow file B to use the same data blocks, precisely to avoid
this scenario.

That's what the "b_committed_data" field of the journal_head is there
for --- it is only used for bitmap blocks, and exists precisely so that
we can spot disk blocks which have been freed but whose freeing
transaction has not yet committed.

> If not, could you please give me a situation where ordered data
> journaling mode will not give as much reliability as the full
> journaling mode ?

I don't know of any.  Journaled data mode was provided as a performance
option; it's slower for most things, but can be faster than ordered
mode for some heavily synchronous workloads involving a lot of data and
metadata updates.  Ordered mode was never expected to be any less
correct than journaled-data mode.

Only writeback mode offers weaker data-integrity guarantees, and even
in that mode we don't allow freed blocks to be overwritten until commit
--- if we undo a delete of an existing file on disk, it always comes
back fully intact.  Only writes of _new_ files may be incomplete after
a crash in writeback mode, because in that mode we don't flush the data
blocks of newly-allocated files to disk before committing the inodes
and indirect blocks.

Cheers,
 Stephen
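
(As a minimal illustration of the allocation rule described above, here
is a small, self-contained C sketch.  The structure and function names
-- bitmap_head, block_is_allocatable(), allocate_block() -- are invented
for this example and only stand in for what journal_head->b_committed_data
achieves in JBD; the real ext3/JBD code is organised quite differently.)

/*
 * Hypothetical sketch only -- not the real ext3/JBD code.
 *
 * The idea: the allocator treats a block as busy unless it is free both
 * in the bitmap being modified by the running transaction AND in the
 * copy of the bitmap as of the last committed transaction (the role
 * played by b_committed_data).  A block freed by a transaction that has
 * not yet committed is clear in the live bitmap but still set in the
 * committed copy, so it cannot be reused -- which is what keeps file B
 * from grabbing file A's blocks before A's delete reaches the journal.
 */
#include <stdio.h>

#define BLOCKS_PER_GROUP 64            /* tiny group, just for the demo */

struct bitmap_head {
    unsigned char data[BLOCKS_PER_GROUP / 8];      /* live bitmap     */
    unsigned char committed[BLOCKS_PER_GROUP / 8]; /* committed copy  */
};

static int test_bit(const unsigned char *map, int nr)
{
    return (map[nr / 8] >> (nr % 8)) & 1;
}

static void set_bit(unsigned char *map, int nr)
{
    map[nr / 8] |= (unsigned char)(1 << (nr % 8));
}

static void clear_bit(unsigned char *map, int nr)
{
    map[nr / 8] &= (unsigned char)~(1 << (nr % 8));
}

/* A block may be handed out only if it is free in the live bitmap
 * AND in the committed copy. */
static int block_is_allocatable(const struct bitmap_head *bh, int nr)
{
    return !test_bit(bh->data, nr) && !test_bit(bh->committed, nr);
}

/* Allocate the first safe block, or return -1 if none is available. */
static int allocate_block(struct bitmap_head *bh)
{
    int nr;
    for (nr = 0; nr < BLOCKS_PER_GROUP; nr++) {
        if (block_is_allocatable(bh, nr)) {
            set_bit(bh->data, nr);
            return nr;
        }
    }
    return -1;
}

int main(void)
{
    struct bitmap_head bh = { {0}, {0} };
    int nr;

    /* Blocks 0..3 belong to file A in the committed state. */
    for (nr = 0; nr < 4; nr++) {
        set_bit(bh.data, nr);
        set_bit(bh.committed, nr);
    }

    /* File A is deleted in the running (uncommitted) transaction:
     * its blocks become free in the live bitmap only. */
    for (nr = 0; nr < 4; nr++)
        clear_bit(bh.data, nr);

    /* File B asks for a block: it gets block 4, not one of A's blocks. */
    printf("allocated block %d\n", allocate_block(&bh));
    return 0;
}

The only point of the sketch is the double test in block_is_allocatable():
because the committed copy still shows A's blocks as in use, file B is
steered to a different block until the transaction that freed them has
committed.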