Thanks a lot. Ok, how about these two examples where ordered data
journaling might give different semantics from full data journaling
mode. Assume ordered data journaling mode in both examples.

Example 1:
----------
1) A file is opened with the O_TRUNC flag.
2) Some data is written into the file.
3) The data is written to its actual location on the disk.
4) The updated inode is written to the journal.
5) The updated inode is written to its actual location.

If a crash happens between steps 3 and 4, one will see jumbled data.

Example 2:
----------
1) A source code file, say foo.c, is opened in read-write mode.
2) Its data is updated.
3) The updated data is written to its actual location.
4) The inode's modification time is changed and written to the journal.
5) The modified inode is written to its actual location.

Assume the system crashes between steps 3 and 4. The system comes up
again and we run 'make' in the source directory. make will not
recompile foo.c, because foo.c's modification time was never updated
even though its data blocks were. In full journaling mode make would
also skip foo.c, but in that case foo.c would still contain the
original data.

Are these examples correct?

thanks,
Vijayan

On Sat, 11 Oct 2003, Stephen C. Tweedie wrote:

> Hi,
>
> On Fri, 2003-10-10 at 17:17, Vijayan Prabhakaran wrote:
>
> > In ordered data journaling mode, if the system crashes after writing
> > the blocks of B but before updating the metadata of A (that is,
> > between steps 4 and 5), then A might see spurious data. However this
> > problem cannot happen if we have a full journaling mode.
> >
> > Is the above example correct ?
>
> No. As long as the delete of file A has not been committed to disk, we
> won't allow file B to use the same data blocks, precisely to avoid this
> scenario. That's what the "b_committed_data" field of the journal_head
> is there for --- it is only used for bitmap blocks, and is there
> precisely so that we can spot disk blocks which have been freed but
> where the freeing transaction has not yet committed.
>
> > If not, could you please give me a
> > situation where ordered data journaling mode will not give as much
> > reliability as the full journaling mode ?
>
> I don't know of any. Journaled data mode was provided as a performance
> option; it's slower for most things but can be faster than ordered mode
> for some heavily synchronous workloads involving a lot of data and
> metadata updates. But ordered mode was never expected to be any less
> correct than journaled-data mode.
>
> Only the writeback mode offers any less-strong data integrity
> guarantees, and even in that mode we don't allow freed blocks to be
> overwritten until commit --- if we undo a delete of an existing file on
> disk, it always comes back fully intact. Only writes of _new_ files may
> be incomplete after a crash in writeback mode, because in that mode we
> don't flush the data blocks for newly-allocated files to disk before
> committing the inodes and indirect blocks.
>
> Cheers,
>  Stephen
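
For concreteness, the user-space side of Example 1 is just the usual
truncate-and-rewrite sequence. A minimal sketch follows; the file name,
the buffer contents, and the absence of fsync() are illustrative
assumptions, not part of the original example, and steps 3-5 of the
example happen inside the kernel in an order the application does not
control.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *new_data = "completely new contents\n";

    /* Step 1: open with O_TRUNC -- the file's length is cut to zero. */
    int fd = open("example.txt", O_WRONLY | O_TRUNC);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Step 2: write the new data.  Steps 3-5 (data block write, journal
     * commit of the inode, inode checkpoint) are performed later by the
     * kernel; no fsync() is issued here, so the application never waits
     * for them. */
    if (write(fd, new_data, strlen(new_data)) < 0) {
        perror("write");
        close(fd);
        return EXIT_FAILURE;
    }

    /* Example 1 above argues that a crash after the data block reaches
     * disk but before the inode update commits (between steps 3 and 4)
     * could leave jumbled data visible after recovery. */
    close(fd);
    return EXIT_SUCCESS;
}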
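
The reason make skips foo.c in Example 2 is purely a timestamp
comparison. A simplified sketch of that staleness test is below; the
target name foo.o is an assumption, and real make is of course more
involved than this.

#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 if 'target' must be rebuilt because 'source' is newer,
 * 0 if the target is considered up to date, -1 on error.
 * This mirrors the test in Example 2: after the crash, foo.c's data
 * blocks are new but its st_mtime is still old, so foo.o looks up to
 * date and foo.c is not recompiled. */
static int needs_rebuild(const char *source, const char *target)
{
    struct stat src, tgt;

    if (stat(source, &src) != 0)
        return -1;
    if (stat(target, &tgt) != 0)
        return 1;               /* no target yet: always rebuild */

    return src.st_mtime > tgt.st_mtime;
}

int main(void)
{
    printf("rebuild foo.o? %d\n", needs_rebuild("foo.c", "foo.o"));
    return 0;
}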
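
Stephen's point about b_committed_data can be pictured as keeping a
frozen copy of each block bitmap as of the last committed transaction.
The sketch below is a much-simplified illustration of that idea with
made-up structure and function names; it is not the actual ext3/JBD
code.

#include <stdbool.h>
#include <stddef.h>

/* Simplified picture: a block may be handed to a new file only if it is
 * free in the current (in-flight) bitmap AND in the copy of the bitmap
 * from the last commit.  A block freed by an uncommitted delete is free
 * in the current bitmap but still "in use" in the committed copy, so it
 * cannot be reused until the delete commits. */
struct bitmap_block {
    unsigned char *b_data;            /* bitmap as modified by the
                                         running transaction */
    unsigned char *b_committed_data;  /* bitmap as of the last commit,
                                         or NULL if nothing in flight */
};

static bool bit_set(const unsigned char *map, size_t bit)
{
    return (map[bit / 8] >> (bit % 8)) & 1;
}

/* Can block 'bit' be allocated to a different file right now? */
static bool can_reuse(const struct bitmap_block *bb, size_t bit)
{
    if (bit_set(bb->b_data, bit))
        return false;             /* still allocated */
    if (bb->b_committed_data && bit_set(bb->b_committed_data, bit))
        return false;             /* freed, but the free has not
                                     committed yet */
    return true;
}

int main(void)
{
    unsigned char cur[1] = { 0x00 };  /* block 3 freed in running txn  */
    unsigned char com[1] = { 0x08 };  /* ...but allocated at last commit */
    struct bitmap_block bb = { cur, com };

    /* Block 3 cannot be reused until the freeing transaction commits. */
    return can_reuse(&bb, 3) ? 1 : 0;
}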