Hi,

On Sun, 2003-10-12 at 01:56, Vijayan Prabhakaran wrote:

> Ok, how about these two examples where ordered data journaling
> might give different semantics than full journaling mode.
>
> Assume ordered data journaling mode in both the examples.
>
> Example 1:
> ----------
>
> 1) A file is opened with O_TRUNC flag.
> 2) Some data is written into the file.
> 3) Data is written to its actual location in the disk.
> 4) Updated inode is written to the journal.
> 5) Updated inode is written to its actual location.
>
> Now if a crash happens between step 3 and 4, one will see
> jumbled data.

No. The O_TRUNC flag deallocates the data blocks. Those blocks are
just like any other deleted blocks: they will not be reused until the
delete commits. If we crash between 3 and 4, the old inode _and_ the
old data are intact.

> Example 2:
> ----------
>
> 1) A source code file, say foo.c is opened in read-write mode.
> 2) Its data is updated.
> 3) Updated data is written to its actual location.
> 4) Inode's modification time is changed and written to the journal.
> 5) Modified inode is written to its actual location.
>
> Assume that the system crashes between step 3 and 4. The system
> comes up again and we do a 'make' in the source directory. Now make
> will not recognize the modified data in foo.c, as foo.c's
> modification time is not updated.

Correct. Journaled data mode has the side-effect of maintaining a
strict order for data writes, both with respect to each other (i.e.
writes in a given order will always preserve that order after a crash)
and with respect to metadata such as timestamps.

That's not a data integrity issue, but it is certainly a consistency
issue. Unix semantics basically don't give you any consistency
guarantees whatsoever unless the application requests consistent
checkpoints via fsync/O_SYNC etc.; journaled data mode provides extra
consistency nonetheless.
Cheers,
 Stephen

_______________________________________________
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users