Graham Murray wrote:
Just a thought on the ongoing discussion of data loss with ext4 vs ext3.
Taking the common scenario:
Read oldfile
create newfile
write newfile data
close newfile
rename newfile to oldfile
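In C, the sequence is roughly the following (only a sketch: the function name and the tmp/path/buf/len parameters are placeholders, and error handling is minimal):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Replace 'path' with new contents using the create/write/close/rename
     * pattern above -- note there is no fsync(). */
    int replace_file(const char *path, const char *tmp,
                     const void *buf, size_t len)
    {
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t)len) {
            close(fd);
            return -1;
        }
        close(fd);                /* data may still be only in the page cache */
        return rename(tmp, path); /* path now refers to data not yet on disk */
    }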
When using this scenario, the application writer wants to ensure that
either the old or the new content is present. With delayed allocation, a
crash before the new data is allocated and flushed can leave a zero-length
file. Most of the suggestions on how to address this have involved syncing
the data before the rename, or making the rename itself sync the data.
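Those suggestions boil down to adding an fsync() on the new file before the rename, along these lines (again only a sketch, using the same placeholder names as above):

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, buf, len);
    fsync(fd);               /* force allocation and flush of newfile's data */
    close(fd);
    rename(tmp, path);       /* after a crash, either old or new content survives */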
Instead of 'bringing forward' the allocation and flushing of the data,
would it be possible to delay the rename until after the blocks for
newfile have been allocated and the data buffers flushed?
This would keep the performance benefits of delayed allocation and still
accommodate application developers' apparent dislike of using fsync(). It
would give better performance than syncing the data at rename time
(whether via fsync() or automatically) and would satisfy the requirement
that either the old or the new content is present.
I am not a filesystem developer, so I do not know how feasible this
would be.
This has been suggested, I believe. In filesystem terms it amounts to
inserting a barrier before the rename operation: the write operations
needed to carry out the rename must not take place until all write
operations from the preceding calls have completed.
--
error compiling committee.c: too many arguments to function