On 12/16/2010 07:22 AM, Olaf van der Spek wrote:
On Thu, Dec 9, 2010 at 1:03 PM, Olaf van der Spek <olafvdspek@xxxxxxxxx> wrote:
Hi,
Since the introduction of ext4, some apps/users have had issues with
file corruption after a system crash. It's not a bug in the FS AFAIK
and it's not exclusive to ext4.
Writing to a temp file, fsync(), then rename() is often proposed. However, the
durability aspect of fsync() isn't always required, and this approach has
other issues.
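For reference, the commonly proposed pattern looks roughly like the sketch
below (a minimal C sketch only; the file names are placeholders and error
handling is abbreviated):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch only: write buf to "data" via a temporary file so readers see
 * either the old contents or the complete new contents. */
static int write_file_atomically(const void *buf, size_t len)
{
    int fd = open("data.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len ||  /* short writes ignored for brevity */
        fsync(fd) != 0) {                       /* flush the data before the rename */
        close(fd);
        unlink("data.tmp");
        return -1;
    }
    if (close(fd) != 0) {
        unlink("data.tmp");
        return -1;
    }
    return rename("data.tmp", "data");          /* atomically replace the old file */
}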
What is the recommended way for atomic non-durable (complete) file writes?
I'm also wondering why filesystems commit after open/truncate but before
write/close. AFAIK this isn't necessary and is therefore suboptimal.
Somebody?
Olaf
Getting an atomic IO from user space down to storage is not really trivial.
What I think you would have to do is:
(1) understand the alignment and minimum IO size of your target storage device
which you can get from /sys/block (or libblkid)
(2) pre-allocate the file so that you do not need to update metadata for your write
(3) use O_DIRECT write calls that are minimum-IO-sized requests (a rough sketch follows below)
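A minimal sketch of those three steps, assuming the device is sda and
"datafile" is the file being written (both placeholder names; error handling
is abbreviated):

#define _GNU_SOURCE              /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* (1) minimum IO size: logical_block_size from /sys/block
     *     (libblkid can report the same information) */
    long blksz = 512;
    FILE *f = fopen("/sys/block/sda/queue/logical_block_size", "r");
    if (f) {
        if (fscanf(f, "%ld", &blksz) != 1)
            blksz = 512;
        fclose(f);
    }

    /* (2) pre-allocate so the write itself needs no metadata update */
    int fd = open("datafile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0)
        return 1;
    if (posix_fallocate(fd, 0, blksz) != 0)
        return 1;

    /* (3) O_DIRECT needs buffer, offset and length aligned to the IO size */
    void *buf;
    if (posix_memalign(&buf, (size_t)blksz, (size_t)blksz) != 0)
        return 1;
    memset(buf, 0, (size_t)blksz);
    memcpy(buf, "payload", 7);                       /* placeholder payload */

    if (pwrite(fd, buf, (size_t)blksz, 0) != blksz)  /* one minimum-sized IO */
        return 1;
    close(fd);
    free(buf);
    return 0;
}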
Note that there are still things that could break your atomic write: failures
in the storage device firmware, fragmentation in another layer (breaking up an
atomic write into transport-sized chunks), and so on.
In practice, I suspect most applications that need to do atomic transactions
use logging (and fsync()) calls....
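Very roughly, the logging idea looks like this (a sketch only; "journal" and
"datafile" are placeholder names, and crash recovery, i.e. replaying complete
journal records after a restart, is omitted):

#define _XOPEN_SOURCE 600        /* for pwrite */
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Append the intended change to a journal and fsync it before touching
 * the real file, so a crash leaves enough information to redo the write. */
static int log_then_apply(const void *record, size_t len, off_t offset)
{
    int jfd = open("journal", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (jfd < 0)
        return -1;
    if (write(jfd, record, len) != (ssize_t)len || fsync(jfd) != 0) {
        close(jfd);
        return -1;
    }
    close(jfd);

    /* only after the journal record is durable, apply it to the real file */
    int dfd = open("datafile", O_WRONLY | O_CREAT, 0644);
    if (dfd < 0)
        return -1;
    int rc = (pwrite(dfd, record, len, offset) == (ssize_t)len) ? 0 : -1;
    close(dfd);
    return rc;
}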
Was this the kind of answer that you were looking for?
Ric