On Sun, Dec 26, 2010 at 5:43 PM, Nick Piggin <npiggin@xxxxxxxxx> wrote:
>> Do you not understand what is meant by a complete file write?
>
> It is not a rigourous definition. What I understand it to mean may be
> different than what you understand it to mean. Particularly when you
> consider what the actual API should look like and interact with the rest
> of the apis.

Something like:

  f = open(..., O_ATOMIC | O_CREAT | O_TRUNC);
  write(...);            // 0+ times
  abort/rollback(...);   // optional
  close(f);

> OK, so please show how it helps.
>
> If it is a multi-file/dir archive, then you could equally well come back in
> an inconsistent state after crashing with some files extracted and
> some not, without atomic-write-multiple-files-and-directories API.

True, but at least each file will be valid by itself. So no broken
executables, images or scripts. Transactions involving multiple files
are outside the scope of this discussion.

>> Almost. Visibility to other process should be normal (I don't know the
>> exact rules), but commit to disk may be deferred.
>
> That's pretty important detail. What is "normal"? Will a process
> see old or new data from the atomic write before atomic write has
> committed to disk?

New data. Isn't that the current rule?

> Is the atomic write guaranteed to take an atomic snapshot of file
> and only specified updates?
>
> What happens to subsequent atomic and non atomic writes to the
> file?

It's about an atomic replace of the entire file data, not about making
an individual write() call atomic.

>>> Once you solve all those problems, then people will ask you to now
>>> solve them for multiple files at once because they also have some
>>> great use-case that is surely nothing like databases.
>>
>> I don't want to play the what if game.
>
> You must if you want to design a sane API.

Providing transaction semantics for multiple files is a far broader
proposal, and it is not necessary to implement this one.

>> Temp file, rename has issues with losing meta-data.
>
> How about solving that easier issue?

That would be nice, but it's not the only issue. I'm not sure, but Ted
appears to be saying that temp file + rename (but no fsync) isn't
guaranteed to work either. There's also the issue of not having
permission to create the temp file, and of having to ensure the temp
file is on the same volume (so the rename can work).
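
For comparison, this is roughly what the temp file + rename dance looks
like today (just a sketch, not tested; replace_file() is a hypothetical
helper, error handling is abbreviated, the temp file is assumed to be
on the same volume, and there is no fsync of the containing directory):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Replace "path" with "len" bytes of "data" via a temp file. */
  int replace_file(const char *path, const char *tmp,
                   const void *data, size_t len)
  {
      int fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0644);
      if (fd < 0)
          return -1;            /* e.g. no permission to create tmp */

      /* Short writes are treated as failure (not retried), and the
         data is fsync'd before the rename so an empty/partial file
         can't replace the old one after a crash. */
      if (write(fd, data, len) != (ssize_t)len || fsync(fd) < 0) {
          close(fd);
          unlink(tmp);
          return -1;
      }
      close(fd);

      /* rename() is atomic, but only if tmp is on the same volume as
         path (EXDEV otherwise), and the new file loses the original's
         owner, ACLs and xattrs -- the meta-data issue above. */
      if (rename(tmp, path) < 0) {
          unlink(tmp);
          return -1;
      }
      return 0;
  }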
>> It's simple to implement but it's not simple to use right.
>
> You do not have the ability to have arbitrary atomic transactions to the
> filesystem. If you show a problem of a half completed write after crash,
> then I can show you a problem of any half completed multi-syscall
> operation after crash.

It's not about arbitrary transactions.

> The simple thing is to properly clean up such things after a crash, and
> just use an atomic commit somewhere to say whether the file operations
> that just completed are now in a durable state. Either that or use an
> existing code that does it right.

That's not simple if you're talking about arbitrary processes and
files. It's not even that simple if you're talking about DBs. They do
implement it, but obviously that's not usable for arbitrary files.

>>> If we start adding atomicity beyond fundamental requirement of
>> The zero size issues of ext4 (before some patch). Presumably because
>> some apps do open, truncate, write, close on a file. I'm wondering why
>> an FS commits between truncate and write.
>
> I'm still not clear what you mean. Filesystem state may get updated
> between any 2 syscalls because the kernel has no oracle or infinite
> storage.

It just seems quite suboptimal. There's no need for infinite storage
(or an oracle) to avoid this.

Olaf