On Thu, Nov 24, 2011 at 03:53:43PM -0500, Ted Ts'o wrote:
> On Thu, Nov 24, 2011 at 12:27:11PM -0700, Matthew Wilcox wrote:
> > On the other hand, if there was a crash mid-write, they might also get a
> > 36k write that actually hit media (right?  Or do we guarantee that on
> > reboot you see a multiple of 128k?)
>
> Sure, but in the case of a crash we expect things to be in a wonky
> state.  The problem is if people assume atomic writes to files in the
> non-crash case, which has been a traditional Unix/Linux "feature".
> It's guaranteed by the standards about as much as "close() implies
> fsync()" is, but once application programmers start coding to such
> assumptions, they refuse to admit they were wrong, and blame the
> kernel programmers.

Sure, but resorting to kill -9 is almost the same as pushing the BRS.
Nobody's arguing in favour of non-fatal signals interrupting write()
[well, Honza was earlier, but we all talked him out of it].

-- 
Matthew Wilcox				Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."
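
For context, the userspace idiom that avoids depending on atomic writes is
the classic full-write loop, which retries on short writes and on EINTR.
A minimal sketch follows (the helper name full_write is illustrative, not
something from this thread; it assumes only POSIX write(2) semantics):

#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/*
 * Write the whole buffer, retrying on short writes and on EINTR.
 * Code written this way does not depend on write() being atomic.
 */
static ssize_t full_write(int fd, const void *buf, size_t count)
{
	const char *p = buf;
	size_t left = count;

	while (left > 0) {
		ssize_t n = write(fd, p, left);
		if (n < 0) {
			if (errno == EINTR)
				continue;	/* non-fatal signal: retry */
			return -1;		/* real error: report it */
		}
		p += n;			/* short write: advance past what landed */
		left -= n;
	}
	return count;
}

An application coded to this idiom never notices a signal-interrupted
write(); the programs that end up blaming the kernel are the ones that skip
the loop and assume the full count always lands in a single call.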