On Fri, 3 Jul 2009, Johannes Sixt wrote:
>
> Do we ever call fsync on a pipe? In this case, fsync should fail with
> EINVAL, but your implementation would wait until the reader has drained
> all data.

I'm pretty sure we never call fsync() on anything but a file. So I
suspect the patch for windows is fine - it only needs to care about
regular files.

That said, there's a reason we don't enable fsync by default - it's
generally not needed. I'm not sure what the NTFS crash semantics are,
but I _think_ NTFS does everything with a proper log, and then fsync
probably doesn't matter on windows.

The "fsync on CIFS" was not about "windows is crap, and because it's a
windows filesystem we need to fsync" - it was literally because there
was a bug in the Linux CIFS implementation that meant that pending
writes could be lost if the file was made read-only while they were
still pending.

And on non-journaling filesystems, fsync is obviously a good idea, but
even then the state is often recoverable even in the face of a crash.
Now, without fsync, you might have to work at it a bit and know what to
do - like _manually_ resetting to the previous state and re-doing the
last corrupt commit, throwing out any corrupt objects etc.

So enabling fsync can make things _much_ easier for recovery (to the
point of making it possible at all for somebody who doesn't know the
git object model), but is generally not a hard requirement. Giving
windows git the capability to do it sounds like a good idea, though.

Keep in mind: enabling fsync _will_ make object creation slower. Most
people probably won't care much, though. It really only matters under
some fairly rare circumstances - I care, because it matters for things
like "apply 200 patches in one go as fast as you can", but for most
'normal' workflows you'd probably never even notice.
		Linus