On Thu, 23 Apr 2009, Johannes Schindelin wrote:
>
> It seems that accessing NTFS partitions with ufsd (at least on my EeePC)
> has an unnerving bug: if you link() a file and unlink() it right away,
> the target of the link() will have the correct size, but consist of NULs.

So I assume that the way ufsd works is that it implements a user-level
NTFS driver and then exposes it as an NFS mount over local networking
(and perhaps also remotely?).

> It seems as if the calls are simply not serialized correctly, as
> single-stepping through the function move_temp_to_file() works
> flawlessly.

So presumably there are some cached writes somewhere (an NFS client
_should_ not cache writes past a close(), but maybe there is a bug
there, and/or buffering inside ufsd itself means that the writes are
still queued up). And when the unlink() happens, it loses the writes
to the original file, and thus to the new one too.

If you _don't_ apply this patch, does

	[core]
		fsyncobjectfiles = true

hide the bug?

I don't disagree with your patch (apart from the error-number games),
but I'd like to understand what's going on. I also wonder if we should
make that fsync thing be the default.

[ That said, I think the http walker and possibly others may be using
  'move_temp_to_file()' without going through any of the paths that
  know about fsync, so 'fsyncobjectfiles' wouldn't fix all cases
  anyway. ]

Hmm. I hate how we have problems with that link/unlink sequence, and
rename() would be much better, but I'd hate overwriting existing
objects even _more_, and the normal POSIX rename() behavior is to
overwrite any old object. So link/unlink is supposed to be a lot
safer, but it's clearly problematic.

		Linus