On 20.04.2010 14:42, Sebastian Schuberth wrote:
>> Shouldn't the loop be left in the successful case, too? write(2) is
>> allowed to write less than requested, so the caller already needs to
>> deal with that case anyway.
>
> I prefer to make the wrapper as transparent as possible. If a direct
> call to write would not write less than requested, the wrapper should
> not either.

After the call failed, we don't know how many bytes would have been
written had it succeeded. But I agree with Albert's reasoning to use
the knowledge of a working chunk size in order to minimize the number
of write(2) calls. Otherwise we'd have to search for a working size
again and again, generating lots of failing calls.

> I've updated work/issue-409 in 4msysgit.git accordingly.

This patch doesn't help in the test case I cobbled together quickly.
It's a Windows XP SP3 client on VMWare mapping a file share exported
by a NetApp filer, over a VPN. It's very slow, and I admit that it's
a weird setup. I wouldn't actually use it that way, but I couldn't
find another file share on short notice.

I can check out a 1MB file, but checking out a 32MB file fails. I've
added an fprintf() to the loop, and I can see that it halves the size
and retries, as intended, until it eventually hits zero. The file is
created with the correct size (32MB), though. The first failed
write(2) call needs to be undone somehow before we can try again, it
seems. Do we have to seek back or truncate the file?

Replacing the body of mingw_write() with the following line allows me
to check out the 32MB file, by the way:

	return write(fd, buf, min(count, 1024 * 1024));

René
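For illustration, here is a minimal sketch of the kind of wrapper
being discussed: cap the request at a chunk size that is remembered
across calls, and halve that size whenever write(2) fails. This is
not the actual compat/mingw.c patch from work/issue-409; the names
xwrite_chunked and MAX_IO_CHUNK are made up for this example, and the
error handling is deliberately simplistic.

	#include <errno.h>
	#include <stddef.h>
	#include <sys/types.h>
	#include <unistd.h>

	#define MAX_IO_CHUNK (8 * 1024 * 1024)

	/* last chunk size known to work; shared across calls */
	static size_t io_chunk = MAX_IO_CHUNK;

	ssize_t xwrite_chunked(int fd, const void *buf, size_t count)
	{
		for (;;) {
			size_t len = count < io_chunk ? count : io_chunk;
			ssize_t result = write(fd, buf, len);

			if (result >= 0)
				return result; /* caller copes with short writes */
			if (errno == EINTR)
				continue;
			/*
			 * Assume the request was too large for the
			 * (network) filesystem and retry with half the
			 * size; give up once we cannot shrink further.
			 */
			if (io_chunk <= 1)
				return -1;
			io_chunk /= 2;
		}
	}

Callers still have to handle short writes, just as with a plain
write(2), so the interface does not change. Whether a failed call
also has to be undone first, for example by seeking back or
truncating the file before the retry, is exactly the open question
raised above; this sketch does not attempt that.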