On 4/19/2010 22:43, René Scharfe wrote:
> On 19.04.2010 14:45, Sebastian Schuberth wrote:
>> +#undef write
>> +ssize_t mingw_write(int fd, const void *buf, size_t count)
>> +{
>> +	ssize_t written = 0;
>> +	size_t total = 0, size = count;
>> +
>> +	while (total < count && size > 0) {
>> +		written = write(fd, buf, size);
>> +		if (written < 0 && errno == EINVAL) {
>> +			// There seems to be a bug in the Windows XP network stack that
>> +			// causes writes with sizes > 64 MB to fail, so we halve the size
>> +			// until we succeed or ultimately fail.
> 
> C style comments (/*...*/) are preferred over C++ style comments (//...)
> for git.
> 
> Is there a known-good size, or at least a mostly-working one? Would it
> make sense to start with that size instead of halving and trying until
> that size is reached?
> 
>> +			size /= 2;
>> +		} else {
>> +			buf += written;
>> +			total += written;
> 
> What about other errors? You need to break out of the loop instead of
> adding -1 to buf and total, right?

Thanks for the thorough review. I had a gut feeling that something was
wrong with the code because of its structure, but I didn't stare at it
long enough to notice this.

I suggest this structure:

	write
	if success or failure is not EINVAL
		return
	do
		reduce size
		if larger than known (presumed?) maximum
			reduce to that maximum
		write
	while not success and failure is EINVAL
	while not failure and exactly reduced size written
		write more

I don't think that we will observe any short writes *after* the size has
been reduced, which is what Albert is concerned about. Somebody who
observes the failure that this patch works around could instrument the
function to check whether short writes are really a problem.

-- Hannes
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html