On Mon, 16 Apr 2007, Junio C Hamano wrote:
>
> Just for the record, I do not think anybody during that #git
> discussion actually proved that http-push was the culprit. It
> is a very plausible working conjecture, though.

I looked at http-push.c once more, and there is a very marked lack of
any error testing.

It actually tries to be pretty careful, i.e., it seems that every PUT
request always goes to a temp-file, and then it does a MOVE request
after that, and things seem to properly abort on most errors. But the
actual data integrity is obviously impossible to check on the remote,
and a quick grep showed that not all errors even set "aborted", which
seems to imply that some error conditions can occur without http-push
then aborting the ref update.

For example, if "start_active_slot()" fails, "aborted" isn't generally
set. I don't know if that is ever a problem in practice (it can only
trigger with USE_CURL_MULTI), but it's an example of what looks pretty
fragile.

So we can fix up some of these kinds of things, but considering that we
can't really validate the end result on the remote, I'd still
personally be quite leery of pushing by http..

> I think the fetch side does the right thing, more or less, by
> downloading to a temporary file and using move_temp_to_file()
> after validating the SHA-1 matches.

Yeah, on the pulling side we are simply much better off, because we can
validate things after the operation has finished.

On the pushing side, we could obviously try to re-download the objects
or something, but validation would basically have to double the network
usage, and even then we might get screwed by some caching layer!

		Linus
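
To make the "aborted" point concrete, here is a minimal sketch of the
control flow being described. This is not the actual http-push.c code:
"struct slot", the failing-slot setup, and start_move() are hypothetical
stand-ins, and only the shape of the bug (and its one-line fix) follows
what the mail describes about the real code.

    #include <stdio.h>

    /* Hypothetical stand-ins for git's request machinery; the real
     * code in http-push.c drives libcurl request slots instead. */
    struct slot { int ok; };

    static int aborted;   /* consulted before the remote ref update */

    /* Models start_active_slot(): nonzero means the request started. */
    static int start_active_slot(struct slot *slot)
    {
            return slot->ok;
    }

    static void start_move(struct slot *slot)
    {
            if (!start_active_slot(slot)) {
                    fprintf(stderr, "Unable to start MOVE request\n");
                    /* The fragile version returns here without ever
                     * touching "aborted", so the ref update on the
                     * remote still goes ahead.  The one-line fix: */
                    aborted = 1;
            }
    }

    int main(void)
    {
            struct slot failing = { 0 };
            start_move(&failing);
            if (aborted)
                    fprintf(stderr, "push aborted, ref not updated\n");
            return 0;
    }

The point of the pattern is that every early-exit error path has to
leave the process in a state where the final ref update refuses to run;
any path that merely prints a message and drops the request silently
skips that safety net.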
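
For contrast, a minimal sketch of the fetch-side pattern Junio
describes: write to a temporary file, verify the SHA-1, and only then
move the file into place. This is not git's actual move_temp_to_file();
verify_and_move() is a hypothetical helper, it uses OpenSSL's SHA1 for
brevity (link with -lcrypto), and it glosses over the fact that real
loose objects are zlib-deflated, with the hash computed over the
inflated contents.

    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Hash the downloaded temp file and rename it into the object
     * store only if the content matches the expected SHA-1.  On a
     * mismatch the temp file is discarded, so a corrupt or truncated
     * download can never end up as a valid-looking object. */
    static int verify_and_move(const char *tmpfile, const char *dest,
                               const unsigned char expected[SHA_DIGEST_LENGTH])
    {
            unsigned char buf[8192], sha1[SHA_DIGEST_LENGTH];
            SHA_CTX ctx;
            size_t n;
            FILE *f = fopen(tmpfile, "rb");

            if (!f)
                    return -1;
            SHA1_Init(&ctx);
            while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                    SHA1_Update(&ctx, buf, n);
            fclose(f);
            SHA1_Final(sha1, &ctx);

            if (memcmp(sha1, expected, SHA_DIGEST_LENGTH)) {
                    remove(tmpfile);
                    return -1;
            }
            return rename(tmpfile, dest);
    }

Because the object name *is* the hash of its contents, the fetching
side gets end-to-end integrity checking for free; the pushing side has
no equivalent check, which is exactly why the mail is leery of it.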