On Thu, May 26, 2011 at 11:35:51AM -0700, Junio C Hamano wrote:

> The caller in index_stream() reads what it could, writes what it read, and
> comes back and makes another call to read_in_full(), at which point either
> it gets an error and the whole thing would error out (i.e. no difference
> from before), or if it was a transient error that interrupted the
> previous read_in_full(), it can keep reading (with this patch it will not
> have a chance to do so).

The problem is that most callers are not careful enough to call
read_in_full() again and discover that there might have been an error
behind the previous short result. They see a read shorter than what they
asked for, and assume it was EOF.

But even if we assume all callers are careful and want to handle these
transient errors, then:

  1. What sort of transient errors are we talking about? We already
     handle retrying after EAGAIN and EINTR via xread.

  2. If we get a non-transient error, are we guaranteed to get the same
     error if we make some other syscalls and then call read() again?
     Otherwise we are masking it.

But really, it just seems like a non-intuitive interface to me (as
evidenced by the number of callers who _didn't_ get it right). If a
caller like index_stream() is really interested in reading and
processing some data up to a certain size, shouldn't it just be using
xread directly?

-Peff