On Wed, Nov 04, 2015 at 01:01:53PM -0800, Junio C Hamano wrote:

> But the symptom does not have to be as severe as a total deadlock to
> be problematic. If we block B (and other tasks) by not reading from
> them quickly because we are blocked on reading from A, which may
> take forever (in timescale of B and other tasks) to feed us enough
> to satisfy strbuf_read_once(), we are wasting resource by spawning B
> (and other tasks) early when we are not prepared to service them
> well, on both our end and on the other side of the connection.

I'm not sure I understand this line of reasoning. It is entirely
possible that I have not been paying close enough attention and am
missing something subtle, so please feel free to hit me with the clue
stick. But why would we ever block reading from A?

If poll() reported to us that "A" is ready to read and we call
strbuf_read_once(), we will make a _single_ read() call (which was,
after all, the point of adding strbuf_read_once() in the first place).
So even if descriptor "A" isn't non-blocking, why would we block? Only
if the OS told us via poll() that we are ready to read when we somehow
are not, which, AFAIK, would be a bug in the OS.

So I'm not sure I see why we need to be non-blocking at all here. If
we are correctly hitting poll() and doing a single read on each
descriptor that claims to be ready (rather than trying to soak up all
of its available data), then we should never block. And we should
never starve one process, either: even without blocking, we could be
in a busy loop slurping from A and starve B, but by hitting the
descriptors in round-robin order on each poll() iteration, we make
sure they all make progress.

What am I missing?

-Peff