On Tue, Nov 30, 2021 at 02:14:25AM -0500, Jeff King wrote:

> - flushing causes us to block. This implies our stdout is connected to
>   a pipe or socket, and the other side is not expecting to read. A
>   plausible case here is a client sending us a big input which we find
>   to be bogus (maybe index-pack checking an incoming pack). We call
>   die() to complain about the input, but the client is still writing.
>   In the current code, we'd write out our error and then exit; the
>   client would get SIGPIPE or a write() error and abort. But with a
>   flush here, we could block writing back to the client, and now we're
>   in a deadlock; they are trying to write to us but we are no longer
>   reading, and we are blocked trying to get out a little bit of
>   irrelevant stdout data.
>
>   I _think_ we're probably OK here. The scenario above means that the
>   caller is already doing asynchronous I/O via stdio and is subject to
>   deadlock, because the segment of buffer we try to flush here _could_
>   have been flushed already under the hood, which would have caused
>   the same blocking. A careful caller might be using select() or
>   similar to decide when it is OK to write, but I find it highly
>   unlikely they'd be using stdio in that case.
>
> Of the two, the deadlock case worries me more, just because it would be
> quite subtle and racy. As I said, I think we may be OK, but my reasoning
> there is pretty hand-wavy.

Thinking on this a bit more: I guess as soon as we exit, libc would call
the equivalent of fflush(NULL) anyway, and try that same flush. So in a
sense this is just ordering things a bit differently, and not
introducing any new problems.

(Unless libc is clever enough to avoid blocking, but that doesn't seem
like something we could or should rely on in general.)

-Peff
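
To make the ordering point above concrete, here is a minimal standalone
sketch (not Git code; the "sleep" reader, the buffer sizes, and the 64KB
pipe capacity are just illustrative assumptions). It stashes more stdout
data in stdio's buffer than a pipe with a non-reading peer can absorb,
so the process blocks at flush time: either at the explicit
fflush(stdout), or, if that line is dropped, at the implicit
fflush(NULL)-equivalent that normal exit performs.

  /*
   * Illustration only: buffered stdout data headed for a pipe whose
   * reader never reads blocks at flush time, whether the flush is
   * explicit or the one done implicitly at exit.
   *
   * Try: ./a.out | sleep 60
   */
  #include <stdio.h>
  #include <string.h>

  /* big, static stdio buffer so nothing reaches the pipe early */
  static char stdio_buf[1 << 20];

  int main(void)
  {
          char chunk[4096];
          memset(chunk, 'x', sizeof(chunk));

          setvbuf(stdout, stdio_buf, _IOFBF, sizeof(stdio_buf));

          /* 512KB buffered; a typical Linux pipe holds only ~64KB */
          for (int i = 0; i < 128; i++)
                  fwrite(chunk, 1, sizeof(chunk), stdout);

          fprintf(stderr, "fatal: pretend die() message\n");

          /*
           * This blocks, since "sleep" never reads its stdin. Remove
           * it and the flush done during exit blocks the same way;
           * only the ordering changes.
           */
          fflush(stdout);
          return 0;
  }

Either way the blocking is identical; only where it happens moves, which
is the "just ordering things a bit differently" argument above.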