Taylor Blau <me@xxxxxxxxxxxx> writes:

>> @@ -105,7 +108,12 @@ static int send_ref(const char *refname, const struct object_id *oid,
>>  	}
>>
>>  	strbuf_addch(&refline, '\n');
>> -	packet_write(1, refline.buf, refline.len);
>> +
>> +	packet_buf_write_len(&data->out, refline.buf, refline.len);
>> +	if (data->out.len >= OUT_WRITE_SIZE) {
>> +		write_or_die(1, data->out.buf, data->out.len);
>> +		strbuf_reset(&data->out);

This is somewhat unexpected.  When one introduces a size limit, it is
more natural to make it an upper bound and arrange the code more like:

	if (currently buffered plus new data exceeds the size limit) {
		flush the currently buffered data;
		if (new data alone exceeds the size limit)
			flush the new data;
		else
			buffer the new data;
	} else {
		buffer the new data;
	}

When a single packet exceeds the size limit, you'd end up making an
oversize write() call anyway, but otherwise this keeps each write at
or below the limit, not slightly above it, which makes it much easier
to justify why the limit is set at the level it is.

Also, why is it 32k?  Our jumbo packet limit is 65k, I think, and it
would probably be easier to explain if we matched it.

>> +	}
>
> None of this looks wrong to me, but it might be nice to teach the
> packet writer a buffered mode that would handle this for us.  It would
> be especially nice to bolt the final flush onto packet_flush(), since
> it is otherwise easy to forget to do.

FWIW, the packet-line part of the system was written from the beginning
with an eye toward allowing buffering until _flush() comes; we may have
added some buggy conversation paths that would deadlock if we made the
non-flush packets fully buffered, so some fixes may be needed there,
but I do not expect the fallout would be too hard to diagnose.  It may
be worth trying that avenue first before piling user-level buffering on
top, as this patch does.

Thanks.
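For illustration, the upper-bound arrangement described above could look
something like the following self-contained sketch.  OUT_WRITE_SIZE,
flush_buf(), and buffered_write() are hypothetical stand-ins for this
sketch only (the real patch would flush via write_or_die() into a strbuf);
it only demonstrates the bounded-flush logic, not the actual code:

```c
#include <string.h>

/* Hypothetical limit; chosen here to match the jumbo packet size. */
#define OUT_WRITE_SIZE 65536

static char out[OUT_WRITE_SIZE];
static size_t out_len;

/* Stand-in for write_or_die(1, ...); here we only count flushed bytes. */
static size_t flushed;

static void flush_buf(const char *buf, size_t len)
{
	flushed += len;
}

/*
 * Buffer 'len' bytes of 'buf', keeping every flush at or below
 * OUT_WRITE_SIZE except when a single packet alone exceeds it.
 */
static void buffered_write(const char *buf, size_t len)
{
	if (out_len + len > OUT_WRITE_SIZE) {
		/* flush what is buffered so this write stays bounded */
		flush_buf(out, out_len);
		out_len = 0;
		if (len > OUT_WRITE_SIZE) {
			/* oversize packet: write it through directly */
			flush_buf(buf, len);
			return;
		}
	}
	memcpy(out + out_len, buf, len);
	out_len += len;
}
```

A caller would still need a final flush of any leftover bytes after the
last packet, which is why bolting that onto packet_flush() is attractive.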