Patrick Steinhardt <ps@xxxxxx> writes:

>>
>> > > +	fprintf(fetch_head->fp, "%s\t%s\t%s",
>> > > +		old_oid, merge_status_marker, note);
>> > > +	for (i = 0; i < url_len; ++i)
>> > > +		if ('\n' == url[i])
>> > > +			fputs("\\n", fetch_head->fp);
>> > > +		else
>> > > +			fputc(url[i], fetch_head->fp);
>> > > +	fputc('\n', fetch_head->fp);
>>
>> ... these stdio operations for a single record would result in a
>> single atomic write() that would not compete with writes by other
>> processes. So I wouldn't call "the current design allows ... at the
>> same time." It has been broken for years and nobody complained.
>
> Fair enough. I somehow blindly assumed that `git fetch --all`, which
> does invoke `git fetch --append`, did perform the fetch in parallel.
> If that's not the case: all the better.

The "--all" option started as "one after another, one at a time", but
these days it goes through fetch_multiple(), where we started to use
the run_processes_parallel() API without giving much thought to what
it would do to the writing of FETCH_HEAD; since around d54dea77
(fetch: let --jobs=<n> parallelize --multiple, too, 2019-10-05), this
codepath is utterly broken wrt FETCH_HEAD, I would have to say.

> The buffering of a single line is exactly what we're doing now in the
> non-atomic case. Previously there had been multiple writes; now we
> only call `strbuf_write()` once on the buffer for each reference.

Exactly.
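
For illustration, here is a minimal sketch of that buffer-then-write
approach using the strbuf API (the helper name and its parameters are
made up for this example; they are not the actual code in the series):

    /* Hypothetical helper: accumulate one FETCH_HEAD record in a
     * strbuf, then hand the whole thing to stdio in one call. */
    static void write_one_fetch_head_record(FILE *fp, const char *old_oid,
					    const char *merge_status_marker,
					    const char *note,
					    const char *url, size_t url_len)
    {
	    struct strbuf buf = STRBUF_INIT;
	    size_t i;

	    strbuf_addf(&buf, "%s\t%s\t%s",
			old_oid, merge_status_marker, note);
	    for (i = 0; i < url_len; i++)
		    if (url[i] == '\n')
			    strbuf_addstr(&buf, "\\n"); /* escape newlines in URL */
		    else
			    strbuf_addch(&buf, url[i]);
	    strbuf_addch(&buf, '\n');

	    /* one stdio call per record, instead of many small ones */
	    strbuf_write(&buf, fp);
	    strbuf_release(&buf);
    }

Note that even this does not guarantee a single atomic write(2) at the
OS level, since fp is still a buffered stdio stream; it merely replaces
the many small stdio calls per record with one, which is what the
non-atomic codepath does after the change.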