> Jeff King <peff@xxxxxxxx> writes:
>
> > I think it was this one:
> >
> >   https://lore.kernel.org/git/20160927191955.mympqgylrxhkp24n@xxxxxxxxxxxxxxxxxxxxx/
> >
> > I still think it's a good idea, but the hard part is lib-ifying all of
> > the functions that call die() to instead return an error up the stack
> > (and handle cleanup, etc). Which is why I never really pushed it
> > further. But if we're going to be lib-ifying such calls anyway, I think
> > it's nice to do this flexible thing (from their perspective it's no more
> > work to trigger the callback than it is to call error() and return).
>
> Yeah, I almost 80%+ agree.

Thanks Peff for the pointer. That looks like a good idea, and more
comprehensive than my suggested approach.

> The remainder of 20% is that I am not so convinced that (fmt_string
> + va_list) that isn't pretty much usable for anything but generating
> human readable error messages is enough. It is certainly a good
> enough replacement for (and much better than) the approach to
> prepare an error string in a "struct strbuf err" that was passed by
> the caller, but I am not sure if that is a good bar to aim to pass
> to begin with ;-).

I think that functions that inform their callers about different errors
already do so through return values, but in any case I think that Peff's
idea can be extended once we need it (e.g. by adding error reporting
functions that can fall back to the string function if the special
function is not present).

> > That said, I do think for this particular case, your "just run it in a
> > sub-process" suggestion may be simpler and more robust.
>
> For this particular case, with the performance and all, I agree that
> the stupid and robust approach would be the best.

I'm concerned that we will be painting ourselves into a corner here: I
have been appreciating the richer interface that a C call provides us,
compared to sub-processes, where we have to communicate through a single
input stream and a single output stream.
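To make the "fall back to the string function" extension concrete, here
is a rough sketch of what I have in mind. All the names here are
invented for illustration; none of this is existing git API. A lib-ified
function would call lib_error() instead of die(), and a caller that
wants richer reporting installs a callback, while everyone else gets the
formatted-string behavior:

```c
#include <stdarg.h>
#include <stdio.h>

/*
 * Hypothetical sketch: a caller-installed callback receives the
 * (fmt_string + va_list) pair; if none is installed, we fall back
 * to formatting the message to stderr, like error() does today.
 */
typedef void (*error_callback_fn)(const char *fmt, va_list ap, void *data);

static error_callback_fn error_cb;
static void *error_cb_data;

void set_error_callback(error_callback_fn cb, void *data)
{
	error_cb = cb;
	error_cb_data = data;
}

int lib_error(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	if (error_cb)
		error_cb(fmt, ap, error_cb_data);
	else {
		vfprintf(stderr, fmt, ap);
		fputc('\n', stderr);
	}
	va_end(ap);
	return -1;	/* so callers can write "return lib_error(...)" */
}
```

A richer, machine-readable reporting function could later be added
alongside this one, degrading to the string path when the caller has
only installed the plain callback.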
For example, "wanted-refs" and packfile URIs (a.k.a. CDN offloading)
were made much simpler because it was quite easy to communicate back the
updated hashes for refs (for "wanted-refs") and more than one lockfile
(for packfile URIs). If we were doing it with "fetch" in remote helpers
(like in HTTP protocol v0), we would have to augment the textual
protocol to communicate back the updated hashes and the list of
lockfiles - doable, but more cumbersome. (That is also why I only
implemented those for protocol v2, because protocol v2 had
"stateless-connect" for HTTP.)

Being limited to sub-processes also stopped me from implementing an
improvement to how fetch-pack calls index-pack - when I added debugging
information (list of refs and hashes) when writing the .promisor file,
what I wanted to do was to pass a callback (function pointer + data
pointer) to index-pack to tell it what to write to .promisor, but I
couldn't do that. I also couldn't write the list to the stdin of
index-pack, because the stdin was already used to communicate the
packfile. I could write the list to a temporary file and pass that name
to index-pack, but by that stage I thought it would be simpler to just
have fetch-pack write the .promisor file itself, even though I think
that this should be an index-pack concern, not a fetch-pack one. (Also,
I think that debugging within a process is easier than debugging across
processes, but that might not be a concern that other people share.)

In this particular case, when doing the lazy fetch, we pass a special
flag that skips some steps - among other things, negotiation. Admittedly
we could add this flag to "fetch" (or whatever command we're going to
invoke to implement this sub-process part), or add the equivalent
general-purpose flags/configs (e.g. fetch.negotiationAlgorithm=null)
(but I haven't looked thoroughly at what flags would be needed).
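For the record, the "pass a callback to index-pack" idea would have
looked roughly like this. Again, every name here is hypothetical - there
is no in-process index-pack entry point today, which is exactly the
point - but it shows the shape of the interface a C call would give us:

```c
#include <stdio.h>

/*
 * Hypothetical sketch: if index-pack were callable in-process,
 * fetch-pack could hand it a function pointer plus a data pointer,
 * and index-pack would invoke the callback when it is ready to
 * write the .promisor file, keeping the file an index-pack concern.
 */
struct promisor_info {
	const char **lines;	/* e.g. refnames and hashes to record */
	size_t nr;
};

typedef void (*write_promisor_fn)(FILE *out, void *data);

/* stand-in for an in-process index-pack entry point */
static void index_pack_run(write_promisor_fn write_promisor, void *data,
			   FILE *promisor_file)
{
	/* ... index the packfile here ... */
	if (write_promisor)
		write_promisor(promisor_file, data);
}

/* the callback fetch-pack would supply */
static void write_refs_to_promisor(FILE *out, void *data)
{
	struct promisor_info *info = data;
	size_t i;

	for (i = 0; i < info->nr; i++)
		fprintf(out, "%s\n", info->lines[i]);
}
```

With sub-processes, the equivalent needs a side channel (stdin is taken
by the packfile) or a temporary file, which is what pushed me to have
fetch-pack write .promisor itself.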
These all seem surmountable, and I can't think of anything
insurmountable, but I'm still worried that we're painting ourselves into
a corner.