> On 27 Feb 2017, at 10:58, Jeff King <peff@xxxxxxxx> wrote:
> 
> On Sun, Feb 26, 2017 at 07:48:16PM +0100, Lars Schneider wrote:
> 
>> +If the request cannot be fulfilled within a reasonable amount of time
>> +then the filter can respond with a "delayed" status and a flush packet.
>> +Git will perform the same request at a later point in time, again. The
>> +filter can delay a response multiple times for a single request.
>> +------------------------
>> +packet: git< status=delayed
>> +packet: git< 0000
>> +------------------------
>> +
> 
> So Git just asks for the same content again? I see two issues with that:
> 
>   1. Does git have to feed the blob content again? That can be expensive
>      to access or to keep around in memory.
> 
>   2. What happens when the item isn't ready on the second request? I can
>      think of a few options:
> 
>        a. The filter immediately says "nope, still delayed". But then
>           Git ends up busy-looping with "is this one ready yet?"
> 
>        b. The filter blocks until the item is ready. But then if other
>           items _are_ ready, Git cannot work on processing them. We lose
>           parallelism.
> 
>        c. You could do a hybrid: block until _some_ item is ready, and
>           then issue "delayed" responses for everything that isn't
>           ready. Then if you assume that Git is looping over and over
>           through the set of objects, it will either block or pick up
>           _something_ on each loop.
> 
>           But it makes a quadratic number of requests in the worst case.
>           E.g., imagine you have N items and the last one is available
>           first, then the second-to-last, and so on. You'll ask N times,
>           then N-1, then N-2, and so on.

I completely agree - I need to change that. However, the goal of the v2
iteration was to get the "convert" interface in an acceptable state.
That's what I intended to say in the patch comment section: "Please
ignore all changes behind async_convert_to_working_tree() and
async_filter_finish() for now as I plan to change the implementation as
soon as the interface is in an acceptable state."

> 
> I think it would be much more efficient to do something like:
> 
>   [Git issues a request and gives it an opaque index id]
>   git> command=smudge
>   git> pathname=foo
>   git> index=0
>   git> 0000
>   git> CONTENT
>   git> 0000
> 
>   [The data isn't ready yet, so the filter tells us so...]
>   git< status=delayed
>   git< 0000
> 
>   [Git may make other requests, that are either served or delayed]
>   git> command=smudge
>   git> pathname=foo
>   git> index=1
>   git> 0000
>   git< status=success
>   git< 0000
>   git< CONTENT
>   git< 0000
> 
>   [Now Git has processed all of the items, and each one either has its
>   final status, or has been marked as delayed. So we ask for a delayed
>   item]
>   git> command=wait
>   git> 0000
> 
>   [Some time may pass if nothing is ready. But eventually we get...]
>   git< status=success
>   git< index=0
>   git< 0000
>   git< CONTENT
>   git< 0000
> 
> From Git's side, the loop is something like:
> 
>   while (delayed_items > 0) {
>           /* issue a wait, and get back the status/index pair */
>           status = send_wait(&index);
>           delayed_items--;
> 
>           /*
>            * use "index" to find the right item in our list of files;
>            * the format can be opaque to the filter, so we could index
>            * it however we like. But probably numeric indices in an array
>            * are the simplest.
>            */
>           assert(index > 0 && index < nr_items);
>           item[index].status = status;
>           if (status == SUCCESS)
>                   read_content(&item[index]);
>   }
> 
> and the filter side just attaches the "index" string to whatever its
> internal queue structure is, and feeds it back verbatim when processing
> that item finishes.

That could work! I had something like that in mind: I teach Git a new
command "list_completed" or similar. The filter blocks this call until
at least one item is ready for Git.
Then the filter responds with a list of paths that identify the "ready
items". Git then asks for these ready items by path alone, without
sending any content again.

Could that work? Wouldn't the path be unique enough to identify a blob
within a single filter run?

Thanks,
Lars
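PS: To make the idea concrete, here is a minimal sketch (my own
illustration in Python, not code from Git or from this series; the
"list_completed" name is the one proposed above, everything else is
made up) of the filter-side bookkeeping such a command would need:
delayed items are keyed by path, a worker marks them complete, and the
call blocks until it can hand Git a non-empty list of ready paths.

```python
# Hypothetical sketch of filter-side bookkeeping for a "list_completed"
# command. Assumes each path is requested at most once per filter run,
# so the path alone can identify the item.
import queue
import threading


class DelayQueue:
    """Tracks smudge requests that were answered with status=delayed."""

    def __init__(self):
        self._ready = queue.Queue()  # paths whose content became available
        self._content = {}           # path -> smudged content (None = pending)
        self._lock = threading.Lock()

    def delay(self, path):
        """Record that `path` was answered with status=delayed."""
        with self._lock:
            self._content.setdefault(path, None)

    def complete(self, path, content):
        """Called by a worker thread once the content for `path` is ready."""
        with self._lock:
            self._content[path] = content
        self._ready.put(path)

    def list_completed(self):
        """Block until at least one delayed item is ready, then return
        the list of all currently ready paths (never an empty list)."""
        paths = [self._ready.get()]  # blocks until something completes
        while True:
            try:
                paths.append(self._ready.get_nowait())
            except queue.Empty:
                return paths

    def content_for(self, path):
        """Serve the follow-up request Git would make for a ready path."""
        with self._lock:
            return self._content[path]
```

Git's follow-up request then needs only the path, which also addresses
the "does git have to feed the blob content again" concern; the open
question about uniqueness remains, since two entries checked out under
the same path in one run would collide in the `_content` map.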