On Mon, Nov 14, 2016 at 1:09 PM, Lars Schneider <larsxschneider@xxxxxxxxx> wrote:
> Hi,
>
> Git always performs a clean/smudge filter on files in sequential order.
> Sometimes a filter operation can take a noticeable amount of time.
> This blocks the entire Git process.
>
> I would like to give a filter process the possibility to answer Git with
> "I got your request, I am processing it, ask me for the result later!".
>
> I see the following way to realize this:
>
> In unpack-trees.c:check_updates() [1] we loop through the cache
> entries and "ask me later" could be an acceptable return value of the
> checkout_entry() call. The loop could run until all entries returned
> success or error.

Late to this thread, but here is an answer nevertheless.

I am currently working on getting submodules working for working-tree-modifying
commands (most prominently checkout, but also read-tree -u and any other caller
of the code in unpack-trees.c). Once submodules are supported and used, I
anticipate that putting the files in the working tree on disk will become a
bottleneck, i.e. the checkout will take far too long for an oversized project.
So in the future we will have to do something to make checkout fast again,
which IMHO is threading. My current vision is to have checkout automatically
choose a number of threads based on the expected workload, c.f.
preload-index.c, lines 18-25.

> The filter machinery is triggered in various other places in Git and
> all places that want to support "ask me later" would need to be patched
> accordingly.

I think this makes sense, even in a threaded git-checkout. I assume this idea
will be implemented before threading hits checkout, so a question on the
design: who determines the acceptable workload? From reading this email, it
seems to be solely the filter, which uses as many threads/processes as it
thinks is OK. Would it be possible to enhance the protocol further so that Git
can also have a say in the workload, i.e. tell the filter that it is allowed to
use up to (N-M) threads, as Git itself already uses M out of N configured
threads? (I do not want to discuss the details here, only whether such a thing
is viable with this approach as well.)

Thanks,
Stefan