> Jonathan and I discussed this a little more offline and agreed to leave
> the implementation as is.
>
> Jonathan had suggested "have one callback invocation apply to all hooks
> that are running now", either by having the callback iterate over the
> task queue or by having the run-command lib take the result from the
> callback and have *that* iterate over the task queue. The idea being,
> one pointer to one copy of source material is easier to handle than
> many.
>
> I suggested that the callback's implementation of the second version of
> that, where the library takes care of the "and do it for each task in
> progress" part, would be pretty much identical to the callback's
> implementation as it is in this patch, except that as it is here the
> context pointer is per-task, whereas as Jonathan suggests the context
> pointer is per-entire-hook-invocation - so there isn't much complexity
> difference between the two, from the user's perspective.
>
> We also talked about cases where N=# of hooks > M=# of jobs, that is,
> where some hooks must wait for other hooks to finish executing before
> they can start. In this case, users' callback implementations would
> need to be able to start over from the beginning of the source material,
> and a long-running hook would block short-running hooks from beginning
> (because the long-running hook would be confused by receiving the
> source material on its stdin again).

Yes - this (number of hooks greater than the number of jobs allowed to
run in parallel) was the case in which my suggestion of not having
hook-specific state would not work. The case we were talking about is
when there's a large amount of dynamically generated data to be
transmitted to the hooks' stdins. I was thinking that it would be best
anyway if the callback looped over all hooks as data was generated, but
a single pass is not possible if the number of hooks is greater than the
number of jobs.
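
To make that constraint concrete, here is a toy sketch (hypothetical names, not git's actual run-command API) of why one pass over dynamically generated source material can't serve every hook once hooks outnumber job slots - the hooks queued behind the first batch only start after the one-shot stream is already exhausted:

```python
# Toy model: N hooks, at most M running concurrently, fed from a
# one-shot generator (a single pass over dynamically generated data).
def run_hooks_single_pass(num_hooks, max_jobs, generate_data):
    stream = generate_data()  # one-shot: cannot be rewound or replayed
    received = {h: [] for h in range(num_hooks)}
    # Only the first max_jobs hooks get a job slot up front; the rest
    # must wait for an earlier hook to finish before they can start.
    running = list(range(min(num_hooks, max_jobs)))
    for chunk in stream:
        for hook in running:  # callback loops over currently running hooks
            received[hook].append(chunk)
    # By the time a queued hook gets a slot, the generator is spent, so
    # it sees no data unless the callback can start over from the top.
    return received

result = run_hooks_single_pass(num_hooks=3, max_jobs=2,
                               generate_data=lambda: iter(["a", "b"]))
# hooks 0 and 1 receive both chunks; hook 2, queued behind them, gets none
```

This is why per-hook (per-task) state ends up necessary in that case: each hook needs its own notion of how far into the source material it has read, so the callback can restart the pass for late-starting hooks.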