Jeff King <peff@xxxxxxxx> writes:

> On Tue, Aug 25, 2015 at 10:28:25AM -0700, Stefan Beller wrote:
>
>> By treating each object as its own task the workflow is easier to follow
>> as the function used in the worker threads doesn't need any control logic
>> any more.
>
> Have you tried running t/perf/p5302 on this?
>
> I seem to get a pretty consistent 2%-ish slowdown, both against git.git
> and linux.git. That's not a lot, but I'm wondering if there is some
> low-hanging fruit in the locking, or in the pattern of work being
> dispatched. Or it may just be noise, but it seems fairly consistent.

The pattern of work dispatch hopefully is the same, no?  add_task()
does the "append at the end" thing and next_task() picks from the
front of the queue.  The original is "we have globally N things, so
far M things have been handled, and we want a new one, so we pick the
(M+1)th one and do it".

The amount of memory used to represent a single task, however, may be
much larger than in the original, with the overhead coming from the
job_list structure and the doubly-linked list.  Because of that
overhead, we may not be able to spin up 30 threads and throw a million
tasks at them using this.  It would be better suited to a pattern in
which an overlord actively creates new tasks while the worker threads
chew on them, using add_task()/dispatch as the medium of communication
between them.
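
To make the contrast concrete, here is a toy single-file sketch of the
two dispatch styles.  This is not the code from Stefan's patch:
counter_pool, counter_dispatch(), queue_pool and the list layout are
made up for illustration, and add_task()/next_task() below only mimic
the names used above.

/*
 * A toy sketch, not the code from the patch under discussion; all
 * struct and function names are made up for illustration.
 * Compile with: cc -pthread sketch.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <pthread.h>

/* Style 1: "we have N things globally, M are done, hand out the (M+1)th". */
struct counter_pool {
	pthread_mutex_t lock;
	size_t next;	/* M: index of the next unhandled task */
	size_t nr;	/* N: total number of tasks */
};

static ssize_t counter_dispatch(struct counter_pool *p)
{
	ssize_t ret = -1;

	pthread_mutex_lock(&p->lock);
	if (p->next < p->nr)
		ret = p->next++;
	pthread_mutex_unlock(&p->lock);
	return ret;
}

/* Style 2: a doubly-linked task queue, one heap allocation per task. */
struct task {
	struct task *prev, *next;
	void *data;	/* per-task payload */
};

struct queue_pool {
	pthread_mutex_t lock;
	struct task head;	/* sentinel node */
};

static void add_task(struct queue_pool *p, void *data)
{
	struct task *t = malloc(sizeof(*t));

	t->data = data;
	pthread_mutex_lock(&p->lock);
	t->prev = p->head.prev;	/* append at the tail... */
	t->next = &p->head;
	p->head.prev->next = t;
	p->head.prev = t;
	pthread_mutex_unlock(&p->lock);
}

static void *next_task(struct queue_pool *p)
{
	struct task *t;
	void *data = NULL;

	pthread_mutex_lock(&p->lock);
	t = p->head.next;	/* ...and pick from the front */
	if (t != &p->head) {
		t->prev->next = t->next;
		t->next->prev = t->prev;
		data = t->data;
		free(t);
	}
	pthread_mutex_unlock(&p->lock);
	return data;
}

/* Single-threaded demo of the two dispatch orders. */
int main(void)
{
	struct counter_pool c;
	struct queue_pool q;
	int items[3] = { 1, 2, 3 };
	void *d;
	ssize_t idx;
	int i;

	pthread_mutex_init(&c.lock, NULL);
	c.next = 0;
	c.nr = 3;
	pthread_mutex_init(&q.lock, NULL);
	q.head.prev = q.head.next = &q.head;

	while ((idx = counter_dispatch(&c)) >= 0)
		printf("counter style handed out task %zd\n", idx);

	for (i = 0; i < 3; i++)
		add_task(&q, &items[i]);
	while ((d = next_task(&q)))
		printf("queue style handed out task %d\n", *(int *)d);

	return 0;
}

With the counter style the only shared per-task state is the index M,
so a million queued tasks cost nothing beyond whatever already
describes them; with the list style each task pays one small heap
allocation plus two pointers, which is the overhead the paragraph
above is pointing at.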