Re: [PATCH] submodule: implement `module_name` as a builtin helper

Junio C Hamano <gitster@xxxxxxxxx> writes:

>>> ... if
>>> you really want to go the "thread" route, the first thing to try
>>> would be to see if a few places we already use threads for
>>> parallelism (namely, "grep", "pack-objects", "preload-index" and
>>> "index-pack") can be factored out and model your new API around the
>>> commonality among them.
>
> And obviously, doing your pool API around threads will allow you to
> throw future per-thread function that do not involve run_command()
> at all at your API, and it will make it easy to adapt the current
> threaded parts of the system to the API.

Just a few random thoughts before going to bed and going offline for
the weekend...

Eventually, we would want to do "submodule update" of a top-level
project that has 500 submodules underneath, but obviously we would
not want to blindly spawn 500 threads, each of which runs "fetch",
all at the same time.  We'd want to limit the parallelism to a sane
limit (say, 16 or 32), stuff the 500 work units into a queue, and
have that many worker bees grab work units one by one, process
them, and come back to ask for more work.

And we would eventually want to be able to do this even when these
500 submodules are spread across multiple levels of nested
submodules (e.g. top-level may have 8 submodules, and they have 16
nested subsubmodules each on average, each of which may have 4
nested subsubsubmodules on average).  Specifying -j16 at the top
level and apportioning the parallelism to recursive invocations of
"submodule update" in such a way that the overall process is
efficient and without waste would be a bit tricky.

In such a nested submodule case, we may want to instead try to
enumerate these 500 submodules upfront with unbounded parallelism
(e.g. the top-level will ask 4 worker bees to process its immediate
8 submodules, and they each spawn 4 worker bees to process their
immediate 16 subsubmodules, and so on---it is unbounded because we do
not know upfront how deep the nesting is).

Let's call that a recursive module_list.  You would want out of a
recursive module_list:

 - the path to the submodule (or "." for the top-level) to indicate
   where in the nested hierarchy the information came from;

 - the information the flat module_list gives for that location.

Since you already have the module_list() function natively callable
from C, and it is also available via "git submodule--helper module_list",
implementing a recursive module_list would be a good first proof of
concept exercise for your "thread pool" engine.  You can employ the
"dual implementation" trick to call

 - a version that tells the thread to run the native C version of
   module_list(),

 - another version that tells the thread to run_command()
   "submodule--helper module_list" in the top-level and nested
   submodules.

and collect and compare their results and performance.

That will not just be a good proof of concept for the pool
implementation.

Once you have such a recursive module_list, you can use it as a way
to easily obtain such a "unified view" list of all submodules.  That
can be used to stuff a flat work unit queue to implement reasonably
bounded parallelism.

Your recursive "submodule update" implementation could be:

 - Run recursive module_list to stuff the work queue with these 500
   submodules (possibly spread across in top-level and in nested
   submodules, or all 500 in the flat top-level);

 - Start N worker bees and tell them to pick from that work queue,
   each element of which tells them which submodule to process and
   where it resides (either in the top-level project or in a nested
   submodule).

And each work element would essentially be to run "git fetch" in
that submodule directory.

Hmm...




