On 14.02.2012 04:34, Junio C Hamano wrote:
> Heiko Voigt <hvoigt@xxxxxxxxxx> writes:
>
>> diff --git a/submodule.c b/submodule.c
>> index 3c714c2..ff0cfd8 100644
>> --- a/submodule.c
>> +++ b/submodule.c
>> @@ -411,6 +411,54 @@ int check_submodule_needs_pushing(unsigned char new_sha1[20],
>>  	return needs_pushing->nr;
>>  }
>>  
>> +static int push_submodule(const char *path)
>> +{
>> +	if (add_submodule_odb(path))
>> +		return 1;
>> +
>> +	if (for_each_remote_ref_submodule(path, has_remote, NULL) > 0) {
>> +		struct child_process cp;
>> +		const char *argv[] = {"push", NULL};
>> +
>> +		memset(&cp, 0, sizeof(cp));
>> +		cp.argv = argv;
>> +		cp.env = local_repo_env;
>> +		cp.git_cmd = 1;
>> +		cp.no_stdin = 1;
>> +		cp.dir = path;
>> +		if (run_command(&cp))
>> +			return 0;
>> +		close(cp.out);
>> +	}
>> +
>> +	return 1;
>> +}
>
> Hmm, this makes me wonder if we fire subprocesses and have them run in
> parallel (to a reasonably limited parallelism), it might make the overall
> user experience more pleasant, and if we did the same on the fetching
> side, it would be even nicer.

Yeah, I had the same idea and did some experiments when working on fetch
some time ago.

> We would need to keep track of children and after firing a handful of them
> we would need to start waiting for some to finish and collect their exit
> status before firing more, and at the end we would need to wait for the
> remaining ones and find how each one of them did before returning from
> push_unpushed_submodules(). If we were to do so, what are the missing
> support we would need from the run_command() subsystem?

We would not only have to collect the exit status but also the output
lines. You don't want to see the output of multiple fetches or pushes
mixed together, so it makes sense to defer that until the command has
exited and then print everything at once.
The interesting part I couldn't come up with an easy solution for is
preserving the output order between the stdout and stderr lines, since
they contain different parts of the progress output and would look
strange when shuffled around.

And I saw that sometimes parallel fetches took way longer than doing
them sequentially (in my case because of strange DNS behavior of my DSL
router), so we would definitely want a config option for that (maybe
setting the maximum number of simultaneous jobs to be used).

But don't get me wrong, I'm all for having that feature! :-)