Re: [PATCHv3 07/13] fetch_populated_submodules: use new parallel job processing

Stefan Beller <sbeller@xxxxxxxxxx> writes:

> In a later patch we enable parallel processing of submodules, this
> only adds the possibility for it. So this change should not change
> any user facing behavior.
>
> Signed-off-by: Stefan Beller <sbeller@xxxxxxxxxx>
> ---
>  builtin/fetch.c |   3 +-
>  submodule.c     | 119 +++++++++++++++++++++++++++++++++++++++-----------------
>  submodule.h     |   2 +-
>  3 files changed, 87 insertions(+), 37 deletions(-)
>
> diff --git a/builtin/fetch.c b/builtin/fetch.c
> index ee1f1a9..6620ed0 100644
> --- a/builtin/fetch.c
> +++ b/builtin/fetch.c
> @@ -1217,7 +1217,8 @@ int cmd_fetch(int argc, const char **argv, const char *prefix)
>  		result = fetch_populated_submodules(&options,
>  						    submodule_prefix,
>  						    recurse_submodules,
> -						    verbosity < 0);
> +						    verbosity < 0,
> +						    0);

This one...

>  int fetch_populated_submodules(const struct argv_array *options,
>  			       const char *prefix, int command_line_option,
> -			       int quiet)
> +			       int quiet, int max_parallel_jobs)
>  {
> -	int i, result = 0;
> -	struct child_process cp = CHILD_PROCESS_INIT;

... together with this one, would have been easier to follow if you
had not added a new parameter to the function.  Instead, you could
define a local variable max_parallel_jobs, initialized to 1, in this
function without changing the function signature (and the caller) in
this step, and then make the above two changes in the same commit as
the one that actually enables the feature.

That would be more in line with the stated "only adds the possibility
for it" goal.
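
Something along these lines is what I have in mind (an untested
illustration; the body is elided and only the unchanged signature and
the new local matter here):

int fetch_populated_submodules(const struct argv_array *options,
			       const char *prefix, int command_line_option,
			       int quiet)
{
	/*
	 * Hard-code a single job for now; the later patch that
	 * actually enables the feature turns this into a parameter
	 * (and updates the caller in builtin/fetch.c).
	 */
	int max_parallel_jobs = 1;
	...
}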

As posted, I suspect that by passing 0 to max_parallel_jobs, you are
telling run_processes_parallel_init() to check online_cpus() and run
that many processes in parallel, no?
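
If that is the case, i.e. the init ends up doing something like this
(my guess at the intent, not code quoted from the series):

	if (max_parallel_jobs < 1)
		max_parallel_jobs = online_cpus();

then passing 0 here would already turn parallel fetching on by
default, which is more than "only adds the possibility for it".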

> +int get_next_submodule(void *data, struct child_process *cp,
> +		       struct strbuf *err)
> +{
> +	int ret = 0;
> +	struct submodule_parallel_fetch *spf = data;
> ...
> +			child_process_init(cp);
> +			cp->dir = strbuf_detach(&submodule_path, NULL);
> +			cp->git_cmd = 1;
> +			cp->no_stdout = 1;
> +			cp->no_stdin = 1;

In run-command.c::start_command(), no_stdout triggers dup_devnull(1),
which bypasses the dup2(2, 1) that stdout_to_stderr would otherwise
trigger.  This looks wrong to me.

> +			cp->stdout_to_stderr = 1;
> +			cp->err = -1;

OK, the original left the standard error stream connected to the
invoker's standard error, but now we are capturing it via a pipe.
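
If the intent is to fold the child's output into that single captured
stream, something like the following (untested; it only drops
no_stdout and keeps the fields already shown in the hunk) would let
stdout_to_stderr actually take effect:

	child_process_init(cp);
	cp->dir = strbuf_detach(&submodule_path, NULL);
	cp->git_cmd = 1;
	cp->no_stdin = 1;
	cp->stdout_to_stderr = 1;	/* child's stdout follows its stderr */
	cp->err = -1;			/* both streams captured via this pipe */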


