Re: [PATCH 3/3] docs, parallelism: Rearrange how jobserver reservations are made

On 21/11/2019 01.03, Kees Cook wrote:
> Rasmus correctly observed that the existing jobserver reservation only
> worked if no other build targets were specified. The correct approach
> is to hold the jobserver slots until sphinx has finished. To fix this,
> the following changes are made:
> 
> - refactor (and rename) scripts/jobserver-exec to set an environment
>   variable for the maximally reserved jobserver slots and exec a
>   child, to release the slots on exit.
> 
> - create Documentation/scripts/parallel-wrapper.sh which examines both
>   $PARALLELISM and the detected "-jauto" logic from Documentation/Makefile
>   to decide sphinx's final -j argument.
> 
> - chain these together in Documentation/Makefile
> 
> Suggested-by: Rasmus Villemoes <linux@xxxxxxxxxxxxxxxxxx>
> Link: https://lore.kernel.org/lkml/eb25959a-9ec4-3530-2031-d9d716b40b20@xxxxxxxxxxxxxxxxxx
> Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
> ---
>  Documentation/Makefile                      |  5 +-
>  Documentation/sphinx/parallel-wrapper.sh    | 25 +++++++
>  scripts/{jobserver-count => jobserver-exec} | 73 ++++++++++++---------
>  3 files changed, 68 insertions(+), 35 deletions(-)
>  create mode 100644 Documentation/sphinx/parallel-wrapper.sh
>  rename scripts/{jobserver-count => jobserver-exec} (50%)
>  mode change 100755 => 100644
> 
> diff --git a/Documentation/Makefile b/Documentation/Makefile
> index ce8eb63b523a..30554a2fbdd7 100644
> --- a/Documentation/Makefile
> +++ b/Documentation/Makefile
> @@ -33,8 +33,6 @@ ifeq ($(HAVE_SPHINX),0)
>  
>  else # HAVE_SPHINX
>  
> -export SPHINX_PARALLEL = $(shell perl -e 'open IN,"sphinx-build --version 2>&1 |"; while (<IN>) { if (m/([\d\.]+)/) { print "auto" if ($$1 >= "1.7") } ;} close IN')
> -
>  # User-friendly check for pdflatex and latexmk
>  HAVE_PDFLATEX := $(shell if which $(PDFLATEX) >/dev/null 2>&1; then echo 1; else echo 0; fi)
>  HAVE_LATEXMK := $(shell if which latexmk >/dev/null 2>&1; then echo 1; else echo 0; fi)
> @@ -67,8 +65,9 @@ quiet_cmd_sphinx = SPHINX  $@ --> file://$(abspath $(BUILDDIR)/$3/$4)
>        cmd_sphinx = $(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) $(build)=Documentation/media $2 && \
>  	PYTHONDONTWRITEBYTECODE=1 \
>  	BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath $(srctree)/$(src)/$5/$(SPHINX_CONF)) \
> +	$(PYTHON) $(srctree)/scripts/jobserver-exec \
> +	$(SHELL) $(srctree)/Documentation/sphinx/parallel-wrapper.sh \
>  	$(SPHINXBUILD) \
> -	-j $(shell python $(srctree)/scripts/jobserver-count $(SPHINX_PARALLEL)) \
>  	-b $2 \
>  	-c $(abspath $(srctree)/$(src)) \
>  	-d $(abspath $(BUILDDIR)/.doctrees/$3) \
> diff --git a/Documentation/sphinx/parallel-wrapper.sh b/Documentation/sphinx/parallel-wrapper.sh
> new file mode 100644
> index 000000000000..a416dbfd2025
> --- /dev/null
> +++ b/Documentation/sphinx/parallel-wrapper.sh
> @@ -0,0 +1,25 @@
> +#!/bin/sh
> +# SPDX-License-Identifier: GPL-2.0+
> +#
> +# Figure out if we should follow a specific parallelism from the make
> +# environment (as exported by scripts/jobserver-exec), or fall back to
> +# the "auto" parallelism when "-jN" is not specified at the top-level
> +# "make" invocation.
> +
> +sphinx="$1"
> +shift || true
> +
> +parallel="${PARALLELISM:-1}"
> +if [ ${parallel} -eq 1 ] ; then
> +	auto=$(perl -e 'open IN,"'"$sphinx"' --version 2>&1 |";
> +			while (<IN>) {
> +				if (m/([\d\.]+)/) {
> +					print "auto" if ($1 >= "1.7")
> +				}
> +			}
> +			close IN')
> +	if [ -n "$auto" ] ; then
> +		parallel="$auto"
> +	fi
> +fi
> +exec "$sphinx" "-j$parallel" "$@"

I don't understand this logic. If the parent failed to claim any tokens
(likely because the top make and its descendants are already running 16
gcc processes), we just let sphinx run #cpus jobs in parallel? That
doesn't make sense; it gets us back to "we've now effectively injected K
tokens into the jobserver that weren't there originally".

From the comment above, what you want is to use "auto" if the top
invocation was simply "make docs". Well, I kind of disagree with falling
back to auto in that case; the user can say "make -j8 docs" and the
wrapper is guaranteed to claim them all. But if you really want that,
the jobserver-exec script needs to detect and export "no parallelism
requested at top level" in some way distinct from "PARALLELISM=1",
because the latter is ambiguous.
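A hypothetical way to keep those cases distinct on the jobserver-exec side: only export PARALLELISM when tokens were actually claimed, and let the wrapper treat an unset variable as "no -jN given at the top level". A sketch (the function name and the `jobs` byte-string argument are assumptions, not from the patch):

```python
import os, subprocess

def launch_child(jobs, argv):
    """Export PARALLELISM only when the jobserver handed out tokens;
    a plain "make docs" then leaves it unset, which the wrapper can
    take as permission to fall back to sphinx's -jauto."""
    env = dict(os.environ)
    env.pop('MAKEFLAGS', None)     # don't confuse sub-makes
    env.pop('PARALLELISM', None)   # start from a clean slate
    if jobs:
        env['PARALLELISM'] = str(len(jobs) + 1)  # +1 for our own slot
    return subprocess.call(argv, env=env)
```

The wrapper can then test `[ -n "$PARALLELISM" ]` instead of comparing against 1.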

> diff --git a/scripts/jobserver-count b/scripts/jobserver-exec
> old mode 100755
> new mode 100644
> similarity index 50%
> rename from scripts/jobserver-count
> rename to scripts/jobserver-exec
> index a68a04ad304f..4593b2a1e36d
> --- a/scripts/jobserver-count
> +++ b/scripts/jobserver-exec
> @@ -2,17 +2,16 @@
>  # SPDX-License-Identifier: GPL-2.0+
>  #
>  # This determines how many parallel tasks "make" is expecting, as it is
> -# not exposed via an special variables.
> +# not exposed via a special variable, reserves them all, runs a subprocess
> +# with PARALLELISM environment variable set, and releases the jobs back again.
> +#
>  # https://www.gnu.org/software/make/manual/html_node/POSIX-Jobserver.html#POSIX-Jobserver
>  from __future__ import print_function
>  import os, sys, fcntl, errno
> -
> -# Default parallelism is "1" unless overridden on the command-line.
> -default="1"
> -if len(sys.argv) > 1:
> -	default=sys.argv[1]
> +import subprocess
>  
>  # Extract and prepare jobserver file descriptors from envirnoment.
> +jobs = b""
>  try:
>  	# Fetch the make environment options.
>  	flags = os.environ['MAKEFLAGS']
> @@ -30,31 +29,41 @@ try:
>  	reader = os.open("/proc/self/fd/%d" % (reader), os.O_RDONLY)
>  	flags = fcntl.fcntl(reader, fcntl.F_GETFL)
>  	fcntl.fcntl(reader, fcntl.F_SETFL, flags | os.O_NONBLOCK)
> -except (KeyError, IndexError, ValueError, IOError, OSError) as e:
> -	print(e, file=sys.stderr)
> +
> +	# Read out as many jobserver slots as possible.
> +	while True:
> +		try:
> +			slot = os.read(reader, 1)
> +			jobs += slot

I'd just try to slurp in 8 or 16 tokens at a time; there's no reason to
limit the read to one byte per loop iteration.
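A chunked read along those lines might look like the following sketch (the function name is made up; `reader` is the non-blocking jobserver fd already set up in the patch):

```python
import errno, os

def claim_all_tokens(reader, chunk=16):
    """Drain the jobserver pipe in chunks rather than one byte at a
    time.  Assumes `reader` has already been set O_NONBLOCK."""
    jobs = b""
    while True:
        try:
            piece = os.read(reader, chunk)
            if not piece:       # writer side closed, nothing more to claim
                break
            jobs += piece
        except OSError as e:
            if e.errno == errno.EWOULDBLOCK:
                break           # pipe drained; stop claiming
            raise
    return jobs
```

A short read simply returns whatever is available, so correctness doesn't depend on the chunk size.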

> +		except (OSError, IOError) as e:
> +			if e.errno == errno.EWOULDBLOCK:
> +				# Stop at the end of the jobserver queue.
> +				break
> +			# If something went wrong, give back the jobs.
> +			if len(jobs):
> +				os.write(writer, jobs)
> +			raise e
> +except (KeyError, IndexError, ValueError, OSError, IOError) as e:
>  	# Any missing environment strings or bad fds should result in just
> -	# using the default specified parallelism.
> -	print(default)
> -	sys.exit(0)
> +	# not being parallel.
> +	pass
>  
> -# Read out as many jobserver slots as possible.
> -jobs = b""
> -while True:
> -	try:
> -		slot = os.read(reader, 1)
> -		jobs += slot
> -	except (OSError, IOError) as e:
> -		if e.errno == errno.EWOULDBLOCK:
> -			# Stop when reach the end of the jobserver queue.
> -			break
> -		raise e
> -# Return all the reserved slots.
> -os.write(writer, jobs)
> -
> -# If the jobserver was (impossibly) full or communication failed, use default.
> -if len(jobs) < 1:
> -	print(default)
> -	sys.exit(0)
> -
> -# Report available slots (with a bump for our caller's reserveration).
> -print(len(jobs) + 1)
> +claim = len(jobs)
> +if claim < 1:
> +	# If the jobserver was (impossibly) full or communication failed
> +	# in some way do not use parallelism.
> +	claim = 0

Eh, "claim < 1" is the same as "claim == 0", right? So this doesn't seem
to do much. What does seem to be missing is that after you write back
the tokens in the error case above (os.write(writer, jobs)), jobs is
not reset to the empty string, so the final release will return the
same tokens a second time. That needs to be done either there or in the
outer exception handler (where you currently just have a "pass").
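One way to make double-returning impossible is to funnel every release through a helper that forgets the tokens as it writes them back (a sketch; the helper name is invented):

```python
import os

def release_tokens(writer, jobs):
    """Return claimed tokens to the jobserver and forget them, so a
    second release on the normal exit path becomes a no-op instead of
    injecting duplicate tokens into the jobserver."""
    if jobs:
        os.write(writer, jobs)
    return b""
```

The call sites then all read `jobs = release_tokens(writer, jobs)`, in the inner error path as well as at exit.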

> +# Launch command with a bump for our caller's reservation,
> +# since we're just going to sit here blocked on our child.
> +claim += 1
> +
> +os.unsetenv('MAKEFLAGS')
> +os.environ['PARALLELISM'] = '%d' % (claim)
> +rc = subprocess.call(sys.argv[1:])
> +
> +# Return all the actually reserved slots.
> +if len(jobs):
> +	os.write(writer, jobs)
> +
> +sys.exit(rc)

What happens if the child dies from a signal? Will this correctly
forward that information?

Similarly (and this is the harder problem), what happens when our
parent wants to send its child a signal saying "stop what you're doing,
return the tokens, brush your teeth and go to bed"? We should forward
that signal to the real job instead of just dying, which would lose
track of the tokens we've claimed and orphan the child.
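A sketch of both concerns, forwarding SIGINT/SIGTERM to the child and reproducing a signal death in our own exit status so the parent sees it too (the function name and structure are assumptions, not code from the patch):

```python
import os, signal, subprocess

def run_forwarding_signals(argv):
    """Run argv, relaying termination signals to the child, and if the
    child dies from a signal, re-raise that signal against ourselves
    so our parent sees the same cause of death."""
    child = subprocess.Popen(argv)

    def forward(signum, frame):
        child.send_signal(signum)   # relay instead of dying ourselves

    for sig in (signal.SIGINT, signal.SIGTERM):
        signal.signal(sig, forward)

    rc = child.wait()
    # ... this is where the claimed tokens would be written back ...
    if rc < 0:
        # Popen reports death-by-signal as -signum: restore the default
        # handler and re-raise it so our parent can tell what happened.
        signal.signal(-rc, signal.SIG_DFL)
        os.kill(os.getpid(), -rc)
    return rc
```

Releasing the tokens before the re-raise is what keeps them from being lost when we propagate the signal.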

Rasmus


