Re: [PATCH v3 18/23] Makefiles: add and use wildcard "mkdir -p" template

On Wed, Nov 17, 2021 at 12:52:11PM +0100, Ævar Arnfjörð Bjarmason wrote:
> 
> On Wed, Nov 17 2021, Mike Hommey wrote:
> 
> > On Wed, Nov 17, 2021 at 10:26:27AM +0100, Ævar Arnfjörð Bjarmason wrote:
> >> 
> >> On Wed, Nov 17 2021, Mike Hommey wrote:
> >> 
> >> > On Tue, Nov 16, 2021 at 01:00:18PM +0100, Ævar Arnfjörð Bjarmason wrote:
> >> >> Add a template to do the "mkdir -p" of $(@D) (the parent dir of $@)
> >> >> for us, and use it for the "make lint-docs" targets I added in
> >> >> 8650c6298c1 (doc lint: make "lint-docs" non-.PHONY, 2021-10-15).
> >> >> 
> >> >> As seen in 4c64fb5aad9 (Documentation/Makefile: fix lint-docs mkdir
> >> >> dependency, 2021-10-26) maintaining these manual lists of parent
> >> >> directory dependencies is fragile, in addition to being obviously
> >> >> verbose.
> >> >> 
> >> >> I used this pattern at the time because I couldn't find another method
> >> >> than "order-only" prerequisites to avoid doing a "mkdir -p $(@D)" for
> >> >> every file being created, which as noted in [1] would be significantly
> >> >> slower.
> >> >> 
> >> >> But as it turns out we can use this neat trick of only doing a "mkdir
> >> >> -p" if the $(wildcard) macro tells us the path doesn't exist. A re-run
> >> >> of a performance test similar to that noted downthread of [1] in [2]
> >> >> shows that this is faster, in addition to being less verbose and more
> >> >> reliable (this uses my "git-hyperfine" thin wrapper for "hyperfine"[3]):
> >> >> 
> >> >>     $ git hyperfine -L rev HEAD~0,HEAD~1 -b 'make -C Documentation lint-docs' -p 'rm -rf Documentation/.build' 'make -C Documentation lint-docs'
> >> >>     Benchmark 1: make -C Documentation lint-docs' in 'HEAD~0
> >> >>       Time (mean ± σ):      2.129 s ±  0.011 s    [User: 1.840 s, System: 0.321 s]
> >> >>       Range (min … max):    2.121 s …  2.158 s    10 runs
> >> >> 
> >> >>     Benchmark 2: make -C Documentation lint-docs' in 'HEAD~1
> >> >>       Time (mean ± σ):      2.659 s ±  0.002 s    [User: 2.306 s, System: 0.397 s]
> >> >>       Range (min … max):    2.657 s …  2.662 s    10 runs
> >> >> 
> >> >>     Summary
> >> >>       'make -C Documentation lint-docs' in 'HEAD~0' ran
> >> >>         1.25 ± 0.01 times faster than 'make -C Documentation lint-docs' in 'HEAD~1'
> >> >> 
> >> >> So let's use that pattern both for the "lint-docs" target, and a few
> >> >> miscellaneous other targets.
> >> >> 
> >> >> This method of creating parent directories is explicitly racy in that
> >> >> we don't know if we're going to, say, always create a "foo" followed by
> >> >> a "foo/bar" under parallelism, or skip the "foo" because we created
> >> >> "foo/bar" first. In this case it doesn't matter for anything except
> >> >> that we aren't guaranteed to get the same number of rules firing when
> >> >> running make in parallel.
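
For concreteness, a minimal sketch of the $(wildcard) pattern described
above. The template name "mkdir_p_parent" and the lint rule below are
illustrative, not necessarily what the patch itself uses verbatim:

define mkdir_p_parent
$(if $(wildcard $(@D)),,$(shell mkdir -p $(@D)))
endef

.build/lint/%.ok: %.txt
	$(call mkdir_p_parent)
	./lint-one $< >$@

When the parent directory already exists the first recipe line expands
to nothing; otherwise the $(shell) call creates it at recipe-expansion
time, before the command that writes $@ runs.
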
> >> >
> >> > Something else that is racy is that $(wildcard) might be saying the
> >> > directory doesn't exist while there's another make subprocess that has
> >> > already started spawning `mkdir -p` for that directory.
> >> > That doesn't make a huge difference, but you can probably still end up
> >> > with multiple `mkdir -p` runs for the same directory.
> >> >
> >> > I think something like the following could work while avoiding those
> >> > races:
> >> >
> >> > define create_parent_dir_RULE
> >> > $(1): | $(dir $(1)).
> >> > ALL_DIRS += $(dir $(1))
> >> > endef
> >> >
> >> > define create_parent_dir_TARGET
> >> > $(1)/.: $(dir $(1)).
> >> > 	echo mkdir $$(@D)
> >
> > erf, s/echo //
> >
> >> > endef
> >> >
> >> > $(eval $(call create_parent_dir_RULE, first/path/file))
> >> > $(eval $(call create_parent_dir_RULE, second/path/file))
> >> > # ...
> >> >
> >> > $(foreach dir,$(sort $(ALL_DIRS)),$(eval $(call create_parent_dir_TARGET,$(dir:%/=%))))
> >> 
> >> I think the "race" just isn't a problem, and makes managing this much
> >> simpler.
> >> 
> >> I.e. we already rely on "mkdir -p" not failing on an existing directory,
> >> so the case where we redundantly try to create a directory that just got
> >> created by a concurrent process is OK, and as the quoted benchmark shows
> >> is much faster than a similar (though not identical)
> >> dependency-based implementation.
> >> 
> >> I haven't implemented your solution, but it seems to be inherently more
> >> complex.
> >> 
> >> I.e. with the one I've got you just stick the "mkdir if needed"
> >> one-liner in each rule, with yours you'll need to accumulate things in
> >> ALL_DIRS, and have some foreach somewhere or dependency relationship to
> >> create those beforehand if they're nested, no?
> >
> > For each rule, it would also be a one-liner to add above the rule. The rest
> > would be a prelude and an epilogue to stick somewhere in the Makefile.
> 
> How would that epilogue know to handle cases where we're running "clean"
> or whatever thing doesn't want to create the full set of directories
> we've accumulated in ALL_DIRS while parsing the Makefile?

The epilogue only adds rules like:

dir/subdir/.: dir/.
	mkdir $(@D)

As long as "clean" or similar rules don't depend on those directory
targets, nothing will happen.
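
To make that concrete, here is a small self-contained sketch of how the
pieces would fit together, with the s/echo // fix folded in. The file
names ("out1/report", "out2/report") are made up, and the outputs sit
only one directory deep so that the generated "out1/.: ./." rule has an
existing parent; deeper paths would need each intermediate directory
accumulated in ALL_DIRS as well:

define create_parent_dir_RULE
$(1): | $(dir $(1)).
ALL_DIRS += $(dir $(1))
endef

define create_parent_dir_TARGET
$(1)/.: $(dir $(1)).
	mkdir $$(@D)
endef

# One-liners added above the corresponding rules:
$(eval $(call create_parent_dir_RULE,out1/report))
$(eval $(call create_parent_dir_RULE,out2/report))

out1/report out2/report:
	touch $@

# Epilogue: one "<dir>/.: <parent>/." rule per accumulated directory.
$(foreach d,$(sort $(ALL_DIRS)),$(eval $(call create_parent_dir_TARGET,$(d:%/=%))))

# Nothing here depends on a "<dir>/." target, so "make clean" creates
# no directories.
.PHONY: clean
clean:
	rm -rf out1 out2
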


