On Thu, May 06, 2021 at 10:00:49PM -0700, Elijah Newren wrote:

> > > +	>directory-random-file.txt &&
> > > +	# Put this file under directory400/directory399/.../directory1/
> > > +	depth=400 &&
> > > +	for x in $(test_seq 1 $depth); do
> > > +		mkdir "tmpdirectory$x" &&
> > > +		mv directory* "tmpdirectory$x" &&
> > > +		mv "tmpdirectory$x" "directory$x"
> > > +	done &&
> >
> > Is this expensive/slow loop needed because you'd otherwise run afoul
> > of command-line length limits on some platforms if you tried creating
> > the entire mess of directories with a single `mkdir -p`?
>
> The whole point is creating a path long enough that it runs afoul of
> limits, yes.
>
> If we had an alternative way to check whether dir.c actually recursed
> into a directory, then I could dispense with this and just have a
> single directory (and it could be named a single character long for
> that matter too), but I don't know of a good way to do that. (Some
> possibilities I considered along that route are mentioned at
> https://lore.kernel.org/git/CABPp-BF3e+MWQAGb6ER7d5jqjcV=kYqQ2stM_oDyaqvonPPPSw@xxxxxxxxxxxxxx/)

I don't have a better way of checking the dir.c behavior. But I think
the other half of Eric's question was: why can't we do this setup way
more efficiently with "mkdir -p"?

I'd be suspicious that it would work portably because of the long path.
But I think the perl I showed earlier would create it in much less
time:

  $ touch directory-file

  $ time sh -c '
	for x in $(seq 1 400)
	do
		mkdir tmpdirectory$x &&
		mv directory* tmpdirectory$x &&
		mv tmpdirectory$x directory$x
	done
  '

  real	0m2.222s
  user	0m1.481s
  sys	0m0.816s

  $ time perl -e '
	for (reverse 1..400) {
		my $d = "directory$_";
		mkdir($d) and chdir($d) or die "mkdir($d): $!";
	}
	open(my $fh, ">", "some-file");
  '

  real	0m0.010s
  user	0m0.001s
  sys	0m0.009s

-Peff
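
A minimal sketch (not from the thread) of how that perl setup might be
dropped into the test in place of the shell loop. It assumes a
test_expect_success body shaped like the quoted patch; the test title is
made up, and the depth of 400 and the directory-random-file.txt name are
simply carried over from the quoted hunk:

  test_expect_success 'setup deeply nested directory' '
	# Build directory400/directory399/.../directory1/ in a single
	# process instead of running mkdir+mv+mv once per level.
	perl -e '\''
		for (reverse 1..400) {
			my $d = "directory$_";
			mkdir($d) and chdir($d) or die "mkdir($d): $!";
		}
		open(my $fh, ">", "directory-random-file.txt")
			or die "open: $!";
	'\''
  '

The '\'' sequences are only the usual way of nesting single quotes inside
the single-quoted test body, so the perl code reaches perl without being
expanded by the shell.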