Hi Ævar,

On Thu, Nov 19, 2020 at 10:01:07AM +0100, Ævar Arnfjörð Bjarmason wrote:
>
> > The major downside is that detecting the file system type is quite
> > platform-dependent, so there is no simple and portable solution. (Also,
> > I'm not sure if the optimal number of workers would be the same on
> > different OSes). But we decided to give it a try, so this is a
> > rough prototype that would work for Linux:
> > https://github.com/matheustavares/git/commit/2e2c787e2a1742fed8c35dba185b7cd208603de9
>
> I'm not intrinsically opposed to hardcoding some "nr_threads = is_nfs()
> ? x : y" as a stopgap.
>
> I do think we should be thinking about a sustainable way of doing this
> sort of thing; this method of testing once and hardcoding something
> isn't a good approach.
>
> It doesn't anticipate all sorts of different setups, e.g. in this case
> NFS is not a FS but a protocol, and there are probably going to be
> implementations where parallel is much worse due to a quirk of the
> implementation.
>
> I think integrating an optimization run with the relatively new
> git-maintenance is a better way forward.
>
> You'd configure e.g.:
>
>   maintenance.performanceTests.enabled=true
>   maintenance.performanceTests.writeConfig=true
>
> Which would run e.g.:
>
>   git config --type bool core.untrackedCache $(git update-index --test-untracked-cache && echo true || echo false)
>   git config checkout.workers $(git maintenance--helper auto-discover-config checkout.workers)
>
> Such an implementation can be really basic at first, or even just punt
> on the test and use your current "is it NFS?" check.
>
> But I think we should be moving to some helper that does the actual test
> locally when asked/configured by the user, so we're not making a bunch
> of guesses in advance about the size/shape of the repository, OS/nfs/fs
> etc.

I like this idea as something that will give the best configuration for a
given repository. At the same time, I know from working with customers for
a long time that most users stick with the default settings of almost any
application, so default configurations matter a lot. The ideal experience,
in my view, is that a clone or checkout automatically benefits from
parallel checkout, even if it is the first checkout into a new repository.

Maybe both ideas could be combined? We could have a reasonable heuristic
based on the file system type (and maybe the number of CPUs) that gives
most of the benefits of parallel checkout, while still being a sensible
compromise that works across different NFS servers and file systems (a
very rough sketch of what I mean is appended below). Power users who want
more aggressive tuning could then run the maintenance command that
measures file system performance and comes up with an optimal value for
checkout.workers.

Regards,
Geert
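
To illustrate the kind of heuristic I mean, here is a minimal, untested
sketch as a standalone Linux-only program. The specific worker counts and
the CPU cap are placeholders I made up for illustration and would have to
be confirmed by benchmarks; the statfs()-based NFS check is in the spirit
of Matheus's prototype, but the code below is not taken from it.

#include <stdio.h>
#include <unistd.h>
#include <sys/vfs.h>
#include <linux/magic.h>

/*
 * Pick a default checkout.workers value from the file system type
 * and the number of online CPUs. All cutoffs below are placeholders.
 */
static int default_checkout_workers(const char *path)
{
    struct statfs fs;
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);

    if (ncpus < 1)
        ncpus = 1;

    if (statfs(path, &fs) < 0)
        return 1; /* unknown file system: stay sequential */

    if (fs.f_type == NFS_SUPER_MAGIC)
        return 10; /* network fs: more workers to hide latency (made-up value) */

    /* local fs: a few workers, capped by the CPU count (made-up cap) */
    return ncpus < 4 ? (int)ncpus : 4;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";

    printf("checkout.workers = %d\n", default_checkout_workers(path));
    return 0;
}

The point is only that such a heuristic stays cheap (a single statfs()
call), so it could run on every clone or checkout without measurable
overhead, while the maintenance-based measurement remains the opt-in path
for more precise tuning.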