On Mon, Nov 9, 2015 at 9:55 AM, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> So stop with the "online_cpus()" stuff. And don't base your benchmarks
> purely on the CPU-bound case. Because the CPU-bound case is the case
> that is already generally so good that few people will care all *that*
> deeply.
>
> Many of the things git does are not for "best-case" behavior, but to
> avoid bad "worst-case" situations. Look at things like the index
> preloading (also threaded). The big win there is - again - when the
> stat() calls may need IO. Sure, it can help for CPU use too, but
> especially on Linux, cached "stat()" calls are really quite cheap. The
> big upside is, again, in situations like git repositories over NFS.
>
> In the CPU-intensive case, the threading might make things go from a
> couple of seconds to half a second. Big deal. You're not getting up to
> get a coffee in either case.

Chiming in here, as I have another series in flight doing parallelism
(submodules handled in parallel, including fetching, cloning, and
checking out).

online_cpus() seems to be one of the easiest ballpark estimates for the
power of a system. What I would have liked to use instead is some kind of

    parallel_expect_bottleneck(enum kinds);

with kinds being one of (FS, NETWORK, CPU, MEMORY?), to get an estimated
'good' number of parallel jobs to use.
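To make the idea concrete, here is a rough sketch of what such a helper could look like. This is entirely hypothetical: parallel_expect_bottleneck() does not exist in git, online_cpus() is stubbed out here, and the per-kind heuristics are illustrative guesses, not tuned values.

```c
#include <assert.h>

/* Hypothetical bottleneck kinds; not part of git. */
enum bottleneck_kind {
	BOTTLENECK_FS,
	BOTTLENECK_NETWORK,
	BOTTLENECK_CPU,
	BOTTLENECK_MEMORY
};

/*
 * Stand-in for git's online_cpus(); a real implementation would query
 * the OS, e.g. via sysconf(_SC_NPROCESSORS_ONLN) on POSIX systems.
 */
static int online_cpus(void)
{
	return 4;
}

/*
 * Return a ballpark 'good' number of parallel jobs for a workload
 * expected to bottleneck on the given resource. All numbers below
 * are made-up defaults for illustration.
 */
static int parallel_expect_bottleneck(enum bottleneck_kind kind)
{
	int cpus = online_cpus();

	switch (kind) {
	case BOTTLENECK_CPU:
		/* CPU-bound: one job per core is the classic choice. */
		return cpus;
	case BOTTLENECK_FS:
		/*
		 * IO-bound (e.g. stat() calls that may hit NFS):
		 * oversubscribe so jobs blocked on IO do not leave
		 * the machine idle.
		 */
		return 2 * cpus;
	case BOTTLENECK_NETWORK:
		/*
		 * Network-bound: core count matters little; cap at a
		 * small fixed fan-out to avoid hammering the server.
		 */
		return 5;
	case BOTTLENECK_MEMORY:
	default:
		/* Memory-bound: stay conservative to avoid thrashing. */
		return cpus > 2 ? cpus / 2 : 1;
	}
}
```

A caller would then just ask for the kind of work it is about to do, e.g. parallel_expect_bottleneck(BOTTLENECK_NETWORK) for parallel submodule fetches, without every call site reinventing its own heuristic.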