On Tue, Jan 22, 2019 at 5:26 AM Duy Nguyen <pclouds@xxxxxxxxx> wrote:
>
> On Tue, Jan 22, 2019 at 2:28 PM Jeff King <peff@xxxxxxxx> wrote:
> >
> > On Mon, Jan 21, 2019 at 05:02:33PM +0700, Duy Nguyen wrote:
> >
> > > > As I mentioned in the prior thread I think that it will be simpler
> > > > to simply use the existing lock in packing_data instead of moving
> > > > read_mutex. I can go back to simply moving read_mutex to the
> > > > packing_data struct if that that is preferable, though.
> > >
> > > In early iterations of these changes, I think we hit high contention
> > > when sharing the mutex [1]. I don't know if we will hit the same
> > > performance problem again with this patch. It would be great if Elijah
> > > with his zillion core machine could test this out. Otherwise it may be
> > > just safer to keep the two mutexes separate.
> > >
> > > [1] http://public-inbox.org/git/20180720052829.GA3852@xxxxxxxxxxxxxxxxxxxxx/
> >
> > I haven't been following this thread closely, but I still have access to
> > a 40-core machine if you'd like me to time anything.
> >
> > It sounds like _this_ patch is the more fine-grained one. Is the more
> > coarse-grained one already written?
>
> A more fine-grained one would be 'master' where we use two separate
> mutexes for different code. I guess if repack performance with this
> patch is still the same as 'master', we're good to go. You may need to
> lower $GIT_TEST_OE_SIZE and $GIT_TEST_OE_DELTA_SIZE to force more lock
> contention.

I do have a patch prepared which simply moves read_mutex to the
packing_data struct instead (and renames it read_lock for consistency
with the existing mutex named "lock"), but I wanted to wait for the
lock contention testing first. I'm prepared to go either way.

-Patrick
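
P.S. To make the alternative concrete, below is a rough sketch of what
moving read_mutex into struct packing_data could look like. It is only
an illustration, not the prepared patch itself; the wrapper names
packing_data_read_lock()/packing_data_read_unlock() are placeholders I
made up for this mail.

/* Sketch only -- field layout and helper names are illustrative. */
#include <pthread.h>

struct packing_data {
	/* ... existing object and delta bookkeeping fields ... */

	pthread_mutex_t lock;      /* the existing mutex named "lock" */
	pthread_mutex_t read_lock; /* was read_mutex in builtin/pack-objects.c */
};

/*
 * Hypothetical wrappers; callers would take the read lock around
 * object reads instead of using the old file-scope read_mutex.
 */
static inline void packing_data_read_lock(struct packing_data *pdata)
{
	pthread_mutex_lock(&pdata->read_lock);
}

static inline void packing_data_read_unlock(struct packing_data *pdata)
{
	pthread_mutex_unlock(&pdata->read_lock);
}

Initialization of the new mutex would presumably go right next to the
existing lock's (in prepare_packing_data(), if I remember the spot
correctly).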