On Fri, May 24, 2019 at 6:55 AM Duy Nguyen <pclouds@xxxxxxxxx> wrote:
>
> On Thu, May 23, 2019 at 11:51 PM Matheus Tavares Bernardino
> <matheus.bernardino@xxxxxx> wrote:
> >
> > Hi, everyone
> >
> > As one of my first tasks in GSoC, I'm looking to protect the global
> > state in sha1-file.c for future parallelization. Currently, I'm
> > analyzing how to deal with the cached_objects array, which is a small
> > set of in-memory objects that read_object_file() is able to return
> > even though they don't really exist on disk. The only current user of
> > this set is git-blame, which adds a fake commit containing
> > non-committed changes.
> >
> > As it is now, if we start parallelizing blame, cached_objects won't be
> > a problem, since it is written to only once, at the beginning, and
> > read from a couple of times later, with no possible race conditions.
> >
> > But should we make these operations thread-safe for future uses that
> > could involve potential parallel writes and reads too?
> >
> > If so, we have two options:
> > - Make the array thread-local, which would oblige us to replicate
> >   data, or
> > - Protect it with locks, which could impact sequential performance.
> >   We could have a macro here, to skip locking in single-threaded use
> >   cases. But we don't know, a priori, the number of threads that will
> >   want to use the pack access code.
> >
> > Any thoughts on this?
>
> I would go with "that's the problem of the future me". I'll go with a
> simple global (I mean per-object store) mutex.

Thanks for the help, Duy.

By "per-object-store mutex", do you mean having a lock for every
"struct raw_object_store" in the "struct repository"? Maybe I haven't
quite understood what the "object store" is yet.

> After we have a complete picture of how many locks we need, and can
> run some tests to see the amount of lock contention we have (or even
> cache misses, if we have so many locks), then we can start thinking of
> an optimal strategy.
Please correct me if I misunderstand your suggestion: the idea is to
protect the pack access code at a higher level, measure contention, and
then start refining the locks, if needed?

I'm asking because I was going directly for the lower-level protections
(or thread-safe conversions) and planning to build up from there. For
example, this week I was working on eliminating static variables inside
the pack access functions. Do you think this approach is OK, or should
I work on a broader thread-safe conversion first (like a couple of wide
mutexes) and refine it down?

> I mean, this is an implementation detail and can't affect the object
> access API, right? That gives us some breathing room to change stuff
> without preparing for something that we don't need right now (like
> multiple cached_objects writers)

Indeed, it makes sense to leave multiple-writer support to the future,
if it's ever needed. Thanks.