On Thu, Jun 15, 2017 at 01:33:34PM +0200, René Scharfe wrote:

> > The difference is that in the "before" case, we actually opened each
> > directory and ran getdents(). But after gc, the directories are gone
> > totally and open() fails. We also have to do a linear walk through
> > the objects in each directory, since the contents are sorted.
>
> Do you mean "unsorted"?

Er yeah, sorry, I meant to say "aren't".

> > I'm not really sure how, though, short of caching the directory
> > contents. That opens up questions of whether and when to invalidate
> > the cache. If the cache were _just_ about short hashes, it might be
> > OK to just assume that it remains valid through the length of the
> > program (so worst case, a simultaneous write might mean that we
> > generate a sha1 which just became ambiguous, but that's generally
> > going to be racy anyway).
> >
> > The other downside of course is that we'd spend RAM on it. We could
> > bound the size of the cache, I suppose.
>
> You mean like an in-memory pack index for loose objects? In order to
> avoid the readdir call in sha1_name.c::find_short_object_filename()?
> We'd only need to keep the hashes of found objects. An oid_array
> would be quite compact.

Yes, that's what I was thinking of.

> Non-racy writes inside a process should not be ignored (write, read
> later) -- e.g. checkout does something like that.

Ideally, yes. Though thinking on this more, it's racy even today,
because we rely on the in-memory packed_git list. We usually re-check
the directory only when we look for an object and it's missing. So any
new packs which have been added while the process runs will be missed
when doing short-sha1 lookups (and actually, find_short_packed_object
does not even seem to do the usual retry-on-miss).

A normal process like "checkout" isn't writing new packs, but a
simultaneous repack could be migrating its new objects to a pack behind
its back. (It also _can_ write packs, but only for large blobs).

Given that we are talking about finding "unique" abbreviations here,
and that those abbreviations can become invalidated immediately anyway
as more objects are added (even by the same process), I'm not sure we
need to strive for absolute accuracy.

> Can we trust object directory time stamps for cache invalidation?

I think it would work on POSIX-ish systems, since loose object changes
always involve new files, not modifications of existing ones. I don't
know if there are systems that don't update directory mtimes even for
newly added files.

That would still be a stat() per request. I'd be curious how the
timing compares to the opendir/readdir that happens now.

-Peff
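
PS Since we're talking about concrete shapes: below is roughly what I
had in mind for the readdir-once cache, as standalone C rather than
actual git code. Completely untested, and the names (struct
loose_cache, fill_loose_cache, count_prefix_matches) are made up for
illustration; a real version would presumably just be an oid_array as
you say.

  #include <dirent.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define HEX_LEN 40 /* sha1 */

  struct loose_cache {
      char (*names)[HEX_LEN + 1]; /* sorted full hex names */
      size_t nr, alloc;
  };

  static int cmp_name(const void *a, const void *b)
  {
      return strcmp(a, b);
  }

  /* One readdir() pass over objects/00 .. objects/ff; error
   * handling mostly omitted. */
  static void fill_loose_cache(struct loose_cache *c, const char *objdir)
  {
      int subdir;

      for (subdir = 0; subdir < 256; subdir++) {
          char path[4096];
          DIR *dir;
          struct dirent *de;

          snprintf(path, sizeof(path), "%s/%02x", objdir, subdir);
          dir = opendir(path);
          if (!dir)
              continue; /* e.g. pruned away by gc */
          while ((de = readdir(dir))) {
              if (strlen(de->d_name) != HEX_LEN - 2)
                  continue; /* ".", "..", tmp files */
              if (c->nr == c->alloc) {
                  c->alloc = c->alloc ? 2 * c->alloc : 1024;
                  c->names = realloc(c->names,
                                     c->alloc * sizeof(*c->names));
              }
              snprintf(c->names[c->nr++], HEX_LEN + 1,
                       "%02x%s", subdir, de->d_name);
          }
          closedir(dir);
      }
      if (c->nr)
          qsort(c->names, c->nr, sizeof(*c->names), cmp_name);
  }

  /* How many cached names start with this hex prefix?
   * 0 = no loose match, 1 = unique, 2+ = ambiguous. */
  static size_t count_prefix_matches(const struct loose_cache *c,
                                     const char *prefix)
  {
      size_t len = strlen(prefix);
      size_t lo = 0, hi = c->nr, n = 0;

      while (lo < hi) { /* lower bound via binary search */
          size_t mid = lo + (hi - lo) / 2;
          if (strncmp(c->names[mid], prefix, len) < 0)
              lo = mid + 1;
          else
              hi = mid;
      }
      while (lo + n < c->nr && !strncmp(c->names[lo + n], prefix, len))
          n++;
      return n;
  }

Usage would be: struct loose_cache c = { 0 };
fill_loose_cache(&c, ".git/objects"); and then one
count_prefix_matches(&c, "deadbe") per lookup. The point is just that
we pay the 256 opendir/readdir passes once, and every abbreviation
check after that is an in-memory binary search.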
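The "usual retry-on-miss" I mentioned is more or less this shape
(illustrative only; reprepare_packed_git() is the real re-scan entry
point, but the lookup helper lives inside sha1_file.c, so don't take
the exact signatures as gospel):

  /*
   * Illustrative only: the retry-on-miss pattern around the
   * packed_git list. On a lookup miss, re-scan objects/pack once to
   * pick up packs added since we started, then try again. This is
   * what find_short_packed_object appears to skip.
   */
  static int find_pack_entry_with_retry(const unsigned char *sha1,
                                        struct pack_entry *e)
  {
      if (find_pack_entry(sha1, e))
          return 1;
      reprepare_packed_git(); /* pick up newly-added packs */
      return find_pack_entry(sha1, e);
  }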
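And the stat()-based invalidation from your last question could be as
cheap as one syscall per fanout directory, something like this (again
a sketch; cached_mtime and loose_cache_valid are made up, and it
assumes the filesystem bumps the directory mtime on file creation, per
the caveat above):

  #include <sys/types.h>
  #include <sys/stat.h>
  #include <stdio.h>

  static time_t cached_mtime[256]; /* one per objects/XX fanout dir */

  /* Is the cached listing for objects/XX still trustworthy? */
  static int loose_cache_valid(const char *objdir, int subdir)
  {
      char path[4096];
      struct stat st;

      snprintf(path, sizeof(path), "%s/%02x", objdir, subdir);
      if (stat(path, &st) < 0)
          return 0; /* directory vanished, e.g. after gc */
      return st.st_mtime == cached_mtime[subdir];
  }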