Re: [BUG] add_again() off-by-one error in custom format

On Wed, Jun 14, 2017 at 08:24:25PM +0200, René Scharfe wrote:

> > I think the real question is how likely people use more than one
> > occurrence of the same thing in their custom format, and how deeply
> > they care that --format='%h %h' costs more than --format='%h'.  The
> > cost won't of course be double (because the main traversal has a
> > cost even without any output), but it would be rather unreasonable
> > to expect --format='%h %h %h %h %h' to cost the same as --format='%h';
> > after all, Git is doing more for them ;-)
> 
> The answer to the first half is obviously "very likely" -- otherwise
> this bug wouldn't have been found, right? :)
> 
> Regarding the question of how bad a 50% slowdown for a second %h
would be: No idea.  If run interactively it may not even be noticeable
> because the user can read the first few lines in less while the rest
> is prepared in the background.  We don't have a perf test for formats
> with duplicate short hashes, so we don't promise anything, right? :)

One interesting thing is that the cost of finding short hashes very much
depends on your loose object setup. I timed:

  git log --format=%H >/dev/null

versus

  git log --format=%h >/dev/null

on git.git. It went from about 400ms to about 800ms. But then I noticed
I had a lot of loose object directories, and ran "git gc --prune=now".
Afterwards, my timings were more like 380ms and 460ms.

The difference is that in the "before" case, we actually opened each
directory and ran getdents(). But after gc, the directories are gone
entirely and open() fails. We also have to do a linear walk through the
objects in each directory, since the contents are not sorted.
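
To illustrate with a rough C sketch (hypothetical code, not what
sha1_name.c actually does; count_prefix_matches() is a made-up name):
a uniqueness check has to touch every entry in the fanout directory,
because readdir() hands names back in arbitrary order.

  #include <dirent.h>
  #include <string.h>

  /*
   * Count loose objects in one fanout directory (e.g.
   * ".git/objects/de") whose filename starts with the remaining hex
   * digits of the abbreviation. The entries are unsorted, so we can
   * neither binary-search nor stop early on a miss.
   */
  static int count_prefix_matches(const char *fanout_dir, const char *prefix)
  {
          DIR *dir = opendir(fanout_dir);
          struct dirent *de;
          int matches = 0;

          if (!dir)
                  return 0; /* the cheap post-gc case: open() fails fast */
          while ((de = readdir(dir)) != NULL)
                  if (!strncmp(de->d_name, prefix, strlen(prefix)))
                          matches++;
          closedir(dir);
          return matches;
  }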

So I wonder if it is worth trying to optimize the short-sha1 computation
in the first place. Double-%h aside, that would make _everything_
faster, including --oneline.

I'm not really sure how, though, short of caching the directory
contents. That opens up questions of whether and when to invalidate the
cache. If the cache were _just_ about short hashes, it might be OK to
just assume that it remains valid through the length of the program (so
worst case, a simultaneous write might mean that we generate a sha1
which just became ambiguous, but that's generally going to be racy
anyway).
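
As a strawman (all names invented, not a proposed patch), the cache
could slurp each fanout directory once, sort the entries, and
binary-search subsequent lookups, trusting the result for the life of
the process:

  #include <dirent.h>
  #include <stdlib.h>
  #include <string.h>

  /* One cache slot per fanout directory; assume zero-initialized. */
  struct dir_cache {
          char **names;   /* sorted loose-object filenames */
          size_t nr;
          int loaded;     /* set on first use, never invalidated */
  };

  static int cmp_name(const void *a, const void *b)
  {
          return strcmp(*(const char **)a, *(const char **)b);
  }

  static void load_dir_cache(struct dir_cache *c, const char *path)
  {
          DIR *dir = opendir(path);
          struct dirent *de;
          size_t alloc = 0;

          c->loaded = 1;  /* a missing directory is cached as empty */
          if (!dir)
                  return;
          while ((de = readdir(dir)) != NULL) {
                  if (de->d_name[0] == '.')
                          continue;       /* skip "." and ".." */
                  if (c->nr == alloc) {
                          alloc = alloc ? 2 * alloc : 64;
                          c->names = realloc(c->names,
                                             alloc * sizeof(*c->names));
                  }
                  c->names[c->nr++] = strdup(de->d_name);
          }
          closedir(dir);
          qsort(c->names, c->nr, sizeof(*c->names), cmp_name);
  }

  /*
   * Return 0, 1, or 2 for "no match", "unique", "ambiguous"; the
   * sorted array lets us binary-search instead of walking everything.
   */
  static int prefix_matches(struct dir_cache *c, const char *prefix)
  {
          size_t len = strlen(prefix), lo = 0, hi = c->nr;
          int n = 0;

          while (lo < hi) {       /* lower bound of the prefix range */
                  size_t mid = lo + (hi - lo) / 2;
                  if (strncmp(c->names[mid], prefix, len) < 0)
                          lo = mid + 1;
                  else
                          hi = mid;
          }
          while (lo < c->nr && n < 2 &&
                 !strncmp(c->names[lo], prefix, len))
                  n++, lo++;
          return n;
  }

Each lookup then costs one readdir() pass per directory per process
plus a binary search, instead of a full getdents() walk per lookup.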

The other downside of course is that we'd spend RAM on it. We could
bound the size of the cache, I suppose.

-Peff


