Re: Protecting old temporary objects being reused from concurrent "git gc"?

On Tue, Nov 15, 2016 at 12:33:04PM -0500, Matt McCutchen wrote:

> On Tue, 2016-11-15 at 12:06 -0500, Jeff King wrote:
> >  - when an object write is optimized out because we already have the
> >    object, git will update the mtime on the file (loose object or
> >    packfile) to freshen it
> 
> FWIW, I am not seeing this happen when I do "git read-tree --prefix"
> followed by "git write-tree" using the current master (3ab2281).  See
> the attached test script.

The optimization I'm thinking about is the one from write_sha1_file(),
which learned to freshen in 33d4221c7 (write_sha1_file: freshen existing
objects, 2014-10-15).

I suspect the issue is that read-tree populates the cache-tree index
extension, and then write-tree omits the object write before it even
gets to write_sha1_file(). The fix is probably for it to call one of the
freshen() functions (perhaps just replacing has_sha1_file() with
check_and_freshen(), but I haven't looked).

I'd definitely welcome patches in this area.

> OK.  I'll write a patch to add a summary of this information to the
> git-gc man page.

Sounds like a good idea. Thanks.

-Peff
