Re: [PATCH 16/16] write_sha1_file: freshen existing objects

On Fri, Oct 03, 2014 at 02:29:58PM -0700, Junio C Hamano wrote:

> > We can solve this by "freshening" objects that we avoid
> > writing by updating their mtime. The algorithm for doing so
> > is essentially the same as that of has_sha1_file. Therefore
> > we provide a new (static) interface "check_and_freshen",
> > which finds and optionally freshens the object. It's trivial
> > to implement freshening and simple checking by tweaking a
> > single parameter.
> 
> An old referent of a recent unreachable object may be in a pack.  Is it
> expected that the same pack will have many similar old objects (in
> other words, is it worth trying to optimize check-and-freshen by
> bypassing access() and utime(), perhaps by keeping a "freshened in
> this process already" flag in struct packed_git)?

Thanks for reminding me. I considered something like that early on and
then completely forgot to revisit it. I do not have numbers either way
on whether it is an optimization worth doing. On the one hand, it is
very easy to do.  On the other, it probably does not make a big
difference; we are literally skipping the write of an entire object, and
have just run a complete sha1 over the contents. A single utime() call
probably is not a big deal.

> Could check-and-freshen-nonlocal() ever be called with freshen set
> to true?  Should it be?  In other words, should we be mucking with
> objects in other people's repositories with utime()?

Yes, it can, and I think the answer to "should" is "yes" for safety,
though I agree it feels a little hacky. I did explicitly write it so
that we fail-safe when freshening doesn't work. That is, if we try to
freshen an object that is in an alternate and we cannot (e.g., because
we don't have write access), we'll fall back to writing out a new loose
object locally.

That's very much the safest thing to do, but obviously it performs less
well. Again, this is the code path where we _would have_ written out the
object anyway, so it might not be that bad. But I don't know to what
degree the current code relies on that optimization for reasonable
performance. E.g., if you clone from a read-only alternate and then try
to `git write-tree` immediately on the index, will we literally make a
full copy of each tree object?

Hmm, that should be easy to test...

  $ su - nobody
  $ git clone -s ~peff/compile/linux /tmp/foo
  $ cd /tmp/foo

  $ git count-objects
  0 objects, 0 kilobytes
  $ git write-tree
  $ git count-objects
  0 objects, 0 kilobytes

So far so good. Let's blow away the cache-tree to make sure...

  $ rm .git/index
  $ git read-tree HEAD
  $ git write-tree
  $ git count-objects
  0 objects, 0 kilobytes

So that's promising. But it's far from a proof that there isn't some
other code path that will be negatively impacted.

-Peff