Re: [PATCH v2] Custom compression levels for objects and packs

On Tue, 8 May 2007, Dana How wrote:

> Since max-pack-size has been out there since April 4, and the first
> acceptable version appeared on May 1 (as suggested by the absence of
> comments), I didn't realize it was a "questionable series".
> 
> I think it should be straightforward for me to re-submit this
> based on current master.

Since this patch is simpler, it could be merged much faster, ahead of 
the pack limit series.

> > > +	/* differing core & pack compression when loose object -> must recompress */
> > > +	if (!entry->in_pack && pack_compression_level != zlib_compression_level)
> > > +		to_reuse = 0;
> > > +	else
> > I am not sure if that is worth it, as you do not know whether the
> > loose object you are looking at was compressed with the current
> > settings.
> You do not know for certain, that is correct.  However, setting
> unequal compression levels in the config signals that you care
> differently about the two cases.  (For me, I want the compression
> investment to correspond to the expected lifetime of the file.)
> Also, *if* we have the knobs we want in the config file, I don't
> think we're going to be changing these settings all that often.
> 
> If I didn't have this check forcing recompression in the pack, then
> in the absence of deltification each object would enter the pack by
> being copied (in the preceding code block), and pack.compression
> would have little effect.  I actually experienced this the very
> first time I imported a large dataset into git: I was trying to
> achieve the effect of this patch by changing core.compression
> dynamically, and was mystified for a while by the result.
> 
> Thus, if core.loosecompression is set to speed up git-add, I should
> take the time to recompress the object when packing if
> pack.compression is different (of course, the hit of not doing so is
> lessened by deltification, which forces a new compression).

Right.  And this also depends on whether you have core.legacyheaders 
set to false.
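
For concreteness, the knobs under discussion would sit together in 
.git/config roughly like this (core.loosecompression and 
pack.compression come from Dana's series; the values here are only 
illustrative):

	[core]
		legacyheaders = false
		; cheap compression for short-lived loose objects
		loosecompression = 1
	[pack]
		; spend more effort on long-lived pack data
		compression = 9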

And the whole purpose of setting core.legacyheaders to false is 
exactly to allow loose objects to be copied straight into the pack.  
That should take priority over mismatched compression levels, IMHO.
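
The straight copy works because a new-style loose object starts with 
the same type/size header a pack entry uses, followed by the deflated 
data.  That header is produced by something like the following (a 
sketch of encode_header() from builtin-pack-objects.c; take the exact 
name and signature as approximate):

	/*
	 * The low 4 bits of the first byte hold the low bits of the
	 * size, bits 4-6 hold the object type, and the high bit means
	 * "more size bytes follow", 7 bits at a time.
	 */
	static unsigned long encode_header(enum object_type type,
					   unsigned long size,
					   unsigned char *hdr)
	{
		int n = 1;
		unsigned char c;

		c = (type << 4) | (size & 15);
		size >>= 4;
		while (size) {
			*hdr++ = c | 0x80;
			c = size & 0x7f;
			size >>= 7;
			n++;
		}
		*hdr = c;
		return n;
	}

Since the loose file already begins with such a header, pack-objects 
can append its contents to the pack nearly verbatim instead of 
inflating and re-deflating.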

Also, when repacking, delta reuse does not recompress objects, for the 
same reason, regardless of the compression level in effect when they 
were first compressed.  The same argument goes for delta depth.
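
In code terms, the reuse path amounts to something like this (a sketch 
of the to_reuse branch in write_object() from builtin-pack-objects.c; 
helper names and the surrounding locals, reused_hdrlen in particular, 
are approximate):

	if (to_reuse) {
		struct pack_window *w_curs = NULL;

		/* write a fresh pack header for the entry ... */
		hdrlen = encode_header(entry->in_pack_type,
				       entry->size, header);
		sha1write(f, header, hdrlen);

		/*
		 * ... then copy the already-deflated bytes straight
		 * out of the source pack.  deflate() is never called,
		 * so whatever compression level (and delta depth)
		 * produced them is carried over unchanged.
		 */
		copy_pack_data(f, entry->in_pack, &w_curs,
			       entry->in_pack_offset + reused_hdrlen,
			       datalen);
		unuse_pack(&w_curs);
	}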

So if you really want to enforce a compression level on the whole 
pack, you'll have to use -f with git-repack, or leave 
core.legacyheaders unset.
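
For example, to repack everything at the configured level:

	$ git-config pack.compression 9
	$ git-repack -a -d -f

Here -f passes --no-reuse-delta down to git-pack-objects, so deltas 
are recomputed (and thus deflated anew) instead of being copied from 
the existing pack.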


Nicolas
