Re: git-fetching from a big repository is slow

Johannes Schindelin <Johannes.Schindelin@xxxxxx> wrote:
> On Thu, 14 Dec 2006, Shawn Pearce wrote:
> > Geert Bosch <bosch@xxxxxxxxxxx> wrote:
> > >    if (compressed_size > expanded_size / 4 * 3 + 1024) {
> > >      /* don't try to deltify if blob doesn't compress well */
> > >      return ...;
> > >    }
> > 
> > And yet I get good delta compression on a number of ZIP-formatted
> > files which don't get good additional zlib compression (<3%).
> > Doing the above would cause those packfiles to explode to about
> > 10x their current size.
> 
> A pity. Geert's proposal sounded good to me.
> 
> However, there's got to be a way to cut short the search for a delta 
> base/deltification when a certain (maybe even configurable) amount of time 
> has been spent on it.

I'm not sure time is the best rule there.
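
Not that a cutoff would be hard to wire in; a minimal sketch, where
the names and the try_delta() signature are simplified stand-ins
rather than the real pack-objects code:

    #include <time.h>

    struct obj_info;                    /* per-object bookkeeping */
    extern int try_delta(struct obj_info *trg, struct obj_info *base);

    /* Stop scanning the delta window once a time budget is spent. */
    static int find_base(struct obj_info *trg, struct obj_info **window,
                         int n, double max_seconds)
    {
        clock_t start = clock();
        int i;

        for (i = 0; i < n; i++) {
            if ((double)(clock() - start) / CLOCKS_PER_SEC > max_seconds)
                return -1;          /* budget spent, give up */
            if (try_delta(trg, window[i]) == 0)
                return i;           /* found a usable base */
        }
        return -1;                  /* no base in the window */
    }

The catch is that a wall-clock budget trips at different points on
different machines, so the same repository could pack differently
from host to host.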

Maybe if the object is large (e.g. over 512 KiB, or some configured
limit) and did not compress well the last time we deflated it
(e.g. Geert's rule above), then we should only try to delta it
against another object whose hinted filename closely or exactly
matches and whose size is very close; and if the file appears to be
binary rather than text, we should make far fewer attempts at
matching hunks between the two files.
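
In code, that check might look something like this sketch; the
struct, field names, and thresholds are made up for illustration,
not git internals:

    #include <stddef.h>
    #include <string.h>

    #define BIG_BLOB_LIMIT  (512 * 1024)    /* or a config knob */

    struct obj_info {
        size_t size;            /* expanded size */
        size_t packed_size;     /* size after deflate */
        const char *name_hint;  /* filename hint, may be NULL */
        int looks_binary;       /* result of a binary sniff */
    };

    /* Geert's rule: did the blob barely compress? */
    static int compresses_poorly(const struct obj_info *o)
    {
        return o->packed_size > o->size / 4 * 3 + 1024;
    }

    /* Is a delta attempt of trg against base worth making at all? */
    static int worth_trying(const struct obj_info *trg,
                            const struct obj_info *base)
    {
        size_t lo, hi;

        if (trg->size <= BIG_BLOB_LIMIT || !compresses_poorly(trg))
            return 1;   /* small or compressible: search as usual */

        /* Big and near-incompressible: insist on a close match. */
        if (!trg->name_hint || !base->name_hint ||
            strcmp(trg->name_hint, base->name_hint))
            return 0;

        lo = trg->size - trg->size / 10;   /* within ~10%, made up */
        hi = trg->size + trg->size / 10;
        return base->size >= lo && base->size <= hi;
    }

The "fewer attempts on binary files" part would then just mean
scaling down the delta window or depth whenever looks_binary is set.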

I'm OK with a small increase in packfile size from the slightly less
optimal delta base selection something like the above would give on
really large binary files, but 10x is insane.

-- 
Shawn.
