Re: [RFC] Optimize diff-delta.c

On 2007-05-01 16:05:24, Nicolas Pitre wrote:
> On Tue, 1 May 2007, Martin Koegler wrote:
> Right.  I think it would be a good idea to extend the delta format as 
> well to allow for larger offsets in pack v4.

Is git://repo.or.cz/git/fastimport.git#sp/pack4 the current version of
pack v4 efforts?

> > The delta index has approximately the same size in memory as the
> > uncompressed blob ((blob size) / 16 * sizeof(index_entry)).
> 
> One thing that could be done with really large blobs is to create a 
> sparser index, i.e. have a larger step than 16.  Because the delta match 
> loop scans backward after a match the sparse index shouldn't affect 
> compression that much on large blobs and the index could be 
> significantly smaller.
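
Something along these lines, perhaps (untested sketch, the names are
made up and this is not the actual diff-delta.c code): derive the step
from the blob size, so the index stays roughly bounded instead of
growing as blob_size / 16 * sizeof(index_entry).

#include <stddef.h>

static size_t choose_index_step(size_t blob_size)
{
	size_t step = 16;		/* current fixed step */
	size_t max_entries = 1 << 20;	/* cap the index at ~1M entries */

	while (step < 4096 && blob_size / step > max_entries)
		step *= 2;
	return step;
}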

In the long term, I think the delta generation code needs to become
tunable.

I would add an init_delta function for reading the configuration
options. In create_delta and create_delta_index, different delta
heuristics can then be selected based on the blob size and the
configuration options. As patch_delta is not affected by this, it
should be easy to integrate new strategies.
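
Roughly what I have in mind (hypothetical names and config values, not
existing git API): init_delta reads the options once, and
create_delta_index/create_delta pick a heuristic from the blob size.

#include <stddef.h>

enum delta_strategy {
	DELTA_DEFAULT,		/* current behaviour, fixed 16-byte step */
	DELTA_SPARSE_INDEX,	/* larger index step for huge blobs */
	DELTA_PREFIX_SCAN	/* common-prefix shortcut for append-mostly blobs */
};

static size_t big_blob_limit;

void init_delta(void)
{
	/* would come from the config; the value here is only a placeholder */
	big_blob_limit = 64 * 1024 * 1024;
}

static enum delta_strategy pick_delta_strategy(size_t blob_size)
{
	if (big_blob_limit && blob_size >= big_blob_limit)
		return DELTA_SPARSE_INDEX;
	return DELTA_DEFAULT;
}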

> > I tried to speed up the delta generation by searching for a common 
> > prefix, as my blobs are mostly append-only. I tested it with a bit 
> > fewer than 1000 big blobs. The time for finding the deltas decreased 
> > from 17 to 14 minutes of CPU time.
> 
> I'm surprised that your patch makes so much of a difference.  Normally 
> the first window should always match in the case you're trying to 
> optimize and the current code should already perform more or less the 
> same as your common prefix match does.

A single copy block in the delta format is limited to 64 KB. If the
file is several hundred MB, it has to be matched in many blocks.

My patch can process everything except the last few thousand lines by
doing a memcmp.

Additionally, nearly every line starts with the same prefix of more
than 16 bytes, so it is likely that many blocks map to the same hash
value.
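
For reference, the prefix scan boils down to something like this
(simplified, untested sketch, not the actual patch): compare src and
trg block-wise with memcmp and return how many leading bytes are
identical, rounded down to whole 64 KB blocks. That prefix can then be
emitted as a run of copy ops (each limited to 64 KB), and the
hash-based matching only has to cover the tail.

#include <stddef.h>
#include <string.h>

static size_t common_prefix(const void *src, size_t src_size,
			    const void *trg, size_t trg_size)
{
	const unsigned char *s = src, *t = trg;
	size_t limit = src_size < trg_size ? src_size : trg_size;
	size_t blk = 64 * 1024, done = 0;

	while (done < limit) {
		size_t n = limit - done < blk ? limit - done : blk;
		if (memcmp(s + done, t + done, n))
			break;		/* first block that differs: stop */
		done += n;
	}
	return done;
}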

> Ah, no, actually what your patch does is a pessimisation of the matching 
> code by not considering other and possibly better matches elsewhere in 
> the reference buffer whenever there is a match at the beginning of both 
> buffers.  I don't think this is a good idea in general.

For small files, I agree with you.

> What you should try instead if you want to make the process faster is to 
> lower the threshold used to consider a match sufficiently large to stop 
> searching.  That has the potential for even faster processing as the 
> "optimization" would then be effective throughout the buffer and not 
> only at the beginning.
> 
[...]
> 
> You could experiment with that value to determine the best speed vs. size 
> compromise.

I will do some experiments.
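
For the experiments I would make the "good enough" cutoff a variable,
along these lines (illustrative names, not the real create_delta()
loop): scan the candidate matches for a position, but accept one early
once it reaches a tunable length, so a lower threshold trades some
delta size for speed over the whole buffer.

#include <stddef.h>

struct match { size_t off, len; };

static struct match pick_match(const struct match *cand, size_t n,
			       size_t good_enough)
{
	struct match best = { 0, 0 };
	size_t i;

	for (i = 0; i < n; i++) {
		if (cand[i].len > best.len) {
			best = cand[i];
			if (best.len >= good_enough)
				break;	/* good enough, stop searching */
		}
	}
	return best;
}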

Kind regards, Martin Kögler
PS: Sorry for breaking the threading; you did not CC me.