Re: [PATCH] diff-delta: bound hash list length to avoid O(m*n) behavior

Junio C Hamano <junkio@xxxxxxx> writes:

>> These numbers are misleading.
>>
>> The 36K objects pack I used in my previous tests gives 9971223
>> (from "next" branch) vs 9664546 (this patch) final pack size.
>> The wallclock time on my machine is 1m35 vs 3m30.  I doubt many
>> people are willing to pay 100% more waiting time for 3% tighter
>> pack.

	I started to suspect that the above benchmark was
unreliable, because I do not know how the original linux-2.6
repository I used for my testing was packed, and I was
letting delta reuse do its thing, so the times were probably
off (the 3-byte finer grained deltifier would have spent a
lot more time during the initial packing) and so were the
resulting sizes (the finer grained one would have packed
things a lot more tightly if the deltas it was allowed to
reuse had been made by itself, not by the old code).

	So I tried a fresh packing without reusing any
deltas, with three variants: the original 16-byte one
(master), the 3-byte finer grained one with your hash limit
(nico), and the 3-byte finer grained one with my more
aggressive hash limit (jc).  The benchmark is not meant to be
scientific, but to see how well it serves the kernel
community ;-).


	The first round.  The set of objects packed was from
today's Linus tip (everything down to epoch v2.6.12-rc2),
193309 objects in total, on my Duron 750 with slow disks.
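
For concreteness, such a fresh pack can be produced along
these lines, where the output base name is illustrative and
--no-reuse-delta tells pack-objects to compute every delta
from scratch instead of copying it from an existing pack:

	git rev-list --objects HEAD |
	git-pack-objects --no-reuse-delta /tmp/full

This writes /tmp/full-<sha1>.pack and a matching .idx.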

        real        user        sys        bytes      savings
master  11m17.121s  10m23.280s  0m47.290s  109045599  N/A
nico    25m37.058s  23m0.770s   2m20.460s  104401392  4.25%
jc      24m12.072s  21m45.120s  2m16.400s  104409761  4.25%
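
The savings column, here and in the tables below, is
presumably just the byte reduction relative to master; for
the jc row, for example:

	echo '109045599 104409761' |
	awk '{ printf "%.2f%%\n", ($1 - $2) * 100 / $1 }'
	# 4.25%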

My hack does not change things much in the overall picture
compared to yours, although it does cut the runtime by about
5%.  We can immediately see that the finer grained variants
are significantly more expensive than the original, looser
one either way.


	The next test is to place the "full pack" generated
in the previous experiment in .git/objects/pack and generate
packs by reusing deltas.  When testing the "master" version,
I move the pack produced above with "master" under
.git/objects/pack, and run this to emulate pack generation
for a downloader who has v2.6.14 and wants v2.6.15-rc1 (the
same 36K objects test I did previously):

	git rev-list --objects-edge v2.6.14..v2.6.15-rc1 |
	time git-pack-objects --stdout | wc -c
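
Here the priming step amounts to something like the
following, with illustrative pack names carried over from the
sketch above:

	mv /tmp/full-<sha1>.pack /tmp/full-<sha1>.idx \
		.git/objects/pack/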

When switching to test the "nico" deltifier implementation, I
start with the full pack generated by "nico" in
.git/objects/pack to make the comparison fair.  Here is the
result:

        real       user       sys        bytes    savings
master  1m37.703s  1m28.970s  0m5.860s   9968386  N/A
nico    3m39.982s  3m24.420s  0m14.380s  9182761  7.88%
jc      3m0.434s   2m35.940s  0m13.730s  9139460  8.31%

In a thin transfer, base objects are omitted, so the
generated pack has a higher delta to non-delta ratio than
usual -- and that is why we see much larger savings.  Still,
we are paying quite a lot of overhead for the finer grained
delta code.
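
For reference, the thin-ness comes from --objects-edge:
boundary commits are listed with a leading "-", and
pack-objects may delta against their trees and blobs without
putting them in the output pack.  A sketch of the input
stream, with placeholder object names:

	git rev-list --objects-edge v2.6.14..v2.6.15-rc1
	# -<boundary commit on the v2.6.14 side>
	# <commit in the range>
	# <tree or blob> <path>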


	The last test does not use a thin pack but still
reuses deltas.  This is to emulate "git repack" performance.
Again, I had the matching pack in .git/objects/pack to make
the comparison fair:

	git rev-list --objects v2.6.14..v2.6.15-rc1 |
	time git-pack-objects --stdout | wc -c

And here is the result:

        real       user       sys       bytes     savings
master  1m0.866s   0m57.590s  0m2.810s  34601417  N/A
nico    2m8.059s   2m0.360s   0m6.350s  33888419  2.06%
jc      1m49.894s  1m42.250s  0m6.780s  33838262  2.20%

This one does not show much savings compared to the full pack
case, but still spends about twice the time.


	In short, I'd love to get a tighter packing, but as
it currently stands, I do not think a 5% reduction in the
resulting pack size warrants making a 100% slower operation
the default.  And the experiment I did does not account for
the memory pressure the new deltifier imposes on us (two
pointers for every byte of source material -- it could be
reduced to one pointer, though), so when we start talking
about huge files I am not sure we can keep the runtime
reasonably low.
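
As a back-of-envelope illustration of that pressure, assuming
4-byte pointers on a 32-bit host and a 100 MB source blob:

	echo 104857600 |
	awk '{ printf "%.0f MB of index\n", $1 * 2 * 4 / 1048576 }'
	# 800 MB of index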

	It may be an option to have a flag that says "I want
a much tighter pack made; you can spend all the time you want
while I am sleeping" and switch between the two deltifiers,
but otherwise...

	One thing I do not understand is why my patch improves
the compressed result size.  The patch was primarily to brutally
pessimize the selection of "copied from source" to avoid the
corner case performance problems, so I would understand why it
is faster than yours, but I expected it to *always* create a
looser pack.  I am puzzled.

	Swapping the window scan order is orthogonal to this,
so maybe we could do that first and make it work, then start
reusing the refindex to see how well things improve.  But we
should keep in mind the recent report that pack-objects ran
out of space even with the current code, which does not reuse
the refindex, when packing 8 huge objects X-<.
