Re: Decompression speed: zip vs lzo

On Fri, 11 Jan 2008, Pierre Habouzit wrote:

> Okay, the numbers are still not that impressive, but my patch doesn't
> touch _only_ deltas, but also log comments, as I said, so I've redone
> my tests with git log and *TADAAAA*:
> 
> vanilla git:
>     repeat 5 time git log >|/dev/null
>     git log >| /dev/null  2,54s user 0,12s system 99% cpu 2,660 total
>     git log >| /dev/null  2,52s user 0,12s system 99% cpu 2,653 total
>     git log >| /dev/null  2,57s user 0,07s system 99% cpu 2,637 total
>     git log >| /dev/null  2,56s user 0,09s system 99% cpu 2,659 total
>     git log >| /dev/null  2,54s user 0,10s system 99% cpu 2,660 total
> 
> with the 512-octet limit:
> 
>     $ repeat 5 time git log >|/dev/null
>     git log >| /dev/null  2,10s user 0,10s system 99% cpu 2,193 total
>     git log >| /dev/null  2,08s user 0,10s system 99% cpu 2,189 total
>     git log >| /dev/null  2,06s user 0,11s system 100% cpu 2,162 total
>     git log >| /dev/null  2,04s user 0,13s system 100% cpu 2,172 total
>     git log >| /dev/null  2,06s user 0,13s system 99% cpu 2,198 total
> 
>     That's already a 20% time reduction.

Well, sorry, but that doesn't convince me.  The whole 'git log' taking 
around 2 seconds is already hellishly fast for what it does, and IMHO 
this is not worth increasing the repository storage size for this 
particular workload.

> with the 1024-octet limit:
>     $ repeat 5 time git log >|/dev/null
>     git log >| /dev/null  1,39s user 0,12s system 99% cpu 1,512 total
>     git log >| /dev/null  1,38s user 0,12s system 100% cpu 1,498 total
>     git log >| /dev/null  1,41s user 0,10s system 99% cpu 1,514 total
>     git log >| /dev/null  1,41s user 0,10s system 100% cpu 1,506 total
>     git log >| /dev/null  1,40s user 0,10s system 100% cpu 1,504 total
> 
>     Yes, that's a 43% time reduction!

If that were a 43% reduction of a 10-second operation, like the blame 
operation typically is, then sure, I would agree.  But otherwise the 
significant storage size increase is not worth a reduction of less than 
a second in absolute time.
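For reference, the quoted percentages can be checked against the averaged wall-clock ("total") times above. A quick sketch, with the numbers taken straight from the five runs quoted in each block:

```python
# Averages of the five "total" wall-clock times quoted in this thread.
vanilla    = (2.660 + 2.653 + 2.637 + 2.659 + 2.660) / 5  # ~2.654 s
limit_512  = (2.193 + 2.189 + 2.162 + 2.172 + 2.198) / 5  # ~2.183 s
limit_1024 = (1.512 + 1.498 + 1.514 + 1.506 + 1.504) / 5  # ~1.507 s

def reduction(base, new):
    """Percentage of time shaved off relative to the baseline."""
    return 100.0 * (base - new) / base

print(f"512-octet limit:  {reduction(vanilla, limit_512):.1f}% faster")   # ~17.7%
print(f"1024-octet limit: {reduction(vanilla, limit_1024):.1f}% faster")  # ~43.2%
```

So the quoted "20%" is closer to 18% in wall-clock terms, while the 43% figure for the 1024-octet limit checks out.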

>   As a side note, repacking with the 1024-octet limit takes 4:06 here,
> and 4:26 without any limit at all, which is 8% less time.  I know it
> doesn't matter a lot, as repack is a one-time operation, but still, it
> would speed up git gc --auto, which is not something to neglect
> completely.

No, I doubt it would.  The bulk of 'git gc --auto' will reuse existing 
pack data, which is quite different from 'git repack -f'.
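That distinction can be sketched as follows (command names and flags per stock git; the throwaway repo is purely for illustration):

```shell
# Sketch of why 'git gc --auto' is largely unaffected: a plain repack
# reuses existing deltas, while only 'repack -f' recomputes them all.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"

# What 'git gc' effectively does: existing pack data is copied/reused,
# so delta-size tuning barely changes its runtime.
git repack -a -d -q

# Force recomputation of every delta from scratch; this is the expensive
# path where a repack speedup from delta limits would actually show up.
git repack -a -d -f -q

ls .git/objects/pack/*.pack >/dev/null && echo "repacked OK"
```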

> I say it's worth investigating a _lot_,

Well, I was initially enthusiastic about this avenue, but the 
performance difference is far from impressive IMHO, given the tradeoff.


Nicolas
