Re: Decompression speed: zip vs lzo

Junio C Hamano wrote:
> Note that neither the space nor the time performance of
> compressing and uncompressing a single huge blob is as
> interesting in the context of git as compressing/uncompressing
> millions of small pieces whose total size is comparable to the
> specimen of the "huge single blob" experiment.  Obviously loose
> object files are compressed individually, and packfile contents
> are also individually and independently compressed.  Set-up cost
> for individual invocations of compression and uncompression on
> smaller data matters a lot more than an experiment on
> compressing and uncompressing a single huge blob (this applies
> to both time and space).

Yes - and lzo will almost certainly win on all those counts!
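
For concreteness, a minimal sketch of the kind of measurement being
described might look like the following; it uses zlib's one-shot
compress() so that every small piece pays the full per-call setup
cost, just as loose objects do.  The buffer size and piece count are
arbitrary illustrative choices, not figures from any real benchmark,
and the same shape of test applies to uncompress() on the
decompression side:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

static void time_compress(const unsigned char *src, size_t piece,
			  size_t count, const char *label)
{
	unsigned char *dst = malloc(compressBound(piece));
	clock_t start = clock();

	for (size_t i = 0; i < count; i++) {
		uLongf dst_len = compressBound(piece);
		/* each call pays the full zlib setup cost, as loose objects do */
		compress(dst, &dst_len, src + i * piece, piece);
	}
	printf("%s: %.3fs\n", label,
	       (double)(clock() - start) / CLOCKS_PER_SEC);
	free(dst);
}

int main(void)
{
	size_t total = 16 * 1024 * 1024;	/* 16 MiB of test data */
	unsigned char *buf = malloc(total);

	for (size_t i = 0; i < total; i++)
		buf[i] = (unsigned char)(i * 31 % 251);

	time_compress(buf, total, 1, "one 16 MiB blob");
	time_compress(buf, 4096, total / 4096, "4096 x 4 KiB pieces");

	free(buf);
	return 0;
}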

To go forward, I think this would need a prototype and benchmark
figures for things like "annotate" and "fsck --full" - but bear in
mind it would be a long road to follow through to completion, as
repository compatibility would need to be a primary concern, and this
would essentially create a new pack type AND a new *object* type.
Not only that, but currently there is no header in the objects on
disk which can be used to detect a gzip vs. an lzop stream.  Not
really worth it IMHO - gzip is already fast enough on any modern
processor these days.
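
If the two streams did need to be told apart, a byte-sniffing check
is at least conceivable; the sketch below is purely illustrative, not
a proposal.  It assumes lzop-style file framing (raw liblzo output
carries no magic bytes at all, which is exactly the problem above),
and relies on the fact that git's loose objects are zlib streams
whose two-byte header can be validated per RFC 1950:

#include <stddef.h>
#include <string.h>

static const unsigned char lzop_magic[9] =
	{ 0x89, 'L', 'Z', 'O', 0x00, 0x0d, 0x0a, 0x1a, 0x0a };

/* Does buf look like the start of a zlib (RFC 1950) stream? */
static int looks_like_zlib(const unsigned char *buf, size_t len)
{
	if (len < 2)
		return 0;
	/* low nibble of CMF must be 8 (deflate) ... */
	if ((buf[0] & 0x0f) != 8)
		return 0;
	/* ... and CMF*256 + FLG must be a multiple of 31 */
	return ((buf[0] << 8) + buf[1]) % 31 == 0;
}

/* Does buf carry the lzop file-format magic? */
static int looks_like_lzop(const unsigned char *buf, size_t len)
{
	return len >= sizeof(lzop_magic) &&
	       !memcmp(buf, lzop_magic, sizeof(lzop_magic));
}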

Sam.
