Re: Decompression speed: zip vs lzo

Johannes Schindelin wrote:
> No new object type.  Why should it?  But it has to have a config variable 
> which says what type of packs/loose objects it has (and you will not be 
> able to mix them).

I meant loose objects.  However this is configured, it affects things
like HTTP push/pull.  Configuring it like that would be a bit too
fragile for my tastes.
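Just to illustrate what I mean, the knob would presumably end up
looking something like this in .git/config (the name is made up,
there is no such option today):

	[core]
		# hypothetical - says how loose objects/packs
		# in this repository are compressed
		compressionFormat = lzo

But a dumb HTTP client fetching loose objects by path has no way to
learn that setting before it starts inflating them, which is exactly
the fragility I'm worried about.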

>> Not really worth it IMHO - gzip is already fast enough on even the most 
>> modern processor these days.
> 
> I agree that gzip is already fast enough.
> 
> However, pack v4 had more goodies than just being faster; it also promised 
> to have smaller packs.  And pack v4 would need to have the same 
> infrastructure of repacking if the client does not understand v4 packs.

Indeed - I think it would be a lot easier to implement if it didn't
bother with loose objects.  It could just be a new pack version with
more compression formats.  In the cases where you know you're going to
be doing a lot of analysis you'd already be running "git-repack -a -f"
to shorten the delta chains, so this might be a useful option for some
- but again I'd want to see figures first.
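For example (these are all real git-repack options; the numbers are
just arbitrary illustrations):

	git repack -a -d -f --depth=10 --window=50

-f throws away the existing deltas and recomputes them, and --depth
caps the delta-chain length, which is what actually buys you the
faster access.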

I do really like LZO as far as compression algorithms go.  It seems a
lot faster than zlib, for not a huge loss in compression ratio.
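If anyone wants to play with numbers, a rough micro-benchmark sketch
along these lines would do - untested, it assumes zlib and liblzo2
(lzo1x) are installed, and the buffer contents, sizes and round count
are all arbitrary:

	/* lzo-vs-zlib.c: compress a buffer once with each library,
	 * then time repeated decompression.  Build with:
	 *   cc -O2 lzo-vs-zlib.c -lz -llzo2
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>
	#include <zlib.h>
	#include <lzo/lzo1x.h>

	#define INPUT_SIZE (4 * 1024 * 1024)  /* arbitrary test size */
	#define ROUNDS 100                    /* arbitrary repeat count */

	static double now(void)
	{
		return (double)clock() / CLOCKS_PER_SEC;
	}

	int main(void)
	{
		unsigned char *in = malloc(INPUT_SIZE);
		uLongf zlen = compressBound(INPUT_SIZE);
		unsigned char *zbuf = malloc(zlen);
		/* documented worst-case output growth for lzo1x */
		unsigned char *lbuf = malloc(INPUT_SIZE + INPUT_SIZE / 16 + 64 + 3);
		unsigned char *out = malloc(INPUT_SIZE);
		lzo_voidp wrkmem = malloc(LZO1X_1_MEM_COMPRESS);
		lzo_uint llen;
		double t;
		long i;

		/* something mildly compressible, like real data */
		for (i = 0; i < INPUT_SIZE; i++)
			in[i] = "the quick brown fox "[i % 20];

		if (lzo_init() != LZO_E_OK)
			return 1;
		if (compress(zbuf, &zlen, in, INPUT_SIZE) != Z_OK)
			return 1;
		if (lzo1x_1_compress(in, INPUT_SIZE, lbuf, &llen, wrkmem) != LZO_E_OK)
			return 1;

		t = now();
		for (i = 0; i < ROUNDS; i++) {
			uLongf n = INPUT_SIZE;
			uncompress(out, &n, zbuf, zlen);
		}
		printf("zlib: %lu bytes, %.2fs for %d rounds\n",
		       (unsigned long)zlen, now() - t, ROUNDS);

		t = now();
		for (i = 0; i < ROUNDS; i++) {
			lzo_uint n = INPUT_SIZE;
			lzo1x_decompress(lbuf, llen, out, &n, NULL);
		}
		printf("lzo:  %lu bytes, %.2fs for %d rounds\n",
		       (unsigned long)llen, now() - t, ROUNDS);

		free(in); free(zbuf); free(lbuf); free(out); free(wrkmem);
		return 0;
	}

The decompression loops are the interesting part for git, since that
is the cost every object read pays.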

Sam.
