About Summer of code idea -- better big-file support

Hi there,

I am wondering about starting to implement the idea of better big-file support
during Summer of Code.
Here is my idea: gigantic files are often media or otherwise binary files, such
as rmvb, swf, pdf, dll, etc., and those files are already compressed. As we all
know, git uses zlib to compress every object in the repository, and it is
common sense that compressing such files again with zlib yields an unbelievably
low compression ratio: in my test, a 521MiB mkv file was still 520MiB after
compression. How ridiculous is that!
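The near-1.0 ratio on already-compressed data is easy to reproduce. Here is a
small Python sketch (my own illustration, not git code) that contrasts
random-looking bytes, a stand-in for compressed media, with plain repetitive
text:

```python
import os
import zlib

# Already-compressed media data is statistically close to random bytes,
# so zlib (which git uses for its objects) barely shrinks it.
media_like = os.urandom(1 << 20)                 # stand-in for e.g. mkv payload
text_like = b"hello git big files\n" * 50000     # repetitive plain text

for label, data in [("media-like", media_like), ("text-like", text_like)]:
    compressed = zlib.compress(data, 6)          # 6 is zlib's default level
    ratio = len(compressed) / len(data)
    print(f"{label}: {len(data)} -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

On my understanding, the media-like input comes out essentially the same size
(ratio near or above 1.0), while the text shrinks dramatically, which is
exactly the asymmetry described above.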
I also tested that running "git hash-object" to calculate the SHA-1 takes
1 minute, while "git hash-object -w", which additionally compresses the
content and copies it into the repository, takes 2 minutes. So we sacrifice
a lot but gain very little.
So, what do you all think about testing the file type first, and then deciding
whether to use zlib or not?
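One way such a test could look (purely my sketch, and the names
`should_compress`, `SAMPLE_SIZE`, and `THRESHOLD` are made up for
illustration): instead of trusting file extensions, compress a small sample of
the content and skip zlib for the whole file when the sample barely shrinks.

```python
import zlib

SAMPLE_SIZE = 64 * 1024   # hypothetical: how much of the file to probe
THRESHOLD = 0.95          # hypothetical: "barely shrinks" cutoff

def should_compress(data: bytes) -> bool:
    """Heuristic sketch: compress a small sample; if the ratio is near 1.0,
    the file is probably already compressed (mkv, pdf, ...) and running
    zlib over the whole thing would cost CPU for almost no space gain."""
    sample = data[:SAMPLE_SIZE]
    if not sample:
        return True  # empty input: nothing to lose by compressing
    ratio = len(zlib.compress(sample, 6)) / len(sample)
    return ratio < THRESHOLD
```

A content probe like this avoids being fooled by a renamed file, which a pure
extension check (rmvb, swf, pdf, dll, ...) would miss.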
Cheers,
Di
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

