Re: Compression speed for large files

Linus Torvalds wrote:
> 
> On Tue, 4 Jul 2006, Joachim Berdal Haga wrote:
>> Here's a test with "time gzip -[169] -c file >/dev/null". Random data
>> from /dev/urandom, kernel headers are concatenation of *.h in kernel
>> sources. All times in seconds, on my puny home computer (1GHz Via Nehemiah)
> 
> That "Via Nehemiah" is probably a big part of it.
> 
> I think the VIA Nehemiah just has a 64kB L2 cache, and I bet performance 
> plummets if the tables end up being used past that. 

Not really. The numbers in my original post were from an Intel Core Duo;
they were 158/18/6 s for comparable (but larger) data.

And on a P4 1.8GHz with 512kB of L2, the same 23MB data file compresses
in 28.1/5.9/1.3 s. That's a factor of ~22 between slowest and fastest;
the VIA only showed a factor of 18, so the spread is actually *larger*
on the CPU with the bigger cache.
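
For reference, a minimal sketch of this kind of measurement (the test
file name is a placeholder and dd's bs=1M is the GNU spelling; not
necessarily the exact setup used for the numbers above):

    # Generate ~23MB of random test data (file name is a placeholder).
    dd if=/dev/urandom of=testdata bs=1M count=23

    # Time gzip at fastest (-1), default (-6), and best (-9) levels,
    # writing to /dev/null so only compression time is measured.
    for level in 1 6 9; do
        echo "gzip -$level:"
        time gzip -"$level" -c testdata >/dev/null
    done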

-j.
