Re: throughput limitation from cache

At 1137138557s since epoch (01/12/06 20:49:17 -0500 UTC), Richard Mittendorfer wrote:
> It happens even if I'm the only client and it's one big file being
> retrieved, so it must be some kind of internal limit. I have to look
> into the source; maybe I can find it hardcoded somewhere. 256kB/s
> looks so artificial ;)

Not too sure about that.  I just downloaded a non-cached file through
our proxy and broke 270KB/s (this is the busiest time of day for
us, though).  I know I've done better than that when it's quiet.

If I turn around and request the same file again (now that it's
cached), I'm pulling >2.0MB/s without any trouble.
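If the 256kB/s really does turn out to be that artificial, one thing
worth ruling out first is a delay pool in your squid.conf, since the
Debian build has them compiled in.  A quick check (assuming the stock
Debian config path; adjust if yours lives elsewhere):

# grep -E '^[[:space:]]*delay_' /etc/squid/squid.conf

If that prints nothing, no delay pools are configured and the limit is
coming from somewhere else.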

We're on a P3-850MHz with 1.5GB RAM and 30GB SCSI RAID1.  I'm hoping
to upgrade by the end of the month.  ;-)

> Had a look at it. Doesn't look like Debian's squid is compiled with
> async-io. ...hmm - <coffee> - sure, Debian's does have async-io. It
> must. aufs _is_ compiled in: --enable-storeio=ufs,aufs,diskd,null

Confirmed that it has it.  We're on a stock config of Debian 3.1:

# squid -v
Squid Cache: Version 2.5.STABLE9
configure options:  --enable-async-io --with-pthreads
--enable-storeio=ufs,aufs,diskd,null --enable-linux-netfilter
--enable-arp-acl --enable-removal-policies=lru,heap --enable-snmp
--enable-delay-pools --enable-htcp --enable-poll
--enable-cache-digests --enable-underscores --enable-referer-log
--enable-useragent-log --enable-auth=basic,digest,ntlm --enable-carp
--with-large-files i386-debian-linux
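
Keep in mind that having aufs compiled in doesn't mean it's actually in
use; the cache_dir line in squid.conf has to name it explicitly.
Something along these lines (the directory and sizes below are only
placeholders, adjust to your own disk layout):

cache_dir aufs /var/spool/squid 10000 16 256

That's 10000 MB of cache spread over 16 first-level and 256
second-level directories.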

Jason

-- 
Jason Healy
http://www.logn.net/

