The place to start is grabbing traces from a running system to find out where the time is being spent. You'll probably want to start with vmstat and see whether it's chewing 100% of one CPU, whether it's blocked waiting for disk I/O, or a combination of the two. It's probably CPU; I'd then run oprofile to gather statistics.

If you've paid for AS then you've paid for support; please consider contacting Red Hat and asking them for assistance.

Adrian

On Tue, Aug 07, 2007, NGUYEN, KHANH, ATTSI wrote:
> Hi,
>
> I am using squid 2.6 on Linux AS version 4, update 3.
>
> Hardware: Dell 2850, 4 GB memory, 6 x 72 GB disks, no RAID. Each disk is one mount point.
>
> Squid basic configuration:
>
> cache_mem 2 GB
> maximum_object_size 5096 MB
> maximum_object_size_in_memory 100 MB
> cache_replacement_policy lru
> 6 cache_dirs, one on each disk.
> The cache server is configured as a reverse proxy in front of an Apache server.
>
> Squid compile options: enable-follow-x-forwarded-for, enable-async-io, enable-auth, disable-wccp, enable-snmp, enable-x-accelerator-vary, enable-remove-policies=lru
>
> When I request a 4 GB object from the squid server (the object is already cached) versus from an Apache server (version 2.2.0), the CPU usage of the squid process is at least 3 times that of the Apache server. This object size exceeds maximum_object_size_in_memory, so the object has to be fetched from disk each time it is requested, and squid presumably has some extra overhead there. However, 3 times more seems unusual. Does anybody have any suggestions on tuning squid or the OS to better serve large objects? I also notice that CPU usage takes a large hit when the object is greater than 1 MB.
>
> Thanks,
> Khanh

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -
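The vmstat triage described above (CPU-bound vs. blocked on disk I/O) can be sketched as follows. This is a minimal illustration, not from the thread: the sample line, the thresholds, and the `classify` helper are all assumptions for demonstration; the column names match standard vmstat output.

```python
# A sketch of the triage described above: read one vmstat sample and
# decide whether the box is CPU-bound or waiting on disk I/O.
# Thresholds and the sample line are illustrative assumptions.

def classify(header: str, sample: str) -> str:
    """Map vmstat column names to values and decide where time goes."""
    cols = dict(zip(header.split(), (int(v) for v in sample.split())))
    cpu_busy = cols["us"] + cols["sy"]   # user + system CPU time, percent
    io_wait = cols["wa"]                 # percent of time waiting on disk I/O
    if cpu_busy >= 80:
        return "cpu-bound"
    if io_wait >= 30:
        return "io-bound"
    return "mixed/idle"

header = "r b swpd free buff cache si so bi bo in cs us sy id wa"
sample = "2 0 0 102400 8192 65536 0 0 120 40 900 1500 70 25 3 2"
print(classify(header, sample))  # → cpu-bound
```

If this reports CPU-bound, the next step is oprofile (or a comparable profiler) to see which squid functions burn the cycles; if it reports I/O-bound, the cache_dir layout and disk scheduling are the place to look.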