On Wed, 25 May 2011 21:11:45 -0700, Tory M Blue wrote:
On Wed, May 25, 2011 at 9:03 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx>
wrote:
On Wed, 25 May 2011 20:27:05 -0700, Tory M Blue wrote:
On Wed, May 25, 2011 at 8:01 PM, Amos Jeffries
<squid3@xxxxxxxxxxxxx>
wrote:
backup, so I was leery. CPU cycles sure, but the squid process
shows:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30766 squid     20   0 6284m 6.1g 3068 S 13.9 38.8  91:51.50 squid
Hold up a minute. This diagram worries me. Squid-2 should not have
any
{squid} entries. Just helpers with their own names.
the diagram was from pstree.
The processes that can be seen via ps are:
root 2334 1 0 May19 ? 00:00:00 squid -f
/etc/squid/squid.conf
squid 2336 2334 11 May19 ? 17:54:41 (squid) -f
/etc/squid/squid.conf
squid 2338 2336 0 May19 ? 00:00:00 (unlinkd)
Are they helpers running with the process name of "squid" instead of
their
own binary names like unlinkd?
Or are they old squid processes which are still there after something
like a crash?
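One quick way to tell (a generic Linux check, not something suggested in
this thread) is to look at the real binary behind each PID, for example:

  ls -l /proc/2336/exe /proc/2338/exe
  pstree -p 2334

Helpers such as unlinkd show up under their own executable even when ps
reports them with a different process title.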
No crashes, just controlled stops and starts or -k reconfigure. Nothing old.
Phew.
l1/l2 cache? Have not considered or looked into it. New concept for
me :)
Sorry, terminology mixup.
I mean the L1 and L2 values on the cache_dir line: sub-directories
within the dir structure.
The URL hash is mapped to a 32-bit binary value, which then gets split
into the FS path: path-root/L1/L2/filename
  cache_dir type path-root size L1 L2
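For illustration, a concrete cache_dir line along those lines (the aufs
type, /var/spool/squid path and 10000 MB size are placeholder values,
not taken from this thread):

  cache_dir aufs /var/spool/squid 10000 16 256

With L1=16 and L2=256 the store gets 16 first-level directories, each
holding 256 second-level directories, and an object ends up at something
like /var/spool/squid/0A/3F/<filenumber> once its hash is split up.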
Ahh yes, I was actually running at 16 256
and recently moved it to 8 128, trying "again" to get the file count
under control.
So did I move this in the wrong direction?
Um, yes. 64 256 would probably be a better step. Neither affects the
total file count, just the spread within the filesystem.
Tune it to larger values to decrease the file count in each directory
and avoid iowait on ext-like filesystems while the disk scans inodes
for a particular filename in the L2 directory.
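To put rough numbers on that, using the values from this thread:
16 x 256 gives 4096 leaf directories, 8 x 128 gives 1024, and
64 x 256 gives 16384. So the move from 16 256 to 8 128 packed roughly
four times as many files into each L2 directory, while 64 256 would
spread the same total across four times as many directories as the
original layout.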
thanks again Amos
Tory