x86-64 Linux.
Errors are happening during normal run time, and seem to take at
least 4-5 hours to appear, sometimes as long as a week. Nowhere near
the data store limits, as I've got 100 GB configured and the cache_dir
is under 100 MB.
Beyond that, I'm not sure. I could leave it running for a bit and see
if I could pull more info, if I knew where to look.
On Oct 4, 2006, at 4:26 PM, Adrian Chadd wrote:
On Wed, Oct 04, 2006, Mike Garfias wrote:
yes to diskd.
I didn't see any other bug relating to squid3 with the same problem.
Is there any workaround, or should I use ufs or aufs instead?
Which platform?
If you're running FreeBSD 6 then you'll be fine running aufs as long
as you don't also run kqueue. That particular kernel crash should be
fixed in the FreeBSD-6.2 release.
Anything else? Run aufs. Diskd has some well-understood limitations
which none of us have had the time to sit down and fix.
(But hey, if someone's interested in fixing diskd, let me or Henrik
know. We'll be able to help you understand the problem and hopefully
help everyone find a fix. :)
As for that specific bug? I'm not sure. The diskd IO is done in
separate processes, and so the "current file descriptor" counts in
cachemgr don't take them into account.
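Since the helper's descriptors live in a different process, one way to see them on Linux is to inspect that process directly through /proc. A minimal sketch; the diskd helper's PID is the value you'd substitute in (something like `pgrep -f diskd` should find it), and the shell's own PID is used here only so the sketch runs as-is:

```shell
#!/bin/sh
# Count the file descriptors a given process actually holds, via /proc.
# Substitute the diskd helper's PID for $$ below; $$ (this shell) is
# only a stand-in so the sketch is runnable anywhere.
pid=$$
fd_count=$(ls /proc/"$pid"/fd | wc -l)
echo "pid $pid holds $fd_count open descriptors"
```

If that count is anywhere near the helper's own ceiling, "Too many open files" from the helper would make sense even while the main process reports plenty of headroom.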
Can you tell me whether the errors are happening during normal running
of squid, or just after startup? Can you check the 'general
information' page in cachemgr and tell us how full your disk store is?
Is it "over"-full?
The UNLNK tells me that squid is busy removing files, and it only does
that if (a) the disk store is close to full or overfilled and it's
madly running the replacement policy to delete objects, or (b) it's in
the process of rebuilding the object index after startup and it's
deleting all the expired files it found in the indexes.
Adrian
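Condition (a) above can be sanity-checked from the shell by comparing the cache_dir's on-disk usage with its configured cap. A minimal sketch: the /data/squid3 path comes from the logs in this thread and 102400 MB (100 GB) from the configured limit mentioned earlier; both are arguments so you can substitute your own cache_dir settings, and the /tmp default exists only so the sketch runs anywhere:

```shell
#!/bin/sh
# Compare a cache_dir's actual disk usage with its configured cap.
# Usage: ./check_store.sh /data/squid3 102400
# (path and cap taken from this thread; defaults let it run anywhere)
cache_dir="${1:-/tmp}"
limit_mb="${2:-102400}"
used_mb=$(du -sm "$cache_dir" 2>/dev/null | cut -f1)
if [ "$used_mb" -ge "$limit_mb" ]; then
    echo "over-full: ${used_mb} MB used of ${limit_mb} MB"
else
    echo "ok: ${used_mb} MB used of ${limit_mb} MB"
fi
```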
On Oct 4, 2006, at 3:34 PM, Adrian Chadd wrote:
I think this is already in Bugzilla.
Is this squid-3 running with diskd?
adrian
On Wed, Oct 04, 2006, Mike Garfias wrote:
example of error logs:
24707 UNLNK id 0 /data/squid3/00/8A/00008A70: unlink: No such file or directory
24707 /data/squid3/00/04/0000043D: open: Too many open files
24707 READ id 44968: do_read: Bad file descriptor
24707 CLOSE id 44968: do_close: Bad file descriptor
24707 /data/squid3/00/8A/00008A71: open: Too many open files
24707 WRITE id 44969: do_write: Bad file descriptor
24707 WRITE id 44969: do_write: Bad file descriptor
24707 CLOSE id 44969: do_close: Bad file descriptor
2006/10/04 10:28:20| storeSwapOutFileClosed: dirno 0, swapfile 00008A71, errflag=-1
(42) No message of desired type
24707 UNLNK id 0 /data/squid3/00/8A/00008A71: unlink: No such file or directory
Everything I've read indicates that the open file limit is too low.
But (from the cachemgr):
File descriptor usage for squid:
Maximum number of file descriptors: 32768
Largest file desc currently in use: 17
Number of file desc currently in use: 13
Files queued for open: 0
Available number of file descriptors: 32755
Reserved number of file descriptors: 100
Store Disk files open: 0
Any ideas what is going on with this?
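One thing worth noting about the numbers above: those cachemgr counters describe the main squid process, while with diskd the "Too many open files" errors come from a helper process whose descriptors, as noted later in this thread, cachemgr does not account for. Each Linux process also carries its own open-file ceiling, readable from /proc. A minimal sketch, with $$ again only a stand-in for the diskd helper's PID:

```shell
#!/bin/sh
# Read a process's own open-file limit from /proc on Linux.
# Substitute the diskd helper's PID for $$; the helper's limit is
# inherited at fork time and may differ from what cachemgr reports
# for the main squid process.
pid=$$
grep 'Max open files' /proc/"$pid"/limits
```

If the helper's soft limit is far below the 32768 shown above, that mismatch would be one plausible reading of the errors.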