On Wed, Oct 12, 2011 at 3:09 AM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:

>> FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.
>
> So what is taking up all that space?
> 2GB+ objects in the cache screwing with the actual size calculation?
> logs?
> swap.state too big?
> core dumps?
> other applications?

What's puzzling is that there appears to be plenty of free space:

squid:/var/cache# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              65G   41G   22G  66% /
tmpfs                 1.7G     0  1.7G   0% /lib/init/rw
udev                   10M  652K  9.4M   7% /dev
tmpfs                 1.7G     0  1.7G   0% /dev/shm

Is it possible that the disk is actually running out of free space,
and df is just giving me the wrong output?

There are no other apps on the machine except the Squirm processes and
Sarg. I had Sarg generate reports every 5 minutes for 6 weeks, and it
ran fine. Now it runs only every hour, for safety.

>> Now Squid is running and serving requests, albeit
>> without caching. However, I keep seeing the same error:
>> client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
>> filedescriptors
>>
>> What is the reason of this since I'm not using caching at all?
>
> Cache only uses one FD. Client connection uses one, server connection uses
> one. Each helper uses at least one. Your Squid seems to be thinking it only
> has 1024 to share between all those connections. Squid can handle this, but
> it has to do so by slowing down the incoming traffic a lot and possibly
> dropping some client connections.

I increased the ulimit to 65536 for Squid, as suggested by Wilson, and
it ran fine (without caching). I then re-enabled caching, and after a
while Squid crashed with the same error: "FATAL:
storeDirOpenTmpSwapLog: Failed to open swap log." Now I'm back to
running Squid without caching.

The questions I'm asking myself are:
1) Why did this issue with FDs appear only after several months?
2) What is taking up all this space, if df reports plenty of free
space on disk?

Thanks for your tips. If there are other tests I can try, please don't
hesitate to post your suggestions.

Leonardo
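
P.S. For the archives, here are the checks I plan to run the next time
the "Failed to open swap log" error appears, in case df -h is
misleading me. The /var/cache/squid path below is an assumption on my
part; substitute whatever cache_dir points at in your squid.conf:

  # df -h only reports free blocks; a filesystem can also run out of
  # inodes, which produces "no space" style errors while df -h still
  # shows gigabytes free
  df -i

  # largest directories on the root filesystem, staying on one
  # filesystem (-x), sizes in MB, to see where the 41G actually lives
  # (logs, core dumps, the cache itself, ...)
  du -x -m --max-depth=1 / | sort -n | tail

  # swap.state can grow very large if Squid keeps dying before it has
  # a chance to rewrite the journal cleanly
  ls -lh /var/cache/squid/swap.state*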
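
And on the FD side, to confirm that the raised limit actually reached
the running Squid rather than just my shell (this assumes squidclient
is installed; it ships with Squid, though on Debian it may be a
separate package):

  # shows the maximum and currently used file descriptors as Squid
  # negotiated them at startup
  squidclient mgr:info | grep -i 'file desc'

  # to make the limit survive restarts, raise it in the init script
  # (or, I believe, /etc/default/squid on Debian) before Squid starts,
  # instead of setting it by hand in a shell each time
  ulimit -n 65536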