Squid core

Hello, I have a problem with Squid. Every day around 00:00 (when the logs are
rotated), my Squid generates a core file that almost fills the
'/var/spool/squid' partition. Looking at the Squid cache log, here is what is
shown:

2008/05/27 00:06:12| helperOpenServers: Starting 40 'squirm' processes
2008/05/27 00:06:12| ipcCreate: fork: (12) Cannot allocate memory
2008/05/27 00:06:12| WARNING: Cannot run '/usr/local/squirm/bin/squirm' process.
FATAL: Too many queued url_rewriter requests (1 on 0)
squidaio_queue_request: WARNING - Queue congestion


The hardware has 1 GB of RAM, with 512 MB reserved for cache_mem, and Squid is
the only service running on the machine. I am using squirm as the URL rewriter,
launching 40 children.
Until now, the only way to recover has been to erase the core file and restart
Squid. Any ideas why this happens and how to solve it? Thanks.

Squid version: 2.6.STABLE6 (Red Hat 5 official package)




