Hello, I have used all the internet resources available and I still can't find a definitive solution to this problem. We have Squid running on a Solaris 10 server. Everything runs smoothly except that the process size grows constantly; yesterday it reached 4 GB, after which the process crashed. This is the output from the log:

FATAL: xcalloc: Unable to allocate 1 blocks of 4194304 bytes!
Squid Cache (Version 3.0.STABLE20): Terminated abnormally.
CPU Usage: 91594.216 seconds = 57864.539 user + 33729.677 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
        total space in arena:   -157909 KB
        Ordinary blocks:         691840 KB 531392 blks
        Small blocks:              4460 KB 184700 blks
        Holding blocks:              50 KB   1847 blks
        Free Small blocks:          696 KB
        Free Ordinary blocks:   -854957 KB
        Total in use:            696351 KB -440%
        Total free:             -854260 KB 541%

Every limit on the Solaris box has been set to unlimited:

ulimit -aH
core file size (blocks, -c)     unlimited
data seg size (kbytes, -d)      unlimited
file size (blocks, -f)          unlimited
open files (-n)                 65536
pipe size (512 bytes, -p)       10
stack size (kbytes, -s)         unlimited
cpu time (seconds, -t)          unlimited
max user processes (-u)         16357
virtual memory (kbytes, -v)     unlimited

At the moment of the crash there was still plenty of swap left, so it is not a swap problem.

The OS is a 64-bit operating system, but Squid is not compiled as a 64-bit binary. Do you think that recompiling Squid in 64-bit will solve the problem, or will it only postpone the crash, because there is a memory leak that makes the process's memory consumption and size grow without bounds?

Every night the logs are rotated with squid -f squid.conf -k rotate; could this cause the process to become so big?

I am eagerly looking forward to your help. Thank you in advance.

MarioG
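
P.S. In case it is useful, this is roughly how I am checking the binary and how I would attempt the 64-bit rebuild. The install path and compiler flags below are my assumptions (a gcc build), not our exact build line:

# confirm whether the installed squid binary is a 32-bit or 64-bit ELF
file /usr/local/squid/sbin/squid

# confirm the kernel itself is running in 64-bit mode
isainfo -v

# watch the process size grow over time (replace <pid> with the squid PID)
pmap -x <pid> | tail -1

# rough sketch of a 64-bit rebuild with gcc (prefix and flags are assumed)
CFLAGS="-m64" CXXFLAGS="-m64" LDFLAGS="-m64" ./configure --prefix=/usr/local/squid
make && make install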