Mario Garcia Ortiz wrote:
Hello
thank you very much for your answer. The problem is that Squid grows
constantly in size; so far it is already at 1.5GB, and it was
restarted on Monday.
I will try to provoke a core dump so I can send it to the Squid team.
Squid is supposed to allow growth until the internal limit is reached.
According to those stats, only 98% of the internal storage limit is used.
Anything you can provide about build options, configuration settings,
and what the OS thinks the memory usage is will help narrow the
problem search down.
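For the OS-side numbers, a minimal sketch of what to capture (the PID
lookup is an assumption; pass your actual Squid PID as the first argument):

```shell
#!/bin/sh
# Sketch: capture what the OS thinks the process memory usage is.
# Pass the Squid PID as $1; falls back to this shell's PID for demonstration.
PID=${1:-$$}
# Virtual size (VSZ) and resident size (RSS) in KB;
# `ps -o` works on both Solaris 10 and Linux.
ps -o pid= -o vsz= -o rss= -p "$PID"
```

Sampling this a few times a day alongside Squid's own stats makes a
leak versus normal cache growth much easier to tell apart.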
In the meantime I will upgrade Squid to the latest stable 21. Are
there any recommended options for compiling on Solaris 10?
Options-wise everything builds on Solaris. Actual usage testing has
been a little light, so we can't guarantee anything as yet.
Some extra build packages may be needed:
http://wiki.squid-cache.org/KnowledgeBase/Solaris
such as using
an alternate malloc library?
If you are able to find and use a malloc library that is known to
handle memory allocation on 64-bit systems well, that would be good.
They can be rare on some systems.
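On Solaris 10, one allocator worth trying is libumem, which ships with
the OS. A rough sketch of a wrapper that preloads it before starting
Squid (the library path and the Squid install path are assumptions for
your system; use the 64-bit library for a 64-bit binary):

```shell
#!/bin/sh
# Sketch: start Squid with an alternate allocator preloaded.
# /usr/lib/libumem.so ships with Solaris 10; a 64-bit squid binary
# would need the 64-bit library (commonly /usr/lib/64/libumem.so).
LD_PRELOAD=/usr/lib/libumem.so
export LD_PRELOAD
# -N keeps Squid in the foreground, -d1 adds light debug output.
exec /usr/local/squid/sbin/squid -N -d1
```

This is a launcher fragment, not a tested recipe; verify with
`pldd <pid>` that the library was actually picked up.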
Amos
2009/12/31 Amos Jeffries <squid3@xxxxxxxxxxxxx>:
Mario Garcia Ortiz wrote:
Hello
thank you very much for your help.
the problem occurred once the process size reached 4GB. The only
application running on the server is the proxy; there are two
instances running, each one on a different IP address.
There is no cache: Squid was compiled with
--enable-storeio=diskd,null and in squid.conf:
cache_dir null /var/spool/squid1
As for the hits, I assume there are none since there is no cache; am I
wrong?
Here is what I get from the mgr:info output of squidclient:
Cache information for squid:
Hits as % of all requests: 5min: 11.4%, 60min: 17.7%
Hits as % of bytes sent: 5min: 8.8%, 60min: 10.3%
Memory hits as % of hit requests: 5min: 58.2%, 60min: 60.0%
Disk hits as % of hit requests: 5min: 0.1%, 60min: 0.1%
Storage Swap size: 0 KB
Storage Swap capacity: 0.0% used, 0.0% free
Storage Mem size: 516272 KB
Storage Mem capacity: 98.5% used, 1.5% free
Mean Object Size: 0.00 KB
Requests given to unlinkd: 0
I am not able to find a core file on the system from yesterday's
problem.
Squid was restarted yesterday at 11.40 am, and now the process data
segment size is 940512 KB.
I bet that if I let the process reach 4GB again the crash will
occur. Maybe that is necessary in order to collect debug data?
thank you in advance for your help it is very much appreciated.
kindest regards
Mario G.
You may have hit a malloc problem seen recently on 64-bit FreeBSD.
Check what the OS reports Squid's memory usage as, in particular
VIRTSZ, during normal operation, and compare it to the internal stats
Squid keeps.
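One way to make that comparison over time, as a rough sketch (the PID,
interval, and sample count are parameters; `vsz` here corresponds to
the VIRTSZ column in prstat):

```shell
#!/bin/sh
# Sketch: log the process virtual size periodically so its growth can
# be compared against Squid's internal "Storage Mem" accounting.
PID=${1:-$$}        # pass the squid PID; defaults to this shell for demo
INTERVAL=${2:-1}    # seconds between samples; use 60 for real monitoring
SAMPLES=${3:-3}
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    printf '%s vsz_kb=%s\n' "$(date +%s)" "$(ps -o vsz= -p "$PID" | tr -d ' ')"
    i=$((i + 1))
    [ "$i" -lt "$SAMPLES" ] && sleep "$INTERVAL"
done
```

If the logged VSZ keeps climbing while Squid's own memory accounting
stays flat, the growth is happening outside Squid's bookkeeping, which
points at the allocator or a leak.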
Amos
2009/12/23 Kinkie <gkinkie@xxxxxxxxx>:
On Wed, Dec 23, 2009 at 3:12 PM, Mario Garcia Ortiz <mariog@xxxxxxx>
wrote:
Hello
I have used all the Internet resources available and I still can't
find a definitive solution to this problem.
We have Squid running on a Solaris 10 server. Everything runs
smoothly except that the process size grows constantly; it reached
4GB yesterday, after which the process crashed. This is the output from
the log:
FATAL: xcalloc: Unable to allocate 1 blocks of 4194304 bytes!
[...]
I am eagerly looking forward to your help.
It seems like you're being hit by a memory leak, or there are some
serious configuration problems.
How often does this happen, and how much load is there on the system?
(in hits per second or minute, please)
Going 64-bit for Squid isn't going to solve things; at most it will
delay the crash, and it may cause further problems for system
stability.
Please see http://wiki.squid-cache.org/SquidFaq/BugReporting for hints
on how to proceed.
--
/kinkie
--
Please be using
Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
Current Beta Squid 3.1.0.15