Hey Yuri,
There are a couple of sides and side effects to the issue you describe.
If it's OK with you, I will set aside Squid and the helpers code for a
moment and look at another issue in computer science.
Let's say we are running some server/software whose purpose is to
calculate the distance from point X to point Y for a user (a real,
existing piece of software that was written for DOS).
Since this piece of software is about 50 kB in size, the user never
experienced any issues with it.
The issues came up when the user needed to run complex calculations
on a big list of points from a map.
From what I have seen in the past, a 133 MHz CPU should take the load,
and indeed it had no issues working with whatever big set of points
was thrown at it.
The main difficulty with this piece of software came when the
calculations needed to be more complex and turned from "X to Y" into
"from X to Y considering A B C D E F G H I".
Then the CPU got hogged a bit, and therefore the calculation took a bit
longer than the user expected.
Just as a side note, this DOS system didn't have any swap at all.
This software has been in use since about 198x until these days, still
with no swap at all, while the calculations are run daily by lots of
users around my country.
From a computer-science point of view this piece of software is one of
the most efficient that has ever existed, and while it was written for
DOS it is still efficient enough to support many other systems around
the world.
Wouldn't any developer want to write code at this level? I believe,
and know, that any developer strives to write software that uses the
available system resources efficiently.
But for any developer there is a point where he sees that his knowledge
alone might not be enough to make a simple piece of software run as
smoothly and as efficiently as this simple DOS software.
I have never asked the creator of this DOS program how much effort it
took him and his companions to write such a great, one-of-a-kind piece
of ART; not because he is dead or anything like that, but because I
understand it is a complex task which I am not trying to mimic.
And back to squidGuard: I myself have tried to understand why it uses so
much RAM and other resources that should not be occupied at all.
Eventually I wrote my own version that mimics squidGuard's basic
filtering logic.
I have seen the behaviour you are seeing, and a couple of times I have
tried to at least make something better than it is now.
The task was not simple, but my idea, after a lot of time thinking about
it, was to move most of the logic out of the url_rewrite interface;
there is a rough sketch of what I mean below.
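This is only an illustration, not squidGuard's code: it assumes the
Squid 3.4+ helper reply syntax (OK/ERR plus kv-pairs), and both the
check_url() lookup and the block-page URL are placeholders I made up.
The helper itself stays a thin stdin/stdout shim, and the real filtering
decision would be made somewhere else.

#!/usr/bin/env python3
# Minimal url_rewrite helper sketch (assumes the Squid 3.4+ reply syntax).
# The helper only speaks the stdin/stdout protocol; the actual filtering
# decision is delegated to an external lookup (check_url() is a placeholder).
import sys

BLOCK_PAGE = "http://blockpage.example.local/blocked"  # hypothetical URL

def check_url(url):
    # Placeholder: in a real setup this would ask a separate daemon or
    # database, so the heavy logic lives outside the helper process.
    return False

def main():
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        # With url_rewrite_children ... concurrency=N (N > 0) Squid sends a
        # channel-ID first and expects it echoed back; with concurrency=0,
        # as in your config, the URL is the first field.
        if fields[0].isdigit() and len(fields) > 1:
            channel, url = fields[0] + " ", fields[1]
        else:
            channel, url = "", fields[0]
        if check_url(url):
            # Redirect the client to the block page.
            sys.stdout.write(channel + "OK status=302 url=" + BLOCK_PAGE + "\n")
        else:
            sys.stdout.write(channel + "ERR\n")  # leave the URL untouched
        sys.stdout.flush()

if __name__ == "__main__":
    main()

As far as I understand, the point of such a thin shim is that the helper
process stays tiny while the heavy work runs elsewhere, and once
concurrency= is raised above 0 a single helper can have many requests in
flight, which is usually what keeps the number of children (and the RAM
they hold) down.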
I would not just try to solve the issue framed as "60 processes,
absolutely idle"; that might not be the issue at hand at all!
Such a conclusion can divert you from the real issue in this situation.
If you are up to sharing (publicly is not a must) a top snapshot of the
machine at midnight, while you see the issue, I will gladly be more than
happy to try to understand the issue at hand.
I am pretty sure the command "top -n1 -b" should work on any unix
system I have seen until today.
Do you have access to this Squid machine's cache manager interface?
Eliezer
On 12/02/2015 20:01, Yuri Voinov wrote:
Hi gents,
subj.
And, of course, a question: how to do that? I haven't seen this, if it
exists.
For example, for this config stub:
url_rewrite_program /usr/local/bin/squidGuard -c
/usr/local/squidGuard/squidGuard.conf
url_rewrite_children 100 startup=0 idle=1 concurrency=0
After daily activity, at midnight, nearly 60 processes still remain.
Absolutely idle.
So, why?
WBR, Yuri
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users