On 18/09/2016 6:38 a.m., Ahmed Alzaeem wrote:
> Thanks Amos for the clarification.
>
> I expanded the kernel to handle high traffic,
> but I believe the issue is still in squid ...
>
> Is there anything I can do to let squid expand beyond the 1024 sessions from the kernel?

It is not a limit on the number of "sessions". It is queues inside both
the kernel and Squid getting so full of actions that need to happen
that small ~50 byte packets travelling between worker processes
(running inside the same machine!) take over 6 whole seconds to arrive.

For processes on the same machine, 'whole seconds' is thousands if not
millions of times slower than it should be.

> You can imagine that I’m talking about the number of sockets, which seems limited to 1024.

The key word is "seems". To you it seems that way. In reality, the
number you are looking at is just a side effect of how fast your CPU
is. A faster or slower CPU will have a different number show up as the
limit, because it will be able to process more or fewer actions before
the delays grow longer than the timeout.

At around 1024 (or was it 1022? or 1028? or ...), the coordinator
process and kernel are doing so much work that the worker processes
start thinking they have been orphaned and self-terminate.

> Can I expand it from squid?
>
> Are any squid build/compilation settings needed?

You can edit src/ipc/Strand.cc and change the numeric parameter of the
line:

 setTimeout(6, "Ipc::Strand::timeoutHandler");

(a sketch of that edit, as a patch, follows below the sign-off). I
would advise against making the value very different from the existing
one, though.

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
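
For concreteness, a minimal sketch of that edit as a patch. Only the
file path and the setTimeout() call are taken from the message above;
the replacement value of 8 seconds is just an example of a modest
increase, and the exact hunk location is omitted rather than guessed:

--- a/src/ipc/Strand.cc
+++ b/src/ipc/Strand.cc
@@ ... @@
-    setTimeout(6, "Ipc::Strand::timeoutHandler");
+    setTimeout(8, "Ipc::Strand::timeoutHandler");

Because this is a change to C++ source rather than to squid.conf, Squid
has to be rebuilt and the workers restarted before the new timeout
takes effect. As noted above, a much larger value mostly hides the
underlying IPC overload rather than fixing it.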