Hi Michel,
Michel Santos wrote:
Tek Bahadur Limbu wrote in the last message:
How much memory does the server have installed?
Most of them have 1 GB of memory.
Well, I believe that is really too low for such a busy machine; you
should think of 4-8 GB (or more?) for such a server.
Well, I think you are right. My client base has been increasing, and so
has the number of requests. In fact, the number of connections per squid
proxy has more than doubled. I will perform the memory upgrade as soon
as possible.
What is your kern.maxdsiz value?
It's the default value of 512 MB. I guess I may have to increase it to,
say, 768 MB.
I can put the following value in /boot/loader.conf:
kern.maxdsiz=754974720
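As a quick sanity check on those byte values (a small shell sketch; the helper name mb_to_bytes is mine): 754974720 bytes actually works out to 720 MB, while a full 768 MB would be 805306368 bytes.

```shell
# Convert MB to the byte values loader.conf expects (illustrative helper).
mb_to_bytes() {
  echo $(( $1 * 1024 * 1024 ))
}

mb_to_bytes 720   # prints 754974720, the value quoted above
mb_to_bytes 768   # prints 805306368, a true 768 MB
```

So the value quoted above sets a 720 MB limit; for a real 768 MB limit it would be kern.maxdsiz=805306368.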
You can start here, but that is still too low. I set this to 4 or 6 GB,
but I have much more RAM than you in my servers.
OK, let me upgrade my memory before setting it to 2 GB or more.
For now I will set it to 768 MB, since I have only 1 GB of memory at the
moment.
How much memory is squid using just before it crashes? Is it using swap?
What does ipcs tell you then, or under load?
Squid could be using somewhere between 500 and 700 MB of memory before it
crashes.
What do you mean? "Could", nothing certain? What is your cache_mem setting?
Squid was using approximately 600 MB before crashing. The machine was
using almost 95% of its total memory.
My cache_mem values are between 32 and 64 MB.
It was not using swap.
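For context, cache_mem only bounds squid's hot-object memory cache; the process total also includes the cache index and in-transit objects, which is why 32-64 MB of cache_mem can still mean roughly 600 MB of process memory. A squid.conf sketch of the memory-related directives (values are illustrative, not taken from this thread):

```
# squid.conf sketch -- illustrative values only
cache_mem 64 MB                       # hot-object memory cache, not total RAM
maximum_object_size_in_memory 64 KB   # keep big objects out of cache_mem
cache_dir diskd /cache1 8192 16 256 Q1=64 Q2=72
```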
Sure not. If you have 1 GB of RAM and squid is allowed 512 MB, then squid
will crash as soon as the 512 MB you allow is used up, so there is no
chance of reaching swap either.
Set your maxdsiz to 1 or 2 GB and watch the magic happen.
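For when the RAM upgrade lands, a /boot/loader.conf sketch of the values being discussed (byte counts are N * 1024 * 1024 * 1024; pick one line):

```
# /boot/loader.conf -- illustrative maxdsiz values
kern.maxdsiz="1073741824"    # 1 GB data segment limit
#kern.maxdsiz="2147483648"   # 2 GB, once 4+ GB of RAM is installed
```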
I will do it but let me upgrade my memory first!
Currently, ipcs tells me:
That's not enough; show ipcs -a at least.
#ipcs -a
Message Queues:
T  ID     KEY      MODE         OWNER  GROUP  CREATOR  CGROUP  CBYTES  QNUM  QBYTES  LSPID  LRPID  STIME     RTIME     CTIME
q  65536  1174528  --rwa------  squid  squid  squid    squid   0       0     16384   1147   1149   23:22:51  23:22:51  17:26:23
q  65537  1174529  --rwa------  squid  squid  squid    squid   0       0     16384   1149   1147   23:22:51  23:22:51  17:26:23
q  65538  1174532  --rwa------  squid  squid  squid    squid   0       0     16384   1147   1150   23:22:51  23:22:51  17:26:23
q  65539  1174533  --rwa------  squid  squid  squid    squid   0       0     16384   1150   1147   23:22:51  23:22:51  17:26:23
Shared Memory:
T  ID     KEY      MODE         OWNER  GROUP  CREATOR  CGROUP  NATTCH  SEGSZ   CPID  LPID  ATIME     DTIME     CTIME
m  65536  1174530  --rw-------  squid  squid  squid    squid   2       380928  1147  1149  17:26:23  17:26:23  17:26:23
m  65537  1174534  --rw-------  squid  squid  squid    squid   2       380928  1147  1150  17:26:23  17:26:23  17:26:23
Semaphores:
T  ID  KEY  MODE  OWNER  GROUP  CREATOR  CGROUP  NSEMS  OTIME  CTIME
Most of them are Dell SC-420 machines:
CPU 2.80GHz (2793.09-MHz K8-class CPU)
Hyperthreading: 2 logical CPUs
OS: FreeBSD-6.0-6.1 (amd64).
6.2 is way better, and RELENG_6 is really stable. You could upgrade,
which should be possible with no downtime besides one reboot.
I also feel that 6.2 is better. I will most probably upgrade them one at
a time in a span of a few weeks.
By the way, do you have some optimal settings which can be applied to
diskd? Below are some values I use:
options SHMSEG=128
options SHMMNI=256
options SHMMAX=50331648   # max shared memory segment size (bytes)
options SHMALL=16384      # max amount of shared memory (pages)
options MSGMNB=16384      # max # of bytes in a queue
options MSGMNI=48         # number of message queue identifiers
options MSGSEG=768        # number of message segments
options MSGSSZ=64         # size of a message segment
options MSGTQL=4096       # max messages in system
Correct me where necessary.
That does not say much; better to send what you get from sysctl
kern.ipc.
#sysctl kern.ipc
You see? Your kernel options are not exactly what you get at runtime, right? ;)
Hmmm...
You mean set SHMMAXPGS using sysctl, or compile it in? Also, what is the
best value for SHMMAXPGS?
Yes, sysctl; they are runtime-tunable.
You must check with ipcs and set your system to what works well, without
using values that are too high.
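To make that concrete, a small sketch using the numbers from the ipcs -a output earlier in the thread (the arithmetic shows how little of the IPC limits diskd is actually consuming here; the sysctl lines are illustrative and commented out):

```shell
# Figures taken from the ipcs -a output above.
SEGSZ=380928    # SEGSZ of each diskd shared-memory segment
NSEGS=2         # shared-memory segments allocated
QBYTES=16384    # QBYTES per message queue
NQUEUES=4       # message queues allocated

echo "shared memory in use: $(( SEGSZ * NSEGS )) bytes"     # 761856
echo "queue space in use:   $(( QBYTES * NQUEUES )) bytes"  # 65536

# Runtime tunables could then be set with headroom, e.g.:
# sysctl kern.ipc.shmmax=50331648
# sysctl kern.ipc.shmall=16384
```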
Ok I will set it after studying it carefully.
Other values I saw are possibly not such good choices: somaxconn seems
way too high, and nmbclusters is 0?
Well, I will reduce somaxconn to 8192. The reason I set nmbclusters
to 0 is that, because of satellite link delays and a high number of TCP
connections, I run out of mbufs. They easily reach between 64000 and
128000, and sometimes even more. Every now and then I would lose TCP
connections due to the high number of mbufs in use. So I found this
little hack, which keeps mbuf utilization at bay.
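As a rough sketch of why those mbuf counts matter (assuming the standard 2048-byte cluster size on FreeBSD 6; the nmbclusters value below is illustrative):

```shell
CLUSTER_SIZE=2048   # bytes per mbuf cluster on FreeBSD 6 (assumed)

for n in 64000 128000; do
  echo "$n clusters = $(( n * CLUSTER_SIZE / 1024 / 1024 )) MB of kernel memory"
done

# Instead of nmbclusters=0, one could size the pool explicitly, e.g. put
#   kern.ipc.nmbclusters="131072"
# in /boot/loader.conf and watch usage with: netstat -m
```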
Maybe you should trust the FreeBSD auto-tuning: compile your kernel with
maxusers set to 0, restart without the sysctl values (but with maxdsiz at
1 GB or so), and see what happens.
I am using the default value of 384 for maxusers. I will definitely try
tweaking maxdsiz. I am really starting to feel that my busy squid
proxies are starving for memory rather than CPU speed!
By the way, thanks for all your help and suggestions. I appreciate it a lot.
I will definitely post back the results after upgrading my memory and
tweaking the necessary sysctl tunables.
Thanking you...
Michel
...
****************************************************
Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
****************************************************
--
With best regards and good wishes,
Yours sincerely,
Tek Bahadur Limbu
(TAG/TDG Group)
Jwl Systems Department
Worldlink Communications Pvt. Ltd.
Jawalakhel, Nepal
http://www.wlink.com.np