
Re: NTLM Authenticator Statistics 3.3.5


 



Hey Kris,

Well, it's not such a small setup after all.
I do not know the size of these machines, but I assume they have more
than a single core, so they should work fine.
I am not sure about the next suggestion yet, since I do not know whether
the proxy is also used for caching or just for plain authentication.
I assume these machines can be configured for SMP or for multiple
instances on the same machine.
Since you already have the F5 in place, adding another so-called
"instance" of squid is only a matter of adding another LAN IP to the
squid machine and adding that IP to the F5.
That lets the traffic be balanced at the process level a bit more than
you might be doing now.
It's not rocket science, though a lot of information is still missing.
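
Just to illustrate what I mean (a rough sketch, not tested against your
setup - the IP address and file names below are made up), a second
instance can be little more than a copy of squid.conf bound to the extra
LAN IP, with its own pid and log files:

  # /etc/squid/squid-b.conf  (hypothetical second instance)
  http_port 192.0.2.2:8080         # the extra LAN IP you add to the F5 pool
  pid_filename /var/run/squid-b.pid
  access_log /var/log/squid-b/access.log
  cache_log /var/log/squid-b/cache.log

  # started alongside the first one with:
  #   squid -f /etc/squid/squid-b.conf

On 3.3 you could also look at the SMP "workers" directive instead of
separate instances, but that depends on the machine actually having
spare cores.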

A small question:
Does the mentioned problem occur throughout these 10 days, with the
service just recovering by itself each time, as the logs suggest?
Also, what is the network distance between the clients and the
datacenter? It is critical for smooth operation..
Note that each authentication adds some traffic, so keep_alive is better
left on to lower the network load it creates.
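
Something along these lines, for example (the numbers are just the ones
from my earlier mail quoted below; adjust them to your load):

  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 60 startup=30 idle=15
  auth_param ntlm keep_alive on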

Let's say the server gets 200 requests in one peak of load: that means
200 incoming FDs, then 200 stdin/stdout operations, 200 new connections
towards the auth server/service, and about 200 new outgoing connections
in the case of non-cached objects..
You can imagine the load on the servers when there are 3k requests per
minute..
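
A very rough back-of-envelope, using the 3000 requests/min figure from
your stats and assuming the worst case where every request triggers a
fresh NTLM exchange (keep_alive off):

  3000 req/min ~= 50 req/sec
  50 req/sec x (1 client FD + 1 helper stdin/stdout exchange + 1 upstream
  connection on a miss) ~= 150 descriptor/connection events per second,
  sustained, before counting the extra round trips of the handshake itself.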

Eliezer

On 09/30/2013 08:23 AM, Kris Glynn wrote:
> Hi Eliezer,
> 
> I am using 60 because it seemed to me that I needed that many. I am actually running 4 x squid 3.3.5 - two in each data center. Traffic is distributed by a browser PAC file, and the two proxies in each data center are load balanced by an F5 BIG-IP load balancer. The PAC file points at the 2 x F5 VIPs.
> 
> As for keepalive, there's no reason it is off; I will turn it on and see how it goes. Also, Kerberos isn't far off - it's implemented and tested running through the F5 load balancer, so I just have to enable it. My test environment is running squid 3.3.9 and Kerberos works well.
> 
> Each of the 4 proxies has been up for 10 days without a restart and averages around..
> 
> 3000 request/per min (/usr/bin/squidclient -p 8080 mgr:info | grep "HTTP requests per minute")
> 3500 clients accessing cache (/usr/bin/squidclient -p 8080 mgr:info | grep "Number of clients accessing cache")
> 2500 open files (/usr/bin/squidclient -p 8080 mgr:info | grep "Number of file desc currently in use")
> 600 usernames in NTLM username cache (/usr/bin/squidclient mgr:username_cache |grep AUTH | wc -l)
> 
> -----Original Message-----
> From: Eliezer Croitoru [mailto:eliezer@xxxxxxxxxxxx]
> Sent: Monday, 30 September 2013 2:40 PM
> To: squid-users@xxxxxxxxxxxxxxx
> Subject: Re:  NTLM Authenticator Statistics 3.3.5
> 
> Hey Kris,
> 
> I am just wondering why you need 60 children at all??
> I am not sure what the reason is for what you are seeing, but you need to make sure that all squid instances are off.
> If you can, test it: shut down the squid instance and all the subprocesses that were forked.. and then look at cache.log on a clean startup..
> it will give more info.
> I would ask "why not use keep_alive??" - it is there for a reason..
> If it's such a loaded system I would raise the startup from 15 to 30 and the idle to 15.. and would try keep_alive on.
> 
> If you want to be sure about ntlm_auth, you can add a debug flag, but it will probably flood the logs..
> 
> Is a Kerberos migration possible??
> Since this is the squid-2.5 compatible helper protocol, I assume it's not that simple?
> 
> Eliezer
> 
> On 09/30/2013 07:07 AM, Kris Glynn wrote:
>> Hi,
>>
>> I've noticed that after a while the number of /usr/bin/ntlm_auth processes in "shutting down" state tends to increase and they never actually shut down/decrease.
>>
>> It is configured like so..
>>
>> auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
>> auth_param ntlm children 60 startup=15 idle=10
>> auth_param ntlm keep_alive off
>>
>> I've found an occurrence where a squid -k rotate was performed
>> (it is performed daily via cron), helperOpenServers tried to start
>> processes, and it logged the below. When I logged into the squid
>> server, many old ntlm_auth processes were running (over 140 in
>> "shutting down" state).
>>
>> Is it normal for a squid -k rotate to spawn helpers? Should I be scheduling a squid restart every x days (and perhaps a killall -9 ntlm_auth at the same time), or does anyone have any suggestions as to why /usr/bin/ntlm_auth processes with flags "RS" increase over time even when squid is not restarted?
>>
>> 2013/09/24 00:00:23 kid1| storeDirWriteCleanLogs: Starting...
>> 2013/09/24 00:00:28 kid1|     65536 entries written so far.
>> 2013/09/24 00:00:35 kid1|    131072 entries written so far.
>> 2013/09/24 00:00:40 kid1|    196608 entries written so far.
>> 2013/09/24 00:00:45 kid1|    262144 entries written so far.
>> 2013/09/24 00:00:48 kid1|    327680 entries written so far.
>> 2013/09/24 00:00:51 kid1|    393216 entries written so far.
>> 2013/09/24 00:00:55 kid1|    458752 entries written so far.
>> 2013/09/24 00:00:59 kid1|    524288 entries written so far.
>> 2013/09/24 00:01:02 kid1|    589824 entries written so far.
>> 2013/09/24 00:01:05 kid1|    655360 entries written so far.
>> 2013/09/24 00:01:07 kid1|    720896 entries written so far.
>> 2013/09/24 00:01:08 kid1|   Finished.  Wrote 759594 entries.
>> 2013/09/24 00:01:08 kid1|   Took 44.19 seconds (17189.28 entries/sec).
>> 2013/09/24 00:01:08 kid1| logfileRotate: stdio://var/log/squid/access.log
>> 2013/09/24 00:01:08 kid1| Rotate log file stdio://var/log/squid/access.log
>> 2013/09/24 00:01:08 kid1| helperOpenServers: Starting 10/60 'ntlm_auth' processes
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| helperOpenServers: Starting 1/10 'ntlm_auth' processes
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>> 2013/09/24 00:01:08 kid1| WARNING: Cannot run '/usr/bin/ntlm_auth' process.
>> 2013/09/24 00:01:08 kid1| helperOpenServers: Starting 1/50 'ext_wbinfo_group_acl' processes
>> 2013/09/24 00:01:08 kid1| ipcCreate: fork: (12) Cannot allocate memory
>>
>> mgr:ntlmauthenticator
>>
>> NTLM Authenticator Statistics:
>> program: /usr/bin/ntlm_auth
>> number active: 40 of 60 (77 shutting down)
>> requests sent: 9021339
>> replies received: 9021339
>> queue length: 0
>> avg service time: 0 msec
>>
>>
>> Below is output from mgr:info at the same time the above mgr:ntlmauthenticator was run..
>>
>> Squid Object Cache: Version 3.3.5
>> Start Time:     Wed, 18 Sep 2013 04:48:06 GMT
>> Current Time:   Mon, 30 Sep 2013 03:50:02 GMT
>> Connection information for squid:
>>         Number of clients accessing cache:      3540
>>         Number of HTTP requests received:       47586765
>>         Number of ICP messages received:        0
>>         Number of ICP messages sent:    0
>>         Number of queued ICP replies:   0
>>         Number of HTCP messages received:       0
>>         Number of HTCP messages sent:   0
>>         Request failure ratio:   0.00
>>         Average HTTP requests per minute since start:   2763.2
>>         Average ICP messages per minute since start:    0.0
>>         Select loop called: 1816815750 times, 0.569 ms avg
>> Cache information for squid:
>>         Hits as % of all requests:      5min: 13.2%, 60min: 17.0%
>>         Hits as % of bytes sent:        5min: 5.2%, 60min: 10.5%
>>         Memory hits as % of hit requests:       5min: 19.3%, 60min: 21.6%
>>         Disk hits as % of hit requests: 5min: 15.3%, 60min: 12.4%
>>         Storage Swap size:      18430644 KB
>>         Storage Swap capacity:  90.0% used, 10.0% free
>>         Storage Mem size:       2028096 KB
>>         Storage Mem capacity:   99.0% used,  1.0% free
>>         Mean Object Size:       24.57 KB
>>         Requests given to unlinkd:      0
>> Median Service Times (seconds)  5 min    60 min:
>>         HTTP Requests (All):   0.01235  0.00919
>>         Cache Misses:          0.08265  0.14252
>>         Cache Hits:            0.00179  0.00091
>>         Near Hits:             0.01745  0.01469
>>         Not-Modified Replies:  0.00000  0.00000
>>         DNS Lookups:           0.00464  0.00464
>>         ICP Queries:           0.00000  0.00000
>> Resource usage for squid:
>>         UP Time:        1033315.041 seconds
>>         CPU Time:       78172.769 seconds
>>         CPU Usage:      7.57%
>>         CPU Usage, 5 minute avg:        19.42%
>>         CPU Usage, 60 minute avg:       18.82%
>>         Process Data Segment Size via sbrk(): 2977284 KB
>>         Maximum Resident Size: 11641712 KB
>>         Page faults with physical i/o: 28016
>> Memory usage for squid via mallinfo():
>>         Total space in arena:  -1216888 KB
>>         Ordinary blocks:       -1249480 KB 117072 blks
>>         Small blocks:               0 KB      0 blks
>>         Holding blocks:         14172 KB     11 blks
>>         Free Small blocks:          0 KB
>>         Free Ordinary blocks:   32592 KB
>>         Total in use:           32592 KB -3%
>>         Total free:             32592 KB -3%
>>         Total size:            -1202716 KB
>> Memory accounted for:
>>         Total accounted:       -1548505 KB 129%
>>         memPool accounted:     2645799 KB -220%
>>         memPool unaccounted:   -3848515 KB  -0%
>>         memPoolAlloc calls: 9094446713
>>         memPoolFree calls:  9173333887
>> File descriptor usage for squid:
>>         Maximum number of file descriptors:   16384
>>         Largest file desc currently in use:   3031
>>         Number of file desc currently in use: 2602
>>         Files queued for open:                   0
>>         Available number of file descriptors: 13782
>>         Reserved number of file descriptors:   100
>>         Store Disk files open:                   0
>> Internal Data Structures:
>>         751840 StoreEntries
>>         133109 StoreEntries with MemObjects
>>         133085 Hot Object Cache Items
>>         749978 on-disk objects




