
RE: NTLM auth popup boxes && Solaris 8 tuning for upgrade into 2.7.4

>
>hello all,
>
>I currently have some Sun V210 boxes running Solaris 8 with squid-2.6.12
>and samba 3.0.20b. I will upgrade these proxies to 2.7.4/3.0.32 next
>Monday, but before doing so I would like to ask for your advice and/or
>experience with tuning this kind of box.
>
>The service is running well today, except that we regularly get
>authentication popup boxes. This is really exasperating our users. I have
>already spent a lot of time on the net hoping to find a clear explanation
>for it, but I am still searching. I already configured 128 ntlm_auth
>processes to start on each of my servers. This gives better results, but
>the problem still remains. I also patched the new package I will deploy
>next week by overwriting some samba values .. below is my little patch ..
>
>--- samba-3.0.32.orig/source/include/local.h    2008-08-25 23:09:21.000000000 +0200
>+++ samba-3.0.32/source/include/local.h 2008-10-09 13:09:59.784144000 +0200
>@@ -222,7 +222,7 @@
> #define WINBIND_SERVER_MUTEX_WAIT_TIME (( ((NUM_CLI_AUTH_CONNECT_RETRIES) * ((CLI_AUTH_TIMEOUT)/1000)) + 5)*2)
>
> /* Max number of simultaneous winbindd socket connections. */
>-#define WINBINDD_MAX_SIMULTANEOUS_CLIENTS 200
>+#define WINBINDD_MAX_SIMULTANEOUS_CLIENTS 1024
>
> /* Buffer size to use when printing backtraces */
> #define BACKTRACE_STACK_SIZE 64
>
>I currently do not use 'auth_param ntlm keep_alive on' because I do not
>know whether it will cause side effects for the web browsers used in our
>company (IE / Windows XP SP2).
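>
>For reference, this is roughly how the NTLM helpers are declared in
>squid.conf (a sketch from memory; the ntlm_auth path and helper-protocol
>flag are examples and may differ on your installs):
>
>  auth_param ntlm program /usr/local/samba/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
>  auth_param ntlm children 128
>  # keep_alive deliberately left at its default for now
>  #auth_param ntlm keep_alive on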
>
>I already use some parameters today, like the ones below (a sketch of how
>the ndd ones are applied at boot follows the list) ...
>
>set shmsys:shminfo_shmseg=16
>set shmsys:shminfo_shmmni=32
>set shmsys:shminfo_shmmax=2097152
>set msgsys:msginfo_msgmni=40
>set msgsys:msginfo_msgmax=2048
>set msgsys:msginfo_msgmnb=8192
>set msgsys:msginfo_msgssz=64
>set msgsys:msginfo_msgtql=2048
>set rlim_fd_max=8192
>
>arp_cleanup_interval=60000
>ip_forward_directed_broadcasts=0
>ip_forward_src_routed=0
>ip6_forward_src_routed=0
>ip_ignore_redirect=1
>ip6_ignore_redirect=1
>ip_ire_flush_interval=60000
>ip_ire_arp_interval=60000
>ip_respond_to_address_mask_broadcast=0
>ip_respond_to_echo_broadcast=0
>ip6_respond_to_echo_multicast=0
>ip_respond_to_timestamp=0
>ip_respond_to_timestamp_broadcast=0
>ip_send_redirects=0
>ip6_send_redirects=0
>ip_strict_dst_multihoming=1
>ip6_strict_dst_multihoming=1
>ip_def_ttl=255
>tcp_conn_req_max_q0=4096
>tcp_conn_req_max_q=1024
>tcp_rev_src_routes=0
>tcp_extra_priv_ports_add="6112"
>udp_extra_priv_ports_add=""
>tcp_smallest_anon_port=32768
>tcp_largest_anon_port=65535
>udp_smallest_anon_port=32768
>udp_largest_anon_port=65535
>tcp_smallest_nonpriv_port=1024
>udp_smallest_nonpriv_port=1024
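>
>The shmsys/msgsys/rlim values above are /etc/system settings; the
>ip_*/tcp_*/udp_* ones are pushed in with ndd from a small boot script,
>roughly like this (a sketch only; the script name and location are just
>an example):
>
>  #!/sbin/sh
>  # /etc/init.d/nddconfig (example name) - apply IP/TCP/UDP tunings at boot
>  ndd -set /dev/ip  ip_forward_directed_broadcasts 0
>  ndd -set /dev/ip  ip_forward_src_routed 0
>  ndd -set /dev/ip  ip_strict_dst_multihoming 1
>  ndd -set /dev/tcp tcp_conn_req_max_q0 4096
>  ndd -set /dev/tcp tcp_conn_req_max_q 1024
>  ndd -set /dev/tcp tcp_smallest_anon_port 32768
>  ndd -set /dev/udp udp_smallest_anon_port 32768
>  # ... and so on for the rest of the list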
>
>After some investigation on my servers, I noticed we often have lots of
>connections in CLOSE_WAIT and FIN_WAIT_2 state, and also lots of
>connections in ESTABLISHED state. Looking at the squid statistics, here
>are some figures giving an idea of the load handled by our machines ..
>
>SUNW,Sun-Fire-V210
>Memory size: 2048 MB
>bge0 100-fdx (or) 1000-fdx
>client_http.requests = 242/sec
>server.http.requests = 163/sec
>Number of clients accessing cache: 1486
>cpu_usage = 45.065136%
>/dev/dsk/c0t0d0s5    20655529 15015444 5433530  74%  /var/cache0
>/dev/dsk/c0t1d0s5    20655529 14971972 5477002  74%  /var/cache1
>1746418 Store Entries
>about 1265 ESTABLISHED tcp connections (at high load)
>about 132 connections in CLOSE_WAIT or FIN_WAIT_2
>
>So these servers are relatively heavily loaded, and this is why I think I
>can still tune some tcp/udp values in order to optimize and reduce the
>cpu usage on my servers. I already found some ideas on the net, like the
>values below, but nothing guaranteed ..
>
>ndd -set /dev/tcp tcp_time_wait_interval 60000
>ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
>ndd -set /dev/tcp tcp_keepalive_interval 15000
>
>Many thanks for your help, because we are really in trouble and I am sure
>we can solve these little problems by setting/tuning some parameters.

I made some further investigations and may have found some relevant
issues ..

* first of all, it seems the tcp listen queues are not large enough: some
173201 connections have been dropped

  # netstat -sP tcp | fgrep -i listendrop
        tcpListenDrop       =173201     tcpListenDropQ0     =     0
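
  Before raising them, the current listen queue limits can be checked
  with ndd (these should come back as the 4096/1024 set above):

  # ndd -get /dev/tcp tcp_conn_req_max_q0
  # ndd -get /dev/tcp tcp_conn_req_max_q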

* it seems we do not have any link problems between our servers and the
l2 switches ... only 280 input errors over 583 days of uptime.

  # netstat -i
  Name  Mtu   Net/Dest       Address         Ipkts       Ierrs  Opkts       Oerrs  Collis  Queue
  lo0   8232  loopback       localhost       251726967   0      251726967   0      0       0
  bge0  1500  sbepskcv       sbepskcv        1607581016  280    1645158342  0      0       0
  bge1  1500  sbepskcv-bge1  sbepskcv-bge1   292025      0      3355944     0      0       0

* it seems we can tune the tcp connection timeouts a bit, because I see
  hundreds of connections in CLOSE_WAIT, FIN_WAIT_2 and TIME_WAIT state
  (a quick count per state is sketched below)
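
  A quick way to count connections per state (the awk field assumes the
  Solaris netstat layout, where the state is the last column):

  # netstat -an -f inet | \
      egrep 'ESTABLISHED|CLOSE_WAIT|FIN_WAIT_2|TIME_WAIT' | \
      awk '{print $NF}' | sort | uniq -c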

* this is a command I saw on the net, but to be honest I do not
  understand its output

  # netstat -k inode_cache
  inode_cache:
  size 157855 maxsize 128252 hits 573916370 misses 401386663
  kmem allocs 2786376 kmem frees 2626536
  maxsize reached 165359 puts at frontlist 286490557 puts at backlist 199176533
  queues to free 121260006 scans 1052691213 thread idles 301600489
  lookup idles 0 vget idles 0
  cache allocs 401386663 cache frees 404731519 pushes at close 0

* given these findings, I am thinking of setting the following values on
  my proxies ...

  ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
  ndd -set /dev/tcp tcp_conn_req_max_q0 8192
  ndd -set /dev/tcp tcp_conn_req_max_q 8192
  ndd -set /dev/tcp tcp_smallest_anon_port 1024
  ndd -set /dev/tcp tcp_slow_start_initial 2
  ndd -set /dev/tcp tcp_xmit_hiwat 65536
  ndd -set /dev/tcp tcp_recv_hiwat 65536
  ndd -set /dev/tcp tcp_time_wait_interval 60000

  I also see some recommendations about tuning the keepalive interval,
  but I have no idea what value to set for it ... maybe this

  ndd -set /dev/tcp tcp_keepalive_interval 300000
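
  The current value can be checked first with:

  # ndd -get /dev/tcp tcp_keepalive_interval

  (if I am not mistaken, the Solaris default is 7200000 ms, i.e. two hours)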

* and last but not least, I saw some recommendations for raising the
  ncsize parameter to around 8192. To be honest, I am a bit surprised by
  the result of this command on my machine ..

  # mdb -k
  Loading modules: [ unix krtld genunix ip usba random ptm ipc nfs ]
  > ncsize/D
  ncsize:
  ncsize:         128252
  >

  Why would I raise this to 8192 if it is already at around 128,000
  without any override ??
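
  As far as I understand it, ncsize is computed at boot from maxusers
  (which scales with physical memory), so on a 2 GB box the default is
  already far above 8192. Whether it needs touching at all can probably
  be judged from the DNLC hit rate, e.g.:

  # vmstat -s | grep 'name lookups'

  which prints the total name lookups together with the cache-hit
  percentage.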

* nothing found for the ntlm popup boxes, so I am sticking with my
  upgrade plan ..

Sorry to insist, but I would really appreciate your comments and
experience on this .. many thanks.

>
>vincent.



