
Re: Re: About bottlenecks (Max number of connections, etc.)

On 25/02/2013 1:24 p.m., Manuel wrote:
I have noticed that it always starts to fail when there are exactly 3276 file descriptors available and 13108 file descriptors in use. That is almost exactly 20% free file descriptors. It still looks to me as if the problem is not having enough file descriptors (simply because of the --with-maxfd=16384 option used when Squid was installed), but I wonder whether it is normal for it to always stick at that number (rather than at something much closer to 0 available file descriptors and 16384 in use). If file descriptors are the problem, I also wonder why I am not getting any error in the logs, whereas in the past I did get the "Your cache is running out of filedescriptors" error. Any ideas?

This was the activity at two different moments and even on different servers
(if I am not mistaken); as you can see, it got stuck at the same number:
Server 1:
	Maximum number of file descriptors:   16384
	Largest file desc currently in use:   13125
	Number of file desc currently in use: 13108
	Files queued for open:                   0
	Available number of file descriptors: 3276
	Reserved number of file descriptors:   100
	Store Disk files open:                  73
	IO loop method:                     epoll

Server 2:
	Maximum number of file descriptors:   16384
	Largest file desc currently in use:   13238
	Number of file desc currently in use: 13108
	Files queued for open:                   0
	Available number of file descriptors: 3276
	Reserved number of file descriptors:   100
	Store Disk files open:                 275
	IO loop method:                     epoll

You say that on your slow server you are able to achieve twice the req/sec of
your fastest one, but that in both cases active connections peak at around
20k, is that true?

Unknown.

  How many file descriptors do you reach at that
point? 20000?

I don't know, sorry. Efforts were concentrated on profiling processing speed rather than on socket counts.

Most of the developer effort on Squid over the last few years has been focused the same way: on processing speed and on reducing the need for more TCP connections, rather than on maximising the connection count. FD issues that are not caused by ulimit, SELinux or --with-filedescriptors are fairly rare. Thinking a long way back, I'm reminded that one client had issues with a TCP stack "optimization" limiting the ephemeral ports, and needed to raise the range to get the full 64K ports of usage. It might be related to that, or it might be some limit lower than 13108 on disk FDs or network FDs. The 13108 is a combined count of client FDs + server FDs + disk FDs + internal kernel I/O pipe FDs.
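As a rough way to compare those limits against what Squid reports, something like the sketch below works on Linux. It only reads the standard rlimit and /proc interfaces (nothing Squid-specific), so treat the output as a starting point rather than a diagnosis; run it in the same environment Squid is started from, since rlimits are inherited per process.

    #!/usr/bin/env python3
    # Sketch: show the per-process FD limit, the system-wide FD limit and the
    # ephemeral port range, to compare against cachemgr "info" output.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"per-process FD limit (ulimit -n): soft={soft} hard={hard}")

    with open("/proc/sys/fs/file-max") as f:
        print("system-wide FD limit:", f.read().strip())

    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
        print(f"ephemeral port range: {low}-{high} "
              f"({high - low + 1} usable ephemeral ports)")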

I agree that it is pretty strange for one specific number to be cropping up constantly like that.
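For what it's worth, the figures quoted above do add up exactly: in-use plus available equals the configured maximum, and the stuck value sits at almost exactly 80% of it, which is at least consistent with some lower limit being hit rather than the 16384 itself. A quick check of the arithmetic:

    # The numbers reported by both servers above.
    max_fd    = 16384
    in_use    = 13108
    available = 3276

    assert in_use + available == max_fd           # 13108 + 3276 == 16384
    print(f"in use:    {in_use / max_fd:.1%}")    # ~80.0% of the maximum
    print(f"available: {available / max_fd:.1%}") # ~20.0% of the maximum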


  Are those machines also different in RAM? How important is the RAM
difference for the performance of Squid? Given the bottlenecks you mentioned,
I wonder whether, from 2 GB onwards, the rest of the RAM is useless for Squid
or not.

I don't think so. Squid and modern hardware should be capable of using many more GB than that. That calculation is just something to keep in mind when running a lot of connections, so that you don't limit Squid's peak loads by allocating too many GB to other things such as cache_mem.
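As a rough illustration of the kind of budgeting meant here (the per-connection buffer figure and the server size below are only assumed round numbers for the sketch, not measured Squid values):

    # Hypothetical numbers: estimate RAM needed for connection buffers before
    # deciding how much of the remainder to hand to cache_mem.
    per_conn_kb      = 64       # assumed I/O buffer overhead per connection
    peak_connections = 20_000   # the ~20k active connections mentioned above
    total_ram_gb     = 8        # example server size

    conn_ram_gb = peak_connections * per_conn_kb / (1024 * 1024)
    print(f"connection buffers: ~{conn_ram_gb:.2f} GB")
    print(f"left for cache_mem, index and OS: ~{total_ram_gb - conn_ram_gb:.2f} GB")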

Amos

