On Mon, May 21, 2018 at 3:29 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
On 22/05/18 00:08, kAja Ziegler wrote:
> Hi,
>
> I want to ask whether it is really necessary to use ulimit or
> /etc/security/limits.conf to increase the max_filedescriptors value. From my
> testing, it seems not.
Sometimes yes, sometimes no. It depends on what the system's normal
settings are and whether the Squid binary was built with full or partial
rlimit() support.
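Two quick ways to see what your build actually supports (a sketch; the
log path assumes the CentOS default):

[root@...]# squid -v | tr ' ' '\n' | grep -i filedescriptors
[root@...]# grep 'file descriptors available' /var/log/squid/cache.log

The first shows whether the package was built with a compile-time
--with-filedescriptors ceiling; the second shows how many descriptors
the running Squid reported at startup.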
>
>
> *= my environment:*
>
> CentOS 6.9
> Squid 3.1.23 / 3.4.14
>
> *- default ulimits for root and other users:*
>
> [root@...]# ulimit -Sa | grep -- '-n'
> open files (-n) 1024
> [root@...]# ulimit -Ha | grep -- '-n'
> open files (-n) 4096
>
> *- default ulimits for squid user:*
>
> [root@...]# sudo -u squid /bin/bash
> bash-4.1$ id
> uid=23(squid) gid=23(squid) groups=23(squid),...
> bash-4.1$ ulimit -Sa | grep -- '-n'
> open files (-n) 1024
> bash-4.1$ ulimit -Ha | grep -- '-n'
> open files (-n) 4096
>
> *- processes:*
>
> [root@...]# ps aux | grep squid
> root 7194 0.0 0.1 73524 3492 ? Ss May17 0:00 squid
> -f /etc/squid/squid.conf
> squid 7197 0.2 10.9 276080 210156 ? S May17 4:53 (squid)
> -f /etc/squid/squid.conf
> squid 7198 0.0 0.0 20080 1084 ? S May17 0:00 (unlinkd)
>
> *- error and warning messages from cache.log:*
>
> client_side.cc(3070) okToAccept: WARNING! Your cache is running out of
> filedescriptors
>
> comm_open: socket failure: (24) Too many open files
>
> IpIntercept.cc(137) NetfilterInterception: NF
> getsockopt(SO_ORIGINAL_DST) failed on FD 68: (2) No such file or
> directory ... (many with different FD)
>
These should not be related to FD numbers running out. As you can see,
FD 68 was already allocated to this TCP connection and the socket
accept()'ed.
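For the FD shortage itself, the cache manager report is the quickest
check (assuming squidclient is installed and the manager is reachable
on localhost):

[root@...]# squidclient mgr:info | grep -i 'file desc'

That shows the maximum, available and largest-in-use descriptor counts
as the running Squid sees them.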
NAT errors are usually caused either by explicit-proxy traffic arriving
at a NAT interception port (such traffic is prohibited), or by the NAT
table overflowing under extreme traffic loads. Either way, current Squid
versions will terminate that connection immediately, since Squid cannot
identify where the packets were supposed to be going.
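To rule both causes out: keep explicit and intercepted traffic on
separate ports, and watch the conntrack table (the sysctl names below
are assumptions; they vary between kernels):

# squid.conf - never advertise the intercept port to browsers
http_port 3128
http_port 3129 intercept

[root@...]# sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max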
>
> I found many How-tos like these -
> https://access.redhat.com/solutions/63027 and
> https://www.cyberciti.biz/faq/squid-proxy-server-running-out-filedescriptors/
> Both how-tos mention editing /etc/security/limits.conf and adding the
> line "* - nofile 4096" to increase the nofile limit for all users except
> root - I don't like this. According to my tests (see below), this is not
> necessary, but I want to be sure, so I'm writing here.
Note that neither of those is the official Squid FAQ.
The official recommendation is to use those data sources to *check* what
the system limits are.
The official Best Practice varies depending on one's needs. Packagers
distributing Squid are advised to set reasonable limits in the init
script that starts Squid. End users are advised to use the configuration
file best suited to their needs (it MAY be limits.conf, but is usually
squid.conf).
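For example, all three places where such a limit can be set (values are
illustrative, not recommendations):

# squid.conf - the usual end-user approach
max_filedescriptors 8192

# /etc/security/limits.conf - scoped to the squid user rather than '*'
squid - nofile 8192

# init script - the packager approach, run just before starting squid
ulimit -n 8192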
>
> *a) Squid default configuration (max_filedesc 0) and default nofile
> limit (1024/4096):*
>
Do not set the limit to "0". That actually means *no* filedescriptors
for the newer Squid versions.
Remove the directive entirely from your squid.conf for the default
behaviour.
Also "max_filedescriptors" is teh directive name. "max_filedesc" was
only for the experimental RHEL patch many decades ago.
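To summarise, in a current squid.conf:

max_filedesc 8192          # old RHEL patch name - only works on patched builds
max_filedescriptors 0      # newer Squid: *no* filedescriptors
max_filedescriptors 8192   # the correct current form

or remove the directive entirely for the default behaviour.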
>
> *c) Squid configuration with max_filedesc 8192 and default nofile limit
> (1024/4096):*
>
> [root@...]# ps aux | grep squid
> root 18734 0.0 0.1 73524 3492 ? Ss 14:00 0:00 squid
> -f /etc/squid/squid.conf
> squid 18737 0.3 0.6 80244 11860 ? S 14:00 0:00 (squid)
> -f /etc/squid/squid.conf
> squid 18740 0.0 0.0 20080 1088 ? S 14:00 0:00 (unlinkd)
>
> [root@...]# grep -E "Limit|Max open files" /proc/18734/limits
> Limit Soft Limit Hard Limit Units
> Max open files 1024 4096 files
>
> [root@...]# grep -E "Limit|Max open files" /proc/18737/limits
> Limit Soft Limit Hard Limit Units
> Max open files *8192* *8192* files
>
> [root@...]# grep -E "Limit|Max open files" /proc/18740/limits
> Limit Soft Limit Hard Limit Units
> Max open files *8192* *8192* files
>
> - both the soft and hard nofile limits were increased for the processes
> running under the squid user
>
>
> I think that the limits could be increased in tests b) and c) because
> the master process runs under the root user. Am I right or not?
AFAIK, the hard limit can only be changed by root (or the ulimit tool
itself would not work). The soft limit can be changed by any user, to any
value up to the hard limit.
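A quick demonstration as an unprivileged user (a sketch; the exact error
text depends on the shell):

bash-4.1$ ulimit -Sn 4096    # raise the soft limit up to the hard limit: allowed
bash-4.1$ ulimit -Hn 8192    # raise the hard limit: root only
bash: ulimit: open files: cannot modify limit: Operation not permitted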
What you see in (c) is the master process changing the hard limit for
its spawned child processes so that they can use the value in squid.conf
without errors.
> Or need I to increase the limits for the master proccess too?
Not if Squid is correctly setting the limits for you. Doing that
automatically is one of the reasons the master process exists separately
from the workers. The init script's use of ulimit is a workaround for
builds where rlimit() support is lacking or broken.
Amos
Thank you, Amos, for your clarification and for confirming my
presumptions.
I am going to write a new email about the NAT errors.
> Do not set the limit to "0". That actually means *no* filedescriptors
> for the newer Squid versions.
> Remove the directive entirely from your squid.conf for the default
> behaviour.
Thank you for the warning; I was going by the Squid 3.1 documentation.
Also "max_filedescriptors" is teh directive name. "max_filedesc" was only for the experimental RHEL patch many decades ago.
Yep, I made a copy-and-paste error from the old RHEL 5.x page -
https://access.redhat.com/solutions/63027 .
I use "max_filedescriptors" in my configuration, of course.
Best regards
--
Karel Ziegler