Dmitry Andrianov <dmitry.andrianov@xxxxxxxxxxx> wrote:
> I have a single rule in iptables:
>
> -A INPUT -p tcp -m tcp --dport 7777 -m conntrack --ctstate NEW -m connlimit
> --connlimit-above 2000 --connlimit-mask 0 -j REJECT --reject-with tcp-reset
>
> and a test server and client, both written in Ruby, that just set up loads
> of connections.
>
> The rule above works just fine when the client runs in a single thread,
> that is, all connections are established sequentially. The client stops
> when it gets an error attempting to establish another connection (but does
> not exit, to prevent the already established connections from being closed
> by the OS). netstat on the server after the client stopped:
>
> $ netstat -nat | grep :7777 | tr -s ' ' | cut -d' ' -f6 | sort | uniq -c
>    2000 ESTABLISHED
>       1 LISTEN
>
> However, when I use 2 threads in the client, the total number of
> successfully established connections is higher:
>
> $ netstat -nat | grep :7777 | tr -s ' ' | cut -d' ' -f6 | sort | uniq -c
>    2070 ESTABLISHED
>       1 LISTEN
>
> The more threads I add to the client, the bigger that extra, unplanned
> "allowance". 4 threads give about 2755 established connections, which is
> already far above the limit. With 5 threads, it successfully established
> 3 times the limit. These are connections from the same source IP to the
> same destination IP:port...

It would be interesting to run

  conntrack -L -p tcp --dport 7777 --state ESTABLISHED | wc -l

and see what the conntrack state actually is.

> It is iptables v1.4.21 / conntrack v1.4.1 on Ubuntu 14.04 running on an
> AWS m5.xlarge instance
>
> Linux ip-10-0-136-35 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 12:52:38
> UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

AFAICS 3.13 still has a single spinlock in xt_connlimit, so I have no
explanation why multiple threads would make a difference.

> So I guess my big question is - what is going on there? :) Is it some known
> issue that has already been addressed?

Not to my knowledge.
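
In case someone wants to try to reproduce this on a newer kernel: the
original client isn't shown, so the following is only a sketch of the kind
of multi-threaded Ruby client described above. The server address, port and
thread count are assumptions; adjust as needed.

  require 'socket'
  require 'thread'   # Queue needs this on older Rubies

  HOST    = '10.0.136.35'   # assumed: the test server's address
  PORT    = 7777
  THREADS = 4               # raising this should widen the overshoot

  conns = Queue.new         # thread-safe; also keeps the sockets referenced

  threads = Array.new(THREADS) do
    Thread.new do
      loop do
        begin
          conns << TCPSocket.new(HOST, PORT)
        rescue SystemCallError => e
          # ECONNREFUSED once the REJECT --reject-with tcp-reset rule fires
          puts "#{e.class}: #{e.message}"
          break
        end
      end
    end
  end

  threads.each(&:join)
  puts "established #{conns.size} connections"
  sleep   # stay alive so the OS does not close the established sockets

Each thread stops at its first failed connect, so comparing the final count
against the netstat and conntrack numbers should show whether the overshoot
happens during connection setup.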