On Wed, 9 Jun 2004, Feizhou wrote:
Is there any good reason not to load connection tracking?
SLOW. It isn't good enough to use on a high-traffic server.
Could you back your claims up with data?
What kind of data?
I can tell you what I observed.
I have two DNS cache boxes running dnscache, each a single PIII 800 CPU with 512MB of RAM.
On one box the command iptables -t nat -L -n was run, which caused ipt_conntrack to be loaded.
Instantly, queries to that box took over 200ms to return (cached entries), and sometimes timeouts even occurred, while the other box happily kept return times under 20ms for cached entries.
Both boxes run a RH 2.4.20-20 kernel with XFS patches applied.
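If you want to check whether the same thing has happened on your own box, a rough sketch (module names as on a stock 2.4 netfilter build; adjust for your kernel, and note rmmod only succeeds once nothing references the modules any more):

  # see whether connection tracking got pulled in, and how big the table is
  lsmod | grep conntrack
  wc -l /proc/net/ip_conntrack

  # to get rid of it again: empty the nat table first, then unload
  iptables -t nat -F
  iptables -t nat -X
  rmmod iptable_nat ip_conntrack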
When testing connection tracking, we could push two million concurrent connections through at a 200,000 pps rate while opening 20,000 new connections per second on a dual Xeon PC with a Serverworks chipset and Intel copper GE cards. The best results were achieved with a Linux 2.6.x kernel with the conntrack locking and TCP window tracking patches applied and NAPI enabled. I'd say that's not bad at all.
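For that kind of connection count the conntrack defaults are nowhere near enough; a rough sketch of the sizing involved (the numbers below are illustrative guesses, not the configuration used in that test):

  # hash size is set at module load time, the table limit via proc
  modprobe ip_conntrack hashsize=1048576
  echo 2097152 > /proc/sys/net/ipv4/ip_conntrack_max

  # watch how full the table actually gets under load
  wc -l /proc/net/ip_conntrack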
Which TCP window tracking patches? On my mail gateways, I ran 2.6.4 with the e100 driver and NAPI enabled, and that proved to be a disaster. I had to turn NAPI off and also muck around with:
  net.ipv4.tcp_max_syn_backlog = 2048
  net.ipv4.route.gc_thresh = 65536
to keep the box accessible. Otherwise, the kernel would spew "dst cache overflow" BUGTRAP errors, oops, or even print garbage.
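In case it helps anyone else hitting "dst cache overflow": applying those two settings on a running box and keeping them across reboots looks roughly like this (plain sysctl usage, the values are just what worked here):

  # apply immediately
  sysctl -w net.ipv4.tcp_max_syn_backlog=2048
  sysctl -w net.ipv4.route.gc_thresh=65536

  # to make them persistent, add the same two lines to /etc/sysctl.conf
  # and reload with
  sysctl -p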