Observed performance with netfilter and ip_queue in the wild?

I'm in the process of trying to spec hardware for a very high-volume environment and trying to get a grasp of what kind of performance I can expect. My apologies if this is not the correct list for this sort of query; any anecdotal information would be appreciated.

Here's what I'm working with:
* Building boxes to sit at the edge of the network and handle all traffic between the outside world (Internet) and the downstream servers. We're using IP forwarding (no NAT).

* CentOS 4 or 5 (whichever is recommended) with latest available updates.

* Dell 1950 machines with built-in BCM5708-based NICs using the bnx2 driver.

* We're probably going to use an Intel Pro/1000 MT dual-port PCI-X card based on the 82545 chipset instead of the Broadcom, because we've typically seen better performance and reliability from Intel Ethernet devices--looking for other opinions on this.

* Netfilter enabled with ip_conntrack--ip_conntrack_netbios_ns (which we could remove if it impacts performance), xt_state (what's this?)--and ip_queue.

* A userland daemon hooking ip_queue to do a DNS lookup on each SYN; it returns either ACCEPT or DROP to the kernel. The daemon is multi-threaded so that the DNS queries aren't serialized. (A stripped-down sketch of the verdict loop is below, after this list.)

* Expected load across the environment will be roughly 120,000,000 incoming TCP connection attempts per day (about 1,400 per second on average), bursting up to 250,000,000 (about 2,900 per second) on some days.

* These TCP connections are sending data to us. Outgoing TCP connections and outgoing data are expected to be very low in comparison to the inbound data.

* Even with caching, there may be roughly one outgoing DNS query per incoming connection--these should be (nearly) all UDP.

* Roughly 80% of the connection attempts will probably be intentionally DROPped. The remaining 20% will probably last less than a second on average, a few seconds at most. There should be roughly 10KB of data transferred per connection, up to a few MB in rare cases.
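In case it helps frame the question, the daemon's ip_queue loop looks roughly like the sketch below, stripped down to a single thread with the DNS check replaced by a stub (the real check is threaded, as mentioned above). It assumes a rule along the lines of "iptables -A FORWARD -p tcp --syn -j QUEUE" so that only SYNs ever reach userspace:

/* Simplified ip_queue verdict loop (libipq).  dns_says_accept() is a
 * stand-in for the real, threaded DNS check. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/ip.h>
#include <libipq.h>
#include <linux/netfilter.h>            /* NF_ACCEPT, NF_DROP */

#define BUFSIZE 2048

static int dns_says_accept(struct iphdr *iph)
{
    (void)iph;
    return 0;                           /* stub: always DROP */
}

int main(void)
{
    unsigned char buf[BUFSIZE];
    struct ipq_handle *h;
    ipq_packet_msg_t *m;
    struct iphdr *iph;
    unsigned int verdict;

    h = ipq_create_handle(0, PF_INET);
    if (!h) { ipq_perror("create"); exit(1); }

    /* Only the headers are needed, so copy just the start of each packet. */
    if (ipq_set_mode(h, IPQ_COPY_PACKET, BUFSIZE) < 0) {
        ipq_perror("set_mode"); exit(1);
    }

    for (;;) {
        if (ipq_read(h, buf, sizeof(buf), 0) < 0) {
            ipq_perror("read");
            break;
        }
        if (ipq_message_type(buf) != IPQM_PACKET)
            continue;

        m = ipq_get_packet(buf);
        iph = (struct iphdr *)m->payload;

        verdict = dns_says_accept(iph) ? NF_ACCEPT : NF_DROP;
        ipq_set_verdict(h, m->packet_id, verdict, 0, NULL);
    }

    ipq_destroy_handle(h);
    return 0;
}

(That links against -lipq from iptables-devel, if I remember right.)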

I'm trying to get an idea of roughly how much traffic we can reasonably expect one box to handle: how many SYNs we could expect a machine to inspect per hour, how many simultaneous TCP conntrack states we could expect it to track while still passing traffic, etc.

Any input based on high-volume environments is very welcome.

PS: For bonus points, could anyone offer direction on how to send a TCP RST or ICMP administratively-prohibited from userland? I'd prefer to STEAL the packet and send an active rejection rather than DROP it on the floor.
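What I have in mind for the active rejection is roughly the sketch below (completely untested). I realize the REJECT target can do this in the kernel with --reject-with tcp-reset or icmp-admin-prohibited, but here the decision comes from the daemon, so the idea is to return NF_DROP for the SYN and then send the RST ourselves from a raw socket. Since this box only forwards--the destination address belongs to a downstream server--the RST has to be sourced from that server's address, hence IP_HDRINCL and a hand-built IP header. Is this sane, or is there a better way?

/* Untested sketch: answer a refused SYN with a TCP RST from userland.
 * cli_*: the client that sent the SYN; srv_*: the downstream server it
 * was aimed at.  Addresses and ports in network byte order, cli_seq in
 * host byte order.  Needs root / CAP_NET_RAW. */
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>

/* One's-complement sum used by both the IP and TCP checksums. */
static u_int16_t csum(const void *data, int nbytes)
{
    const u_int16_t *p = data;
    u_int32_t sum = 0;

    while (nbytes > 1) { sum += *p++; nbytes -= 2; }
    if (nbytes)
        sum += *(const u_int8_t *)p;
    sum = (sum >> 16) + (sum & 0xffff);
    sum += (sum >> 16);
    return (u_int16_t)~sum;
}

int send_rst(u_int32_t cli_ip, u_int16_t cli_port,
             u_int32_t srv_ip, u_int16_t srv_port, u_int32_t cli_seq)
{
    struct { struct iphdr ip; struct tcphdr th; } pkt;
    struct {                             /* TCP pseudo-header for checksum */
        u_int32_t src, dst;
        u_int8_t  zero, proto;
        u_int16_t len;
        struct tcphdr th;
    } ph;
    struct sockaddr_in to;
    int s, one = 1, rc;

    memset(&pkt, 0, sizeof(pkt));

    /* IP header: the RST appears to come from the server the client hit. */
    pkt.ip.version  = 4;
    pkt.ip.ihl      = 5;
    pkt.ip.tot_len  = htons(sizeof(pkt));
    pkt.ip.ttl      = 64;
    pkt.ip.protocol = IPPROTO_TCP;
    pkt.ip.saddr    = srv_ip;
    pkt.ip.daddr    = cli_ip;
    pkt.ip.check    = csum(&pkt.ip, sizeof(pkt.ip));

    /* TCP header: RST/ACK acking the SYN (a SYN consumes one seq number). */
    pkt.th.source  = srv_port;
    pkt.th.dest    = cli_port;
    pkt.th.seq     = 0;
    pkt.th.ack_seq = htonl(cli_seq + 1);
    pkt.th.doff    = sizeof(pkt.th) / 4;
    pkt.th.rst     = 1;
    pkt.th.ack     = 1;

    /* TCP checksum covers a pseudo-header of the IP addresses. */
    memset(&ph, 0, sizeof(ph));
    ph.src   = srv_ip;
    ph.dst   = cli_ip;
    ph.proto = IPPROTO_TCP;
    ph.len   = htons(sizeof(pkt.th));
    ph.th    = pkt.th;
    pkt.th.check = csum(&ph, sizeof(ph));

    s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    if (s < 0)
        return -1;
    setsockopt(s, IPPROTO_IP, IP_HDRINCL, &one, sizeof(one));

    memset(&to, 0, sizeof(to));
    to.sin_family      = AF_INET;
    to.sin_addr.s_addr = cli_ip;

    rc = sendto(s, &pkt, sizeof(pkt), 0, (struct sockaddr *)&to, sizeof(to));
    close(s);
    return rc < 0 ? -1 : 0;
}

For the ICMP administratively-prohibited case I assume the same approach works, building a type 3 / code 13 ICMP packet the same way.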

Thanks!

--
bk



