I've noticed other stability issues with tcp_wmem. If I run a test that establishes 30k simultaneous TCP connections (all transferring at the same time), there are situations where the aggregate size of the TX queues chews up all of memory, causing the kernel to mass-kill processes. That is not supposed to happen, since the aggregate buffer space should be bounded by the bandwidth-delay product. I've seen this on 2.4.18 and 2.4.22, both vanilla and Fedora kernels, on a variety of machines. Dropping tcp_wmem below the default values would probably work around the problem.
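For reference, the workaround amounts to nothing more than writing smaller limits into the sysctl. The numbers below are purely illustrative, a sketch of the idea rather than tested recommendations; pick values to match your own bandwidth and RTT:

  # show the current per-socket send-buffer settings: min, default, max (bytes)
  cat /proc/sys/net/ipv4/tcp_wmem

  # write smaller limits back (illustrative values only)
  echo "4096 8192 65536" > /proc/sys/net/ipv4/tcp_wmem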
On Sat, 12 Jun 2004 23:46:52 +0100 (WEST), "Marcos D. Marado Torres" <marado@student.dei.uc.pt> wrote:

As we can read in Documentation/networking/ip-sysctl.txt, /proc/sys/net/ipv4/tcp_rmem is a vector of 3 integers (min, default and max) which defines the amount of memory reserved/allowed for read buffers for TCP sockets.
If we `echo "0 0 0" > /proc/sys/net/ipv4/tcp_rmem`, shouldn't the kernel protect against that?
The system administrator can shoot himself in the foot if he wants to.
Of course, this test is rather unrealistic: the server performs no admission control on the number of connections. So it's debatable whether the kernel needs to worry about this problem.
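A rough way to watch for this while such a test runs, assuming the usual /proc layout (note that the tcp_mem limits and the sockstat "mem" field are counted in pages, not bytes):

  # global TCP memory limits: low, pressure, high (in pages)
  cat /proc/sys/net/ipv4/tcp_mem

  # poll the kernel's socket memory accounting while the connections are up
  while true; do grep ^TCP: /proc/net/sockstat; sleep 1; done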
Alan