Hi all,

can anybody explain how the memory pressure code in tcp.c is supposed to work? On two boxes with 512MB and 1GB of RAM, I see:

512MB#  cat /proc/sys/net/ipv4/tcp_mem
15360   15872   16384
1024MB# cat /proc/sys/net/ipv4/tcp_mem
31744   32256   32768

At the beginning of tcp_mem_schedule(), the third value is used as a hard high water mark, compared against the total of all socket buffer memory allocated, counted in bytes. Right? What is the rationale for that - or is it incorrect in the implementation, or in my understanding?

The reason I'm looking into that code is this: I regularly do stress testing, using a homegrown minimal HTTP server, which alternately serves 300-byte, 14kB, and 48kB web pages (all from RAM). The server is a single-process poll()-based thing (dual process, balanced on SMP), and normally, for each individual connection, I have a single sequence of accept()/read()/write()/close() [note the single write].

Now, with test7 the syscall sequence is the same, but all writes seem to block until the last segment has been shoved out. Formerly (e.g. with test1) the write just copied the data down into the socket send buffer and returned within microseconds. Now the write takes 15ms (for the 14kB requests) and 50ms (for the 48kB requests) - all the while blocking the operation of the nice big poll loop in the server.

I'll play with modifying tcp_mem - is that the right way, should this be fixed at source - or, alternatively, where is my big misunderstanding?

best regards
Patrick
-
: send the line "unsubscribe linux-net" in the body of a message to majordomo@vger.kernel.org