Jonathan Ellis wrote:
I am running into a limit of 64 queued datagrams. This isn't a byte limit
on buffered data: varying the size of the datagrams makes no difference
to the observed queue length. If more datagrams arrive before some are
read, the excess are silently dropped. (By "silently" I mean that tcpdump
doesn't record them as dropped packets.) This is a problem because, while
my consumer can easily handle the overall load, the requests often arrive
in large bursts.
This only happens when the sending and receiving processes are on
different machines. Running on the same machine yields a higher queue
length (even if you avoid loopback).
Can anyone tell me where this magic number of 64 comes from, so I can
increase it?
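A minimal receiver along these lines reproduces it; the port, the pause,
and the burst size are all arbitrary. Let the datagrams pile up in the
kernel before draining, then count the survivors:

    /* burst receiver sketch: let datagrams queue in the kernel,
     * then drain and count.  Error handling mostly omitted. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr;
        char buf[2048];
        int n = 0;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9999);            /* arbitrary port */
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));

        sleep(5);   /* blast, say, 200 small datagrams at this port
                     * from another machine during this pause */

        while (recv(fd, buf, sizeof(buf), MSG_DONTWAIT) >= 0)
            n++;    /* stops on EAGAIN once the queue is empty */
        printf("received %d datagrams\n", n);   /* tops out at 64 */
        return 0;
    }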
It looks like (perhaps obviously, given how hard the network code tries
to avoid copies) when a datagram is received over Ethernet, the entire
payload area of the frame is charged against the socket receive buffer
rather than just the bytes that arrived, so even a small datagram takes
up as much buffer space as a large one. Presumably the magic 64 is just
the default receive buffer size divided by that per-frame allocation,
not a datagram count at all.
So what looks like a queue-length limit here is really a receive buffer
limit; there is no queue limit per se. Increasing net.core.rmem_max and
then setting SO_RCVBUF on the receiving socket is thus the solution.
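For the archives, the fix looks roughly like this (a sketch; the 4 MB
figure is arbitrary). First raise the system-wide cap on the receiving
machine:

    # /proc/sys/net/core/rmem_max caps what SO_RCVBUF may request
    sysctl -w net.core.rmem_max=4194304

then ask for the bigger buffer in the receiver, before the traffic starts:

    int rcvbuf = 4 * 1024 * 1024;               /* 4 MB, arbitrary */
    socklen_t optlen = sizeof(rcvbuf);

    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    /* read back what the kernel actually granted -- it silently
     * caps the request at rmem_max (and reports a value doubled
     * to cover its own bookkeeping overhead) */
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &optlen) == 0)
        printf("SO_RCVBUF is now %d bytes\n", rcvbuf);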
Thanks to Jeff Haran for his help off-list.
-Jonathan