(Posted a few days ago to c.os.l.networking; no replies there.)

I seem to be running into a limit of 64 queued datagrams. This isn't
a data buffer size: varying the size of the datagram makes no
difference in the observed queue size. If more datagrams are sent
before some are read, the excess are silently dropped. (By "silently"
I mean that tcpdump doesn't record them as dropped packets.) This
only happens when the sending and receiving processes are on
different machines, btw.

Can anyone tell me where this magic number 64 comes from, so I can
increase it? Python demo attached.

-Jonathan

# <receive udp requests>
# start this, then immediately start the other _on another machine_
import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 3001))
time.sleep(5)  # let the sender flood the queue before we start reading

while True:
    data, client_addr = sock.recvfrom(8192)
    print data

# <separate process to send stuff>
import socket

for i in range(200):
    # note: a fresh socket is created for each datagram
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto('a' * 100, 0, ('***other machine ip***', 3001))
    sock.close()
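
One guess I'd like to test (just an assumption on my part, not
something I've confirmed): the limit is really the socket receive
buffer, SO_RCVBUF, whose default comes from net.core.rmem_default,
with the kernel charging some fixed per-datagram overhead against it.
That would explain why the payload size makes no difference. Here's a
sketch of the receiver with an enlarged buffer; the 4 MB figure is
arbitrary:

# <experiment: is the limit really SO_RCVBUF?>
import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# request a much larger receive buffer before bind()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
# the kernel silently caps the request at net.core.rmem_max,
# so read back what was actually granted
print sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.bind(('', 3001))
time.sleep(5)

while True:
    data, client_addr = sock.recvfrom(8192)
    print data

If more datagrams queue up once a bigger buffer is granted, then the
"64" would just be the default buffer size divided by the per-packet
accounting overhead, not a hard packet count.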