Hi,

I think I know what the answer to this question is going to be, but I am hoping an answer from the experts on this list will resolve an argument.

Assume I have an application protocol running over TCP in which one host, let's call it the initiator, sends a message to its peer, and the peer, let's call it the responder, sends back an application-level acknowledgement for each original message. Each message sent by the initiator is 160 bytes long; that is the size of the application buffer passed to each write (e.g. send(sd, buf, 160, 0)). The socket has *not* been set to non-blocking. The initiator is structured so that it sends 400 of its 160-byte messages in rapid succession, but then sends nothing else on the socket until it has received application-level acknowledgements from the responder for all 400 messages. I do not want the initiator's calls to send() to block while the data is being sent to the responder.

The question: given that 400 * 160 = 64000, if I execute setsockopt() on the initiator to request a 64KB SO_SNDBUF on the socket, am I guaranteed that the initiator's calls to send() will always return as quickly as scheduling allows, regardless of how much data has actually been transmitted to the responder?

Experimentation seems to indicate that the answer is "no". The documentation I have seen suggests the reason may be that the send buffer is used by the kernel to store both application data and the internal data needed to maintain the send side of the TCP connection, so even though the kernel actually allocates 128KB in this case (Linux doubles the value set with setsockopt()), there is no guarantee that 64KB of it will be available for buffering application data. I am not sure whether that is the expected behavior, though.

This is running on Linux 2.6.14.

Thanks in advance,

Jeff Haran
Brocade
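P.S. In case it helps to see the shape of the send path, here is a minimal sketch of what the initiator does, simplified for illustration. The responder address 192.0.2.1:5000, the MSG_LEN/MSG_COUNT names, and the bare-bones error handling are placeholders for this example, not the real code.

/* Sketch of the initiator's send path (placeholder address/port, simplified
 * error handling). Requests a 64KB SO_SNDBUF, then bursts 400 x 160-byte
 * messages on a blocking socket. */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define MSG_LEN   160   /* size of each application message                 */
#define MSG_COUNT 400   /* messages sent before waiting for app-level acks  */

int main(void)
{
    int sd = socket(AF_INET, SOCK_STREAM, 0);
    if (sd < 0) {
        perror("socket");
        return 1;
    }

    /* Request a 64KB send buffer: 400 * 160 = 64000 bytes of payload.
     * Linux doubles the requested value to leave room for its own
     * bookkeeping, so getsockopt() below reports 128KB. */
    int sndbuf = 64 * 1024;
    if (setsockopt(sd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
        return 1;
    }

    int actual = 0;
    socklen_t optlen = sizeof(actual);
    if (getsockopt(sd, SOL_SOCKET, SO_SNDBUF, &actual, &optlen) == 0)
        printf("effective SO_SNDBUF: %d bytes\n", actual);

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                     /* placeholder port    */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);   /* placeholder address */
    if (connect(sd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        return 1;
    }

    /* Burst of 400 messages; the socket is blocking, so the question is
     * whether any of these send() calls can block even though the total
     * payload fits in the requested 64KB. */
    char buf[MSG_LEN];
    memset(buf, 'x', sizeof(buf));
    for (int i = 0; i < MSG_COUNT; i++) {
        ssize_t n = send(sd, buf, MSG_LEN, 0);
        if (n != MSG_LEN) {
            perror("send");
            break;
        }
    }

    /* ... initiator then reads 400 application-level acks before sending more ... */

    close(sd);
    return 0;
}

The getsockopt() call is only there to show where the 128KB figure above comes from: the kernel reports back double the value passed to setsockopt().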