If I:

a) - enable TCP keepalive on the server
   - lower the default keepalive parameters
   - establish a connection from a client
   - physically disconnect the server's network
   - monitor the client socket's read status with select()

then select() does indeed notice that the client socket is dead, within
an interval consistent with the keepalive parameters.

However, if I:

b) - enable TCP keepalive on the server
   - lower the default keepalive parameters
   - establish a connection from a client
   - physically disconnect the server's network
   - periodically send data to the disconnected client socket   <-- NEW
   - monitor the client socket's read status with select()

then select() never sees that the client socket is dead.

In other words, writing to a socket whose network has been physically
disconnected appears to interfere with the keepalive algorithm. I would
have expected the keepalive inactivity timer to be reset only when
traffic is actually received from the remote socket, not merely when
data is sent toward it.

Is this the intended behavior of the TCP keepalive algorithm, or a bug?

[kernel = 2.4.27 on i686]

TIA,
Jeff
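
P.S. In case it helps, here is a rough sketch of how the keepalive is
enabled and lowered on the accepted server-side socket. The helper name
and the timer values below are just placeholders for this message, not
my exact settings:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* fd is the server's accepted socket for the client connection.
 * Enable keepalive and shrink the per-socket timers so a dead peer
 * should be detected in tens of seconds instead of hours. */
static int enable_keepalive(int fd)
{
    int on    = 1;
    int idle  = 10;  /* seconds of idle time before the first probe */
    int intvl = 5;   /* seconds between unanswered probes */
    int cnt   = 3;   /* unanswered probes before the socket is killed */

    if (setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof(on))    < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle))  < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt))   < 0)
        return -1;

    return 0;
}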
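
And a sketch of the monitoring loop (again, names are placeholders).
With send_data set to 0 it corresponds to case (a) above and select()
eventually reports the dead socket; with send_data set to 1 it adds the
periodic write of case (b), and the socket never shows up as dead:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Watch the client socket for readability.  When send_data is nonzero,
 * also write a small payload to the peer on every select() timeout --
 * the only difference between case (a) and case (b). */
static void monitor_client(int fd, int send_data)
{
    for (;;) {
        fd_set rfds;
        struct timeval tv = { 1, 0 };   /* 1-second poll interval */
        char buf[128];
        int rc;

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        rc = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (rc < 0) {
            perror("select");
            return;
        }
        if (rc > 0 && FD_ISSET(fd, &rfds)) {
            /* A keepalive-killed connection becomes readable and
             * read() then returns 0 or -1 (e.g. ETIMEDOUT). */
            rc = read(fd, buf, sizeof(buf));
            if (rc <= 0) {
                printf("client socket is dead (read returned %d)\n", rc);
                return;
            }
        } else if (send_data) {
            /* Case (b): keep pushing data at the disconnected client. */
            if (write(fd, "ping", 4) < 0)
                perror("write");
        }
    }
}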