This may just be a stupid question, but I think it could be a bug. Someone non-authoritative told me it isn't, but I didn't understand their explanation, so I turn to you.

Let's say I have a socket with SO_RCVTIMEO set to 500ms. I call read. There is no data available to be read from the socket. Not at the time that I call read, or... *ever*. But the socket's not in an error condition or anything, there's just nothing to be read. Which behavior should I expect:

A) read blocks for slightly longer than 500ms, then gives up
B) read blocks for a time that may be significantly more or even significantly less than 500ms, then gives up
C) read does not have to block at all, the timeout is a lie

Most UNIXes I work with implement A. Empirically, I've determined Linux does B. My friend who tells me this isn't a bug believes the real answer is C. Who's right?

Also, if I want to make sure to try a read for at least 500ms, what is the right way to do it? Loops (something like the sketch at the end of this message)? Adding a magic number to the timeout?

Now that I've framed the philosophical issue, here is the code:
Attachment:
test.c
Description: Binary data
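
In case the attachment doesn't come through inline, here is roughly what test.c does. This is a paraphrased sketch with error handling stripped out, not the exact attached file: listen on the port given on the command line, accept one connection, set SO_RCVTIMEO to 500ms, then repeatedly time how long read actually blocks.

/* Sketch of the test, not the exact attached test.c. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(atoi(argv[1]));
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 1);

    int fd = accept(lfd, NULL, NULL);        /* the telnet connection */

    struct timeval timeout = { 0, 500000 };  /* 500ms */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof timeout);

    char buf[128];
    for (;;) {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        ssize_t result = read(fd, buf, sizeof buf);
        gettimeofday(&t1, NULL);

        double elapsed = (t1.tv_sec - t0.tv_sec)
                       + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("time: %f result: %zd\n", elapsed, result);
    }
}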
To run:

clang test.c
./a.out 5551 &
telnet localhost 5551

Results on UNIX:

time: 0.500479 result: -1
time: 0.501013 result: -1
time: 0.500908 result: -1
time: 0.500487 result: -1
time: 0.500328 result: -1
time: 0.501100 result: -1

Results on Linux:

time: 0.499556 result: -1
time: 0.501964 result: -1
time: 0.496663 result: -1
time: 0.502346 result: -1
time: 0.499642 result: -1
time: 0.497158 result: -1
time: 0.498826 result: -1
time: 0.501630 result: -1
time: 0.497540 result: -1
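
And for the "loop" workaround I mentioned above, what I have in mind is roughly this. It's only a sketch (it assumes fd is an already-connected stream socket, and error handling is minimal); the idea is to re-arm SO_RCVTIMEO with whatever time remains until a monotonic-clock deadline, so short returns just lead to another try:

/* Keep reading until data arrives or at least total_ms has elapsed. */
#include <errno.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

static ssize_t read_with_deadline(int fd, void *buf, size_t len, long total_ms)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);

    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000
                        + (now.tv_nsec - start.tv_nsec) / 1000000;
        long left_ms = total_ms - elapsed_ms;
        if (left_ms <= 0) {
            errno = EAGAIN;
            return -1;          /* deadline reached, give up */
        }

        /* Re-arm the receive timeout with the remaining time. */
        struct timeval tv = { left_ms / 1000, (left_ms % 1000) * 1000 };
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;           /* got data (or EOF) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;          /* real error */
        /* timed out early: loop and try again with the time left */
    }
}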