On Aug 26, 2009, at 11:56 AM, Bob McConnell wrote:
From: Philip Thompson
During a socket read, why would the full requested number of bytes not get returned? For example, I'm expecting 1000 bytes and read with:

<?php
$data = @socket_read($socket, 2048, PHP_BINARY_READ);
?>

This is actually in a loop, so I can get all the data if it's split up. So, for example, here's how the 1000 bytes came in over 3 iterations:

650 bytes
200 bytes
150 bytes

But if I can accept up to 2048 bytes per socket read, why would it not pull all 1000 bytes initially in one step? Any thoughts on this would be greatly appreciated!
Because that's the way TCP/IP works, by design. TCP is a stream protocol. It guarantees that all of the bytes written to one end of the pipe will come out the other end in the same order, but not necessarily in the same groupings. There are a number of buffers along the way that might split them up, as well as limits on packet sizes in the various networks the data passes through. So a read gives you whatever happens to be in the receive buffer at that moment, no more and no less.

If you have serialized data that needs to be grouped in specific blocks, your application will need to keep track of those blocks, reassembling or splitting the streamed data as necessary. You could use UDP, which does preserve message boundaries, but that protocol doesn't guarantee delivery.
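
As a rough sketch (untested, and assuming a framing scheme of your own on top of the stream, say a 4-byte big-endian length header in front of every block, which socket_read() itself knows nothing about), the reassembly usually looks something like this:

<?php
// Keep reading until exactly $bytesWanted bytes have accumulated.
function readExact($socket, $bytesWanted)
{
    $buffer = '';
    while (strlen($buffer) < $bytesWanted) {
        $chunk = socket_read($socket, $bytesWanted - strlen($buffer), PHP_BINARY_READ);
        if ($chunk === false || $chunk === '') {
            // socket error, or the peer closed before the block was complete
            return false;
        }
        $buffer .= $chunk;
    }
    return $buffer;
}

// One framed block: a 4-byte length header, then that many bytes of payload.
$header = readExact($socket, 4);
if ($header !== false) {
    $parts = unpack('Nlen', $header);
    $payload = readExact($socket, $parts['len']);
}
?>

The important part is the loop that keeps calling socket_read() until the block is complete, instead of assuming one call returns everything.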
Bob McConnell
Thank you for your input.
Is it guaranteed that each read will return at least 1 byte? For example, if I know the data length...
<?php
$input = '';
for ($i = 0; $i < $dataLength; $i++) {
    // Read 1 byte at a time
    if (($data = @socket_read($socket, 1, PHP_BINARY_READ)) !== false) {
        $input .= $data;
    }
}
return $input;
?>
Or is this a completely unreasonable and unnecessary way to get the data?
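
Or would something like this (untested) be more reasonable: the same $socket and $dataLength as above, but reading in larger chunks until everything has accumulated?

<?php
$input = '';
while (strlen($input) < $dataLength) {
    // Ask for whatever is still missing, up to 2048 bytes per read
    $chunk = @socket_read($socket, min(2048, $dataLength - strlen($input)), PHP_BINARY_READ);
    if ($chunk === false || $chunk === '') {
        // socket error, or the peer closed before all the data arrived
        break;
    }
    $input .= $chunk;
}
return $input;
?>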
Thanks,
~Philip