On 26.04.2016 at 18:12, Hanno Böck wrote:
> Thanks for both your answers, that was very helpful (although it
> probably means what I'm trying to do is more complicated than I
> thought)...
>
> One more question you might be able to answer:
> When I run my test code and connect to google.com I get the following
> bytes read for each BIO_read call:
> 1024
> 365
> 289
>
> When I run these against my own server (relatively standard
> apache2.4+openssl setup) I get very different numbers:
> 240
> 287
> 2
> 588
> 2
> 41
> 2
> 115
> 2
> 12
> 2
> 110
> 2
> 69
> 2
> 20
> 2
> 6
> 2
> 34
> 2
> 17
> 2
> 12
> 2
> 37
> 2
> 290
> 2
> 6
> 5
>
> Why is this so much more split up? And what do these BIO_read chunks
> correspond to on the protocol level? Are these TLS records? TCP
> packets? Is there something horribly wrong with my server config
> because it splits them up in so many small parts?

The second pattern looks like "Transfer-Encoding: chunked". In this
mode the response is sent in chunks, and each chunk is preceded by a
hex number giving the size of the chunk that follows; the size line
and the chunk data are each terminated by CRLF. The last chunk is
followed by a "0", indicating that no more chunks are expected. So the
"2" would be the size of the chunk-size line (two hex digits), with
the chunk itself coming next.

This kind of encoding is typically used for dynamic content, where the
final size of the response is not known in advance: it avoids having
to buffer the whole response before sending it, and no Content-Length
header is used. Another case is a transformation applied during
response delivery that changes the size in a way that is not easy to
calculate in advance, such as compression.

Since this is a bit of pattern guessing, you should verify it by
looking at the HTTP response headers. One could still ask whether it
is actually efficient to send the response in so many small parts, but
that is more a question for the sender.

Regards,

Rainer
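
P.S.: If it helps to see the framing, below is a minimal, standalone C
sketch of how a chunked body is laid out and parsed. The body in it is
invented for illustration (it does not reproduce your server's actual
output), and a real parser would also have to validate the CRLFs and
handle optional trailer headers after the last chunk:

#include <stdio.h>
#include <stdlib.h>

/* A made-up chunked body as it would appear on the wire. Each chunk is
 * "<size in hex>\r\n<data>\r\n"; a chunk of size 0 ends the body. */
static const char *wire =
    "0c\r\nHello, world\r\n"
    "03\r\n!!!\r\n"
    "0\r\n\r\n";

int main(void)
{
    const char *p = wire;
    for (;;) {
        char *end;
        long len = strtol(p, &end, 16); /* parse the hex chunk size */
        p = end + 2;                    /* skip the CRLF after the size */
        if (len == 0)                   /* "0" chunk: no more data */
            break;
        printf("chunk of %ld bytes: %.*s\n", len, (int)len, p);
        p += len + 2;                   /* skip the data and its CRLF */
    }
    return 0;
}

Depending on how the sender flushes its output, the size line, the
chunk data and the terminating CRLF may each end up in their own TLS
record, which would be one way to arrive at a long series of small
BIO_read results like the one you observed.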