Thanks, that makes sense. My ability to optimise is constrained: the system is a product, so I do not know what the actual pattern of usage will be. There is, however, a limit on buffer size within the system. It's a defined symbol, so it can be altered from the default of 32 KB, but only by recompiling the system; I rely on the working assumption that people who change definitions and recompile know what they're doing. The system is threaded, but it is designed to operate with a relatively small number of highly active threads, so grabbing 32 KB on the stack for a short period shouldn't be too much of an issue.

It would be much harder to figure out the actual message size, because the calls to SSL take place in a generic core whereas the protocol logic is in a different layer of code. There are ways it could be done, but I'm inclined to leave that for a future optimisation. That leaves me feeling that the fixed buffer on the stack is the cleanest solution: the code stays simple (a minimal sketch is at the end of this message). The copying overhead is there, but it looks hard to eliminate, and as you say there is plenty of other overhead. I'm not sure that the small initial buffer offers me much gain, although it might help in some situations. (Personally, I'm inclined to use SSH tunnels rather than SSL for SQL traffic, but that's another story!)

One remaining point leaves me uncertain. Suppose an SSL write gets the response SSL_ERROR_WANT_READ, and then a POLLIN event occurs. I take it the first thing that must happen is a retry of the write. Assuming that succeeds, do I need to assume that there could also be data to be read? Or will a further event occur, so that I should return to looking out for events? I guess the answer to that last question is probably no, but I'm unsure. (My current understanding is sketched in the second fragment below.)
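For concreteness, here is a minimal sketch of the read loop I have in mind, in C against the OpenSSL API. BUF_SIZE stands in for the system's defined symbol, and deliver_bytes() is a placeholder for handing data up to the protocol layer; neither name comes from the real code.

    #include <openssl/ssl.h>

    #ifndef BUF_SIZE
    #define BUF_SIZE (32 * 1024)   /* default; alterable only by recompiling */
    #endif

    void deliver_bytes(const char *p, int n);   /* placeholder: protocol layer */

    static int drain_ssl(SSL *ssl)
    {
        char buf[BUF_SIZE];    /* short-lived stack allocation */
        int n;

        /* Read until the connection would block; each chunk is copied
           out to the protocol layer, which is the copying overhead
           discussed above. */
        while ((n = SSL_read(ssl, buf, (int)sizeof buf)) > 0)
            deliver_bytes(buf, n);

        /* SSL_ERROR_WANT_READ here simply means: wait for the next POLLIN. */
        return SSL_get_error(ssl, n);
    }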
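And, purely to make the final question concrete, here is how I currently imagine the event handler. pending_buf and pending_len are illustrative names for a write parked on SSL_ERROR_WANT_READ, drain_ssl() is the read loop from the first sketch, and the last comment marks the point I'm unsure about.

    /* Illustrative state for a write that returned SSL_ERROR_WANT_READ. */
    static const char *pending_buf;
    static int pending_len;

    static int on_pollin(SSL *ssl)
    {
        if (pending_len > 0) {
            /* Retry the write first, with the same buffer and length as
               before (required by default, unless
               SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER is set). */
            int n = SSL_write(ssl, pending_buf, pending_len);
            if (n <= 0) {
                int err = SSL_get_error(ssl, n);
                if (err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
                    return 0;    /* still blocked; wait for the next event */
                return -1;       /* genuine error */
            }
            pending_len = 0;     /* write completed */
        }

        /* This is the point in question: should I attempt a read here in
           case application data arrived with the same event, or will a
           further POLLIN be delivered so I can just return to polling? */
        return drain_ssl(ssl);
    }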