It looks like I need to call init() after new():

    m_evpCtx = EVP_ENCODE_CTX_new();
    EVP_EncodeInit(m_evpCtx);
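For anyone who hits this later, a minimal end-to-end sketch of the 1.1.1 sequence as I understand it (the sample input and buffer sizing are my own illustration, not the original code):

    #include <openssl/evp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // In 1.1.1 the context is opaque and heap-allocated; new() only
        // zero-fills it, so EVP_EncodeInit() is still required afterwards.
        EVP_ENCODE_CTX *ctx = EVP_ENCODE_CTX_new();
        if (ctx == nullptr)
            return 1;
        EVP_EncodeInit(ctx);  // without this, ctx->length stays 0

        std::vector<unsigned char> in(90, 'A');              // sample input (illustrative)
        std::vector<unsigned char> out(2 * in.size() + 66);  // deliberately generous
        int outl = 0;
        if (EVP_EncodeUpdate(ctx, out.data(), &outl, in.data(), (int)in.size()) != 1) {
            EVP_ENCODE_CTX_free(ctx);
            return 1;
        }
        int finall = 0;
        EVP_EncodeFinal(ctx, out.data() + outl, &finall);  // flush the buffered tail
        EVP_ENCODE_CTX_free(ctx);                          // contexts must be freed in 1.1.1

        std::fwrite(out.data(), 1, (size_t)(outl + finall), stdout);
        return 0;
    }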
From: openssl-users <openssl-users-bounces@xxxxxxxxxxx> On Behalf Of Floodeenjr, Thomas

With the old init syntax in 1.0.2, EVP_EncodeInit(&m_evpCtx);, m_evpCtx->length is initialized to 48. With the new syntax in 1.1.1, m_evpCtx = EVP_ENCODE_CTX_new();, m_evpCtx->length is initialized to 0. Because the loop body subtracts ctx->length from inl, inl never decreases, so I believe the while loop runs until total passes INT_MAX, thus overrunning my buffer. Why does EVP_ENCODE_CTX_new() initialize length to 0? How do I fix this problem?

Thanks,
-Tom
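(My reading of why it runs away: a toy reduction of the loop quoted below, with a small cap standing in for INT_MAX so it terminates quickly; this is illustrative only, not OpenSSL code.)

    #include <cstdio>

    // Toy reduction of the runaway loop: with ctx->length left at 0, the
    // loop condition and the decrement both degenerate, so only the
    // total-vs-INT_MAX check can ever stop it.
    int main() {
        int length = 0;        // what EVP_ENCODE_CTX_new() leaves without init
        int inl = 100;         // some pretend input length
        int total = 0;
        const int cap = 1000;  // small stand-in for INT_MAX so the demo ends
        int passes = 0;
        while (inl >= length && total <= cap) {
            inl -= length;     // subtracting 0: inl never shrinks
            total += 1;        // each pass still writes a newline and advances
            ++passes;          //   'out' one byte past the caller's buffer
        }
        std::printf("%d passes, inl still %d\n", passes, inl);
        return 0;
    }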
From: openssl-users <openssl-users-bounces@xxxxxxxxxxx> On Behalf Of Floodeenjr, Thomas

Hello,

We are in the process of migrating from 1.0.2g to 1.1.1d. We adjusted to the changes, we think, and everything compiles. Many things also execute correctly. We are currently seeing a crash in EVP_EncodeUpdate() after we process most of our data, on the last line of the while loop (line 202, *out = '\0';):
    while (inl >= ctx->length && total <= INT_MAX) {
        j = evp_encodeblock_int(ctx, out, in, ctx->length);
        in += ctx->length;
        inl -= ctx->length;
        out += j;
        total += j;
        if ((ctx->flags & EVP_ENCODE_CTX_NO_NEWLINES) == 0) {
            *(out++) = '\n';
            total++;
        }
        *out = '\0';
    }

> ModuleName.dll!EVP_EncodeUpdate(evp_Encode_Ctx_st * ctx, unsigned char * out, int * outl, const unsigned char * in, int inl) Line 202 C

We call the function like this:

    EVP_EncodeUpdate(m_evpCtx, &vTmpOut[0], &nOutSize, &_vInData[0], (int)nInSize);

with these declarations:

    EVP_ENCODE_CTX *m_evpCtx;
    std::vector<unsigned char> vTmpOut;
    int nOutSize;
    std::vector<unsigned char> &_vInData;
I know that EVP_EncodeUpdate() is vastly different between 1.0.2 and 1.1.1. Is there a problem with calling the function this way? It has worked for many years using 1.0.1. Any insight is appreciated.

Thanks,
-Tom
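(For what it's worth, the call shape itself matches the 1.1.1 signature. Assuming the context has been through EVP_EncodeInit(), the other thing to verify is that vTmpOut is sized before the call; a sketch of one way to do that follows. The helper name and sizing formula are my own, not from the post.)

    #include <openssl/evp.h>
    #include <vector>

    // Hypothetical helper (names and sizing are mine): grow the output
    // buffer, then encode one chunk. Base64 emits 4 bytes per 3 input
    // bytes, a '\n' after each 48-byte input block (unless
    // EVP_ENCODE_CTX_NO_NEWLINES is set), and EVP_EncodeUpdate()
    // NUL-terminates what it writes.
    static int EncodeChunk(EVP_ENCODE_CTX *ctx,
                           std::vector<unsigned char> &vTmpOut,
                           const std::vector<unsigned char> &vInData) {
        size_t nInSize = vInData.size();
        vTmpOut.resize(((nInSize + 2) / 3) * 4   // encoded payload
                       + nInSize / 48 + 4);      // newlines, final '\n', NUL, slack
        int nOutSize = 0;
        // .data() is safe even for empty vectors, unlike &v[0]
        if (EVP_EncodeUpdate(ctx, vTmpOut.data(), &nOutSize,
                             vInData.data(), (int)nInSize) != 1)
            return -1;    // 1.1.1 returns 0 on overflow/error; 1.0.2 returned void
        return nOutSize;  // EVP_EncodeFinal() must still flush the buffered tail
    }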