Re: OpenSSL hash memory leak

Hi,
This is how I have declared my variables:

EVP_MD_CTX *mdctx;
const EVP_MD *md;
int i;
HASH hash_data;
unsigned char message_data[BUFFER_SIZE];

BUFFER_SIZE is defined as 131072, and HASH is my own structure that holds the message digest, its length, and its type.
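
In case it is useful, HASH looks roughly like this (written from memory, so treat the exact buffer size as an assumption; the field names are the ones used in the code further down):

#include <openssl/evp.h>   /* for EVP_MAX_MD_SIZE */

/* Rough sketch of the HASH structure (assumed layout) */
typedef struct {
    unsigned char md_value[EVP_MAX_MD_SIZE];  /* digest bytes, filled by EVP_DigestFinal_ex() */
    unsigned int  md_len;                     /* digest length in bytes */
    int           md_type;                    /* digest NID, e.g. NID_sha256 */
} HASH;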

On Sat, 23 Feb 2019 at 00:17, Jordan Brown <openssl@xxxxxxxxxxxxxxxxxxxx> wrote:
The most obvious question is "how are you allocating your message_data buffer?".  You don't show that.

On 2/22/2019 2:27 AM, prithiraj das wrote:

Hi All,

Using OpenSSL 1.0.2g, I have written code to generate the hash of a file on an embedded Linux device with low memory; the files are generally 44 MB or larger. The first time, and on some occasions the second time, the hash is generated successfully. On the 3rd or 4th attempt (possibly due to lack of memory or a memory leak), the system reboots before the hash can be generated. After a restart, the same thing happens when the steps are repeated.
The stats below show the memory usage before and after computing the hash.

root@at91sam9m10g45ek:~# free
              total        used        free      shared  buff/cache   available
Mem:         252180       13272      223048         280       15860      230924
Swap:             0           0           0

After computing the hash:
root@at91sam9m10g45ek:~# free
              total        used        free      shared  buff/cache   available
Mem:         252180       13308      179308         280       59564      230868
Swap:             0           0           0

buff/cache increases by almost 44 MB every time I generate the hash (59564 - 15860 = 43704 KiB, roughly the file size), and free decreases by about the same amount. I believe the file is being loaded into the buffer cache and not being freed.

I am using the code below to compute the message digest. It is part of a function ComputeHash, and the file pointer here is fph.

EVP_add_digest(EVP_sha256());
md = EVP_get_digestbyname("sha256");

if (!md) {
    printf("Unknown message digest\n");
    exit(1);
}
printf("Message digest algorithm successfully loaded\n");

mdctx = EVP_MD_CTX_create();
EVP_DigestInit_ex(mdctx, md, NULL);

// Reading data into the array of unsigned chars
long long int bytes_read = 0;

printf("FILE size of the file to be hashed is %ld", filesize);

// Reading the image file in chunks; fph is the file pointer to the 44 MB file
while ((bytes_read = fread(message_data, 1, BUFFER_SIZE, fph)) != 0)
    EVP_DigestUpdate(mdctx, message_data, bytes_read);

EVP_DigestFinal_ex(mdctx, hash_data.md_value, &hash_data.md_len);
printf("\n%d\n", EVP_MD_CTX_size(mdctx));
printf("\n%d\n", EVP_MD_CTX_type(mdctx));
hash_data.md_type = EVP_MD_CTX_type(mdctx);
EVP_MD_CTX_destroy(mdctx);
//fclose(fp);

printf("Generated Digest is:\n ");
for (i = 0; i < hash_data.md_len; i++)
    printf("%02x", hash_data.md_value[i]);
printf("\n");

EVP_cleanup();
return hash_data;

In the code below, I have done fclose(fp):
verify_hash=ComputeHash(fp,size1);
fclose(fp);

I believe that, instead of loading the entire file at once, I am reading the 44 MB file in chunks and computing the hash using the piece of code below (fph is the file pointer):
while ((bytes_read = fread(message_data, 1, BUFFER_SIZE, fph)) != 0)
    EVP_DigestUpdate(mdctx, message_data, bytes_read);

Where am I going wrong? How can I free the buff/cache after computing the message digest? Please suggest ways to tackle this.
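
One idea I have not tried yet, so this is only a sketch (the helper below is hypothetical and not part of my current code): after the digest is computed, and before fclose(fp), advise the kernel that the file's cached pages are no longer needed using posix_fadvise() with POSIX_FADV_DONTNEED:

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <fcntl.h>   /* posix_fadvise(), POSIX_FADV_DONTNEED */

/* Hypothetical helper: ask the kernel to drop the page-cache entries for the
 * whole file (offset 0 and length 0 cover the entire file). */
static void drop_file_cache(FILE *fp)
{
    int fd = fileno(fp);
    if (fd != -1)
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
}

i.e. something like:

verify_hash = ComputeHash(fp, size1);
drop_file_cache(fp);   /* hypothetical call, untested */
fclose(fp);

Would something along these lines be a reasonable approach?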


Thanks and Regards,
Prithiraj


-- 
Jordan Brown, Oracle ZFS Storage Appliance, Oracle Solaris
