Quoting Jorgen Lundman <lundman@xxxxxxxxxxx>:
Apparently this patch only fixed my debug printk loop, which used sg_next
from the scatterlist API instead of scatterwalk_sg_next from the scatterwalk
API. Sorry for the noise.
Thanks for looking at this. I think I am dealing with two problems. One is
that occasionally my buffers come from vmalloc and need some logic using
vmalloc_to_page(). But I don't know whether ciphers should handle that
internally; blkcipher.c certainly seems to have several modes, although I
do not see how to *set* them.
From what I have now researched, you must not pass vmalloc'd memory to
sg_set_buf(), as it internally uses virt_to_page() to get the page of the
buffer address. You most likely need to walk through your vmalloc'd
buffer and pass the individual pages to the scatterlist with sg_set_page().
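A minimal sketch of that walk, assuming the caller has already allocated
enough scatterlist entries for the buffer (names and error handling are
illustrative, not from the original mail):

```c
/* Sketch: map a vmalloc'd buffer of 'len' bytes into a scatterlist one
 * page at a time, using vmalloc_to_page() instead of virt_to_page().
 * Assumes 'nents' entries are enough to cover the buffer. */
static void sg_map_vmalloc(struct scatterlist *sg, unsigned int nents,
                           void *buf, unsigned int len)
{
	sg_init_table(sg, nents);
	while (len) {
		/* offset of this chunk within its page */
		unsigned int off = offset_in_page(buf);
		unsigned int chunk = min_t(unsigned int, len, PAGE_SIZE - off);

		/* vmalloc pages are not physically contiguous, so each
		 * page must be looked up and added individually */
		sg_set_page(sg, vmalloc_to_page(buf), chunk, off);
		sg = sg_next(sg);
		buf += chunk;
		len -= chunk;
	}
}
```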
The second problem is most likely the one you were looking at. It is quite
easy to make the crypto code die.
For example, if I use "ccm(aes)", which can take the dst buffer plus an hmac
buffer:
cipher = kmalloc(ciphersize, ...
hmac = kmalloc(16, ...
sg_set_buf(&sg[0], cipher, ciphersize);
sg_set_buf(&sg[1], hmac, 16);
aead_encrypt()...
and all is well. But if you shift the hmac address away from a page boundary,
like:
hmac = kmalloc(16 + 32, ...
hmac += 32;
sg_set_buf(&sg[1], hmac, 16);
i.e., allocate a larger buffer and move the pointer a bit into the page.
Then it will die in scatterwalk very often. The +32 isn't magical; any
non-zero offset works.
This is strange, as the crypto subsystem's internal test mechanism uses
buffers at such offsets.
-Jussi