On Thu, Dec 22, 2016 at 01:55:59PM +0530, Binoy Jayan wrote:
> > Support of bigger block sizes would be unsafe without additional mechanism
> > that provides atomic writes of multiple sectors. Maybe it applies to 4k as
> > well on some devices though...)
>
> Did you mean write to the crypto output buffers or the actual disk write?
> I didn't quite understand how the block size for encryption affects atomic
> writes, as it is the block layer which handles them. As far as dm-crypt is
> concerned, it just encrypts/decrypts a 'struct bio' instance and submits the
> IO operation to the block layer.

I think Milan's talking about increasing the real block size, which would
obviously require the hardware to be able to write that out atomically, as
otherwise it breaks the crypto.

But if we can instead do the IV generation within the crypto API, then the
block size won't be an issue at all, because you can supply as many blocks
as you want and they would be processed block by block.

Now there is a disadvantage to this approach, and that is you have to wait
for the whole thing to be encrypted before you can start doing the IO. I'm
not sure how big a problem that is, but if it is bad enough to affect
performance, we can look into adding some form of partial completion to the
crypto API.

Cheers,
--
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
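
[Editor's sketch] To illustrate the block-by-block processing Herbert describes,
here is a minimal C sketch of a loop that walks a buffer spanning several
512-byte sectors and encrypts each sector with an IV derived from its sector
number (plain64 style, i.e. the little-endian sector number zero-padded to the
IV size). The helper names cipher_encrypt() and encrypt_sectors() and the flat
buffer interface are assumptions made for illustration only; the actual kernel
templates would operate on scatterlists inside the crypto API rather than on a
plain buffer.

/*
 * Illustrative sketch only, not the kernel implementation: process a
 * multi-sector request sector by sector, regenerating the IV from the
 * sector number each time, the way an IV-generating template could
 * handle a whole request handed over in one go.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512
#define IV_SIZE     16

/* Placeholder for the real per-sector cipher call (assumption). */
static void cipher_encrypt(uint8_t *data, size_t len,
			   const uint8_t iv[IV_SIZE])
{
	(void)data; (void)len; (void)iv;	/* real code would run CBC here */
}

/* plain64-style IV: little-endian sector number, zero padded. */
static void plain64_iv(uint8_t iv[IV_SIZE], uint64_t sector)
{
	int i;

	memset(iv, 0, IV_SIZE);
	for (i = 0; i < 8; i++)
		iv[i] = (sector >> (8 * i)) & 0xff;
}

/* Encrypt nsectors consecutive sectors starting at first_sector. */
void encrypt_sectors(uint8_t *buf, uint64_t first_sector,
		     unsigned int nsectors)
{
	uint8_t iv[IV_SIZE];
	unsigned int i;

	for (i = 0; i < nsectors; i++) {
		plain64_iv(iv, first_sector + i);
		cipher_encrypt(buf + (size_t)i * SECTOR_SIZE, SECTOR_SIZE, iv);
	}
}

The point of pushing this loop below the crypto API boundary is that dm-crypt
could then submit a request covering many sectors at once, instead of issuing
one crypto request per sector with an externally supplied IV.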