On 4/10/23 23:44, Eric Biggers wrote:
> On Mon, Apr 10, 2023 at 03:39:59PM -0400, Sweet Tea Dorminy wrote:
> > diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
> > index 8b32200dbbc0..f07e3b9579cf 100644
> > --- a/fs/crypto/keysetup.c
> > +++ b/fs/crypto/keysetup.c
> > @@ -181,7 +181,11 @@ void fscrypt_destroy_prepared_key(struct super_block *sb,
> >  
> >  int fscrypt_set_per_file_enc_key(struct fscrypt_info *ci, const u8 *raw_key)
> >  {
> >  	ci->ci_owns_key = true;
> > -	return fscrypt_prepare_key(&ci->ci_enc_key, raw_key, ci);
> > +	ci->ci_enc_key = kzalloc(sizeof(*ci->ci_enc_key), GFP_KERNEL);
> > +	if (!ci->ci_enc_key)
> > +		return -ENOMEM;
> > +
> > +	return fscrypt_prepare_key(ci->ci_enc_key, raw_key, ci);
> >  }
> 
> Any idea how much this will increase the per-inode memory usage by, in
> the per-file keys case?  (Counting the overhead of the slab allocator.)
For just this change, the object gets allocated from the 8- or 16-byte
slab cache, so the cost is just the 8 bytes of pointer plus the slab
cache overhead. The slab allocator, as far as I understand it, has a
64-byte 'struct slab' per slab, and for 8/16/32-byte allocations the
slabs are page-sized, so that overhead is amortized over 512/256/128
objects. So it's 8 1/8 or 8 1/4 extra bytes per inode from this change,
I think.
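Spelling out the arithmetic (assuming 4 KiB pages and one 64-byte
struct slab per slab page):

	kmalloc-8:  4096 / 8  = 512 objects/slab; 64 / 512 = 0.125 bytes/object
	kmalloc-16: 4096 / 16 = 256 objects/slab; 64 / 256 = 0.25  bytes/object

	per-inode cost = 8 (pointer) + 0.125 = 8.125 bytes (8-byte cache)
	               = 8 (pointer) + 0.25  = 8.25  bytes (16-byte cache)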
Later changes bump the object up to the next size cache, 16 or 32
bytes, wasting 6 or 14 more bytes. I should probably add something
imitating struct bio's bi_inline_vecs: a ci_inline_prepkey[] member,
used only when allocating per-file key infos, as sketched below.
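Something like this, I mean (completely untested sketch; the
ci_inline_prepkey name and the allocation-site shape are made up for
illustration, and this ignores that fscrypt_info actually comes from
its own kmem_cache, which would need a second cache or a size tweak):

	struct fscrypt_info {
		/* ... existing members ... */
		struct fscrypt_prepared_key *ci_enc_key;
		/*
		 * Trailing space for the prepared key, used only when
		 * this info owns a per-file key, so that case needs no
		 * separate allocation.
		 */
		u8 ci_inline_prepkey[];
	};

	/* At fscrypt_info allocation time, in the per-file key case: */
	ci = kzalloc(sizeof(*ci) + sizeof(struct fscrypt_prepared_key),
		     GFP_KERNEL);
	if (!ci)
		return -ENOMEM;
	ci->ci_enc_key = (struct fscrypt_prepared_key *)ci->ci_inline_prepkey;

That would keep the pointer-based interface from this patch while
avoiding the extra slab object for per-file keys.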
> > -	else if (ci->ci_owns_key)
> > +	else if (ci->ci_owns_key) {
> >  		fscrypt_destroy_prepared_key(ci->ci_inode->i_sb,
> > -					     &ci->ci_enc_key);
> > +					     ci->ci_enc_key);
> > +		kfree(ci->ci_enc_key);
> 
> Use kfree_sensitive() here, please.  Yes, it's not actually needed
> here because the allocation doesn't contain the keys themselves.  But
> I want to code defensively here.
Ah, I argued with myself over whether it was needed; happy to do so.
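So the teardown would end up something like (sketch of the adjusted
hunk):

	else if (ci->ci_owns_key) {
		fscrypt_destroy_prepared_key(ci->ci_inode->i_sb,
					     ci->ci_enc_key);
		/*
		 * No raw key material lives in this allocation, but
		 * zero it before freeing anyway, as a defensive measure.
		 */
		kfree_sensitive(ci->ci_enc_key);
	}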