Re: [PATCH] slub: Fixes freepointer encoding for single free

On 4/29/24 16:52, Chengming Zhou wrote:
On 2024/4/29 22:32, Nicolas Bouchinet wrote:
On 4/29/24 15:35, Chengming Zhou wrote:
On 2024/4/29 20:59, Nicolas Bouchinet wrote:
On 4/29/24 11:09, Nicolas Bouchinet wrote:
Hi Vlastimil,

thanks for your review and your proposal.

On 4/29/24 10:52, Vlastimil Babka wrote:
On 4/25/24 5:14 PM, Chengming Zhou wrote:
On 2024/4/25 23:02, Nicolas Bouchinet wrote:
Thanks for finding the bug and the fix!

Hi,

First of all, thanks a lot for your time.

On 4/25/24 10:36, Chengming Zhou wrote:
On 2024/4/24 20:47, Nicolas Bouchinet wrote:
From: Nicolas Bouchinet <nicolas.bouchinet@xxxxxxxxxxx>

Commit 284f17ac13fe ("mm/slub: handle bulk and single object freeing
separately") splits single and bulk object freeing into two functions,
slab_free() and slab_free_bulk(), which leads slab_free() to call
slab_free_hook() directly instead of slab_free_freelist_hook().
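To make the new call structure concrete, the split roughly looks like
this (a sketch only: the names match mm/slub.c, but the bodies are
heavily trimmed and details such as the memcg hook are omitted):

static __always_inline
void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
	       unsigned long addr)
{
	/* single free: the per-object hook zeroes the object if init_on_free */
	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
		do_slab_free(s, slab, object, object, 1, addr);
}

static __always_inline
void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
		    void *tail, void **p, int cnt, unsigned long addr)
{
	/* bulk free: the hook walks the list and sets freepointers itself */
	if (likely(slab_free_freelist_hook(s, &head, &tail, &cnt)))
		do_slab_free(s, slab, head, tail, cnt, addr);
}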
Right.
If `init_on_free` is set, slab_free_hook() zeroes the object.
Afterward, if `slub_debug=F` and `CONFIG_SLAB_FREELIST_HARDENED` are
set, the do_slab_free() slowpath executes freelist consistency
checks and tries to decode a zeroed freepointer, which leads to a
"Freepointer corrupt" detection in check_object().
IIUC, the "freepointer" can be checked on the free path only when
it's outside the object memory. Here slab_free_hook() zeroed the
freepointer and caused the problem.
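For reference, whether the freepointer lives outside the object is
decided by this small helper in mm/slub.c (annotated excerpt):

static inline bool freeptr_outside_object(struct kmem_cache *s)
{
	/*
	 * s->offset is where the freepointer is stored; s->inuse covers
	 * the object plus its right red zone. Caches with a constructor,
	 * RCU freeing, or poisoning place the freepointer past s->inuse
	 * so it survives the object's lifetime and can be checked.
	 */
	return s->offset >= s->inuse;
}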

But why should we zero the memory outside the object_size? It seems
more reasonable to only zero the object_size when init_on_free is set?
The original purpose was to avoid leaking information through the object and its metadata / tracking information, as described in the initial init_on_free commit 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options").

I have to admit I haven't read the entire lore thread about the original patchset yet, though it would be interesting to know a bit more about the threat models, specifically regarding the object metadata init.
Thank you for the reference! I also don't get why it needs to zero
the metadata and tracking information.
Hmm taking a step back, it seems really suboptimal to initialize the
outside-object freepointer as part of init_on_free:

- the freeing itself will always set it one way or another; in this case
free_to_partial_list() will do set_freepointer() after free_debug_processing()

- we lose the ability to detect whether the allocated slab object's user wrote
to it, which would be a buffer overflow
Ah, right, this ability seems important for debugging overflow problems.

So the best option to me would be to adjust the init in slab_free_hook() to
avoid the outside-object freepointer similarly to how it avoids the red zone.
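A rough sketch of what such an adjustment in slab_free_hook() could
look like, assuming the existing mm/slub.c helper get_info_end() (which
returns the offset just past an outside-object freepointer) and the
red_left_pad field; this illustrates the direction, not a final patch:

if (unlikely(init)) {
	unsigned int inuse = get_info_end(s);
	int rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;

	/* wipe the object itself, leaving the right red zone intact */
	memset(kasan_reset_tag(x), 0, s->object_size);
	/*
	 * Wipe tracking info and padding, starting just past the
	 * outside-object freepointer so the debug checks can still
	 * decode it, and stopping before the next object's left red zone.
	 */
	memset((char *)kasan_reset_tag(x) + inuse, 0,
	       s->size - inuse - rsize);
}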
Agree.

We'll still not have the buffer overflow detection ability for bulk free
where slab_free_freelist_hook() will set the free pointer before we reach
the checks, but changing that is most likely not worth the trouble, and
especially not suitable for a stable-candidate fix we need here.
It seems like a good alternative to me; I'll push a v2 patch with those changes.

I help maintain the Linux-Hardened patchset, in which we have a slab object canary feature that helps detect overflows. The canary is located just after the object freepointer.
I've tried a patch where the freepointer is avoided, but it results in the same bug. It seems that commit 0f181f9fbea8bc7ea ("mm/slub.c: init_on_free=1 should wipe freelist ptr for bulk allocations") inits the freepointer on allocation if init_on_free is set, in order to return a cleanly initialized object to the caller.

Good catch! You may need to change maybe_wipe_obj_freeptr() too.
I haven't tested this, so I'm not sure whether it works for you. :)

diff --git a/mm/slub.c b/mm/slub.c
index 3e33ff900d35..3f250a167cb5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3796,7 +3796,8 @@ static void *__slab_alloc_node(struct kmem_cache *s,
 static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
 						   void *obj)
 {
-	if (unlikely(slab_want_init_on_free(s)) && obj)
+	if (unlikely(slab_want_init_on_free(s)) && obj &&
+	    !freeptr_outside_object(s))
 		memset((void *)((char *)kasan_reset_tag(obj) + s->offset),
 			0, sizeof(void *));
 }

Thanks!
Indeed, since check_object() skips objects whose freepointer is inside the object, and since val is equal to SLUB_RED_ACTIVE in our specific case, it should work. Do you want me to add you as a co-author?
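The check in question is this early return in check_object() (excerpt
from mm/slub.c; the inline comment is from the kernel source):

	if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE)
		/*
		 * Object and freepointer overlap. Cannot check
		 * freepointer while object is allocated.
		 */
		return 1;

With maybe_wipe_obj_freeptr() guarded by !freeptr_outside_object(s), the
only freepointers still wiped at allocation are exactly the ones this
branch refuses to check, so no false "Freepointer corrupt" report should
result.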

Ok, it's great. Thanks!

Now that I think of it, doesn't it seem a bit odd to init_on_free the object's freepointer only if it's inside the object? IMHO it is equally necessary to avoid leaking information about the freepointer whether it is inside or outside the object. Doesn't it break the semantics of commit 0f181f9fbea8bc7ea ("mm/slub.c: init_on_free=1 should wipe freelist ptr for bulk allocations")?

Thanks.