On 4/29/24 11:09, Nicolas Bouchinet wrote:
Hi Vlastimil,
thanks for your review and your proposal.
On 4/29/24 10:52, Vlastimil Babka wrote:
On 4/25/24 5:14 PM, Chengming Zhou wrote:
On 2024/4/25 23:02, Nicolas Bouchinet wrote:
Thanks for finding the bug and the fix!
Hi,
First of all, thanks a lot for your time.
On 4/25/24 10:36, Chengming Zhou wrote:
On 2024/4/24 20:47, Nicolas Bouchinet wrote:
From: Nicolas Bouchinet <nicolas.bouchinet@xxxxxxxxxxx>
Commit 284f17ac13fe ("mm/slub: handle bulk and single object freeing
separately") splits single and bulk object freeing in two functions
slab_free() and slab_free_bulk() which leads slab_free() to call
slab_free_hook() directly instead of slab_free_freelist_hook().
Right.
If `init_on_free` is set, slab_free_hook() zeroes the object.
Afterward, if `slub_debug=F` and `CONFIG_SLAB_FREELIST_HARDENED` are
set, the do_slab_free() slowpath executes freelist consistency
checks and tries to decode a zeroed freepointer, which leads to a
"Freepointer corrupt" detection in check_object().
IIUC, the "freepointer" can be checked on the free path only when
it's outside the object memory. Here slab_free_hook() zeroed the
freepointer and caused the problem.
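For reference, whether the freepointer lives outside the object is a simple
offset comparison in mm/slub.c, roughly:

static inline bool freeptr_outside_object(struct kmem_cache *s)
{
	/* Debugging/poisoning moves the freepointer past the object. */
	return s->offset >= s->inuse;
}

and only in that layout can check_object() validate the freepointer on
free, since an in-object freepointer would be clobbered by user data anyway.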
But why should we zero the memory outside of object_size? It seems
more reasonable to zero only the object_size when init_on_free is
set?
The original purpose was to avoid leaking information through the
object and its metadata / tracking information, as described in the
initial init_on_free commit 6471384af2a6 ("mm: security: introduce
init_on_alloc=1 and init_on_free=1 boot options").
I have to admit I haven't read the entire lore discussion about the
original patchset yet, though it would be interesting to know a bit
more about the threat model, specifically regarding the object
metadata init.
Thank you for the reference! I also don't get why it needs to zero
the metadata and tracking information.
Hmm, taking a step back, it seems really suboptimal to initialize the
outside-object freepointer as part of init_on_free:
- the freeing itself will always set it one way or another; in this case
  free_to_partial_list() will do set_freepointer() after
  free_debug_processing()
- we lose the ability to detect if the allocated slab object's user
  wrote to it, which is a buffer overflow
So the best option to me would be to adjust the init in
slab_free_hook() to avoid the outside-object freepointer similarly to
how it avoids the red zone.
We'll still not have the buffer overflow detection ability for bulk
free, where slab_free_freelist_hook() will set the free pointer before
we reach the checks, but changing that is most likely not worth the
trouble, and especially not suitable for a stable-candidate fix we
need here.
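A rough, untested sketch of that adjustment to the init block in
slab_free_hook(), using get_info_end() to skip the outside-object
freepointer word (exact bounds would need checking):

	if (unlikely(init)) {
		int rsize;
		unsigned int inuse = get_info_end(s);

		if (!kasan_has_integrated_init())
			memset(kasan_reset_tag(x), 0, s->object_size);
		/*
		 * Zero the metadata past the (possibly outside-object)
		 * freepointer, but leave the freepointer word itself and
		 * the red zone intact for the consistency checks.
		 */
		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
		memset((char *)kasan_reset_tag(x) + inuse, 0,
		       s->size - inuse - rsize);
	}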
It seems like a good alternative to me, I'll push a V2 patch with
those changes.
I help maintain the Linux-Hardened patchset, in which we have a slab
object canary feature that helps detect overflows. The canary is
located just after the object freepointer.
I've tried a patch where the freepointer is avoided, but it results in
the same bug. It seems that commit 0f181f9fbea8bc7ea ("mm/slub.c:
init_on_free=1 should wipe freelist ptr for bulk allocations") inits
the freepointer on allocation if init_on_free is set, in order to
return a cleanly initialized object to the caller.
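For reference, the wipe introduced by that commit looks roughly like this:

static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
						   void *obj)
{
	/* Re-zero the freepointer slot so the caller gets a fully wiped object. */
	if (unlikely(slab_want_init_on_free(s)) && obj)
		memset((void *)((char *)obj + s->offset), 0, sizeof(void *));
}

so even a freepointer that the free path carefully avoided gets zeroed
again on the next allocation, and the debug checks trip on the following
free.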
The patch could also be optimized a bit by restricting the
set_freepointer() call to configurations where
CONFIG_SLAB_FREELIST_HARDENED is enabled.
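Something like this (a rough, untested sketch on top of the patch below):

	if (likely(slab_free_hook(s, object, init))) {
#ifdef CONFIG_SLAB_FREELIST_HARDENED
		/*
		 * Only the hardened decode turns a zeroed freepointer into
		 * garbage; without hardening a zeroed word already decodes
		 * to a valid NULL, so the reset is only needed here.
		 */
		if (init)
			set_freepointer(s, object, NULL);
#endif
		do_slab_free(s, slab, object, object, 1, addr);
	}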
Yeah. Maybe memcg_alloc_abort_single() needs this too.
Thanks.
Thanks again, Nicolas
Thanks.
The object's freepointer thus needs to be properly set using
set_freepointer() after the init_on_free wipe.
To reproduce, set `slub_debug=FU init_on_free=1 log_level=7` on the
command line of a kernel build with
`CONFIG_SLAB_FREELIST_HARDENED=y`.
dmesg sample log:

[ 10.708715] =============================================================================
[ 10.710323] BUG kmalloc-rnd-05-32 (Tainted: G B T ): Freepointer corrupt
[ 10.712695] -----------------------------------------------------------------------------
[ 10.712695]
[ 10.712695] Slab 0xffffd8bdc400d580 objects=32 used=4 fp=0xffff9d9a80356f80 flags=0x200000000000a00(workingset|slab|node=0|zone=2)
[ 10.716698] Object 0xffff9d9a80356600 @offset=1536 fp=0x7ee4f480ce0ecd7c
[ 10.716698]
[ 10.716698] Bytes b4 ffff9d9a803565f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[ 10.720703] Object ffff9d9a80356600: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[ 10.720703] Object ffff9d9a80356610: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[ 10.724696] Padding ffff9d9a8035666c: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[ 10.724696] Padding ffff9d9a8035667c: 00 00 00 00  ....
[ 10.724696] FIX kmalloc-rnd-05-32: Object at 0xffff9d9a80356600 not freed
Signed-off-by: Nicolas Bouchinet <nicolas.bouchinet@xxxxxxxxxxx>
---
mm/slub.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3aa12b9b323d9..71dbff9ad8f17 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4342,10 +4342,16 @@ static __fastpath_inline
 void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	       unsigned long addr)
 {
+	bool init = false;
+
 	memcg_slab_free_hook(s, slab, &object, 1);
+	init = slab_want_init_on_free(s);
 
-	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
+	if (likely(slab_free_hook(s, object, init))) {
+		if (init)
+			set_freepointer(s, object, NULL);
 		do_slab_free(s, slab, object, object, 1, addr);
+	}
 }
 
 static __fastpath_inline
Thanks again for your review,
Nicolas