The patch titled
     Subject: kasan: catch invalid free before SLUB reinitializes the object
has been added to the -mm mm-unstable branch.  Its filename is
     kasan-catch-invalid-free-before-slub-reinitializes-the-object.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/kasan-catch-invalid-free-before-slub-reinitializes-the-object.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Jann Horn <jannh@xxxxxxxxxx>
Subject: kasan: catch invalid free before SLUB reinitializes the object
Date: Thu, 08 Aug 2024 20:30:45 +0200

Patch series "allow KASAN to detect UAF in SLAB_TYPESAFE_BY_RCU slabs", v7.

The purpose of the series is to allow KASAN to detect use-after-free
access in SLAB_TYPESAFE_BY_RCU slab caches, by essentially making them
behave as if the cache was not SLAB_TYPESAFE_BY_RCU but instead every
kfree() in the cache was a kfree_rcu().  This is gated behind a config
flag that is supposed to only be enabled in fuzzing/testing builds where
the performance impact doesn't matter.

Output of the new kunit testcase I added to the KASAN test suite:

==================================================================
BUG: KASAN: slab-use-after-free in kmem_cache_rcu_uaf+0x3ae/0x4d0
Read of size 1 at addr ffff888106224000 by task kunit_try_catch/224

CPU: 7 PID: 224 Comm: kunit_try_catch Tainted: G    B              N 6.10.0-00003-g065427d4b87f #430
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x53/0x70
 print_report+0xce/0x670
[...]
 kasan_report+0xa5/0xe0
[...]
 kmem_cache_rcu_uaf+0x3ae/0x4d0
[...]
 kunit_try_run_case+0x1b3/0x490
[...]
 kunit_generic_run_threadfn_adapter+0x80/0xe0
 kthread+0x2a5/0x370
[...]
 ret_from_fork+0x34/0x70
[...]
 ret_from_fork_asm+0x1a/0x30
 </TASK>

Allocated by task 224:
 kasan_save_stack+0x33/0x60
 kasan_save_track+0x14/0x30
 __kasan_slab_alloc+0x6e/0x70
 kmem_cache_alloc_noprof+0xef/0x2b0
 kmem_cache_rcu_uaf+0x10d/0x4d0
 kunit_try_run_case+0x1b3/0x490
 kunit_generic_run_threadfn_adapter+0x80/0xe0
 kthread+0x2a5/0x370
 ret_from_fork+0x34/0x70
 ret_from_fork_asm+0x1a/0x30

Freed by task 0:
 kasan_save_stack+0x33/0x60
 kasan_save_track+0x14/0x30
 kasan_save_free_info+0x3b/0x60
 __kasan_slab_free+0x57/0x80
 slab_free_after_rcu_debug+0xe3/0x220
 rcu_core+0x676/0x15b0
 handle_softirqs+0x22f/0x690
 irq_exit_rcu+0x84/0xb0
 sysvec_apic_timer_interrupt+0x6a/0x80
 asm_sysvec_apic_timer_interrupt+0x1a/0x20

Last potentially related work creation:
 kasan_save_stack+0x33/0x60
 __kasan_record_aux_stack+0x8e/0xa0
 kmem_cache_free+0x10c/0x420
 kmem_cache_rcu_uaf+0x16e/0x4d0
 kunit_try_run_case+0x1b3/0x490
 kunit_generic_run_threadfn_adapter+0x80/0xe0
 kthread+0x2a5/0x370
 ret_from_fork+0x34/0x70
 ret_from_fork_asm+0x1a/0x30

The buggy address belongs to the object at ffff888106224000
 which belongs to the cache test_cache of size 200
The buggy address is located 0 bytes inside of
 freed 200-byte region [ffff888106224000, ffff8881062240c8)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x106224
head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x200000000000040(head|node=0|zone=2)
page_type: 0xffffefff(slab)
raw: 0200000000000040 ffff88810621c140 dead000000000122 0000000000000000
raw: 0000000000000000 00000000801f001f 00000001ffffefff 0000000000000000
head: 0200000000000040 ffff88810621c140 dead000000000122 0000000000000000
head: 0000000000000000 00000000801f001f 00000001ffffefff 0000000000000000
head: 0200000000000001 ffffea0004188901 ffffffffffffffff 0000000000000000
head: 0000000000000002 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff888106223f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff888106223f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff888106224000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff888106224080: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
 ffff888106224100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================
ok 38 kmem_cache_rcu_uaf


This patch (of 7):

Currently, when KASAN is combined with init-on-free behavior, the
initialization happens before KASAN's "invalid free" checks.

More importantly, a subsequent commit will want to RCU-delay the actual
SLUB freeing of an object, and we'd like KASAN to still validate
synchronously that freeing the object is permitted.  (Otherwise this
change will make the existing testcase kmem_cache_invalid_free fail.)

So add a new KASAN hook that allows KASAN to pre-validate a
kmem_cache_free() operation before SLUB actually starts modifying the
object or its metadata.
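As an illustration of the intended ordering, here is a minimal sketch
(not the actual SLUB code; "sketch_free_hook" is a placeholder name for
the real slab_free_hook() changed in the mm/slub.c hunk below):

/*
 * A minimal sketch, under the assumptions above, of the ordering this
 * patch establishes in the slab free path. Returns true if the caller
 * may go ahead and actually free the object.
 */
static __always_inline bool sketch_free_hook(struct kmem_cache *s, void *x,
					     bool init)
{
	/*
	 * Pre-validate the free while the object is still intact.  On a
	 * double-free or invalid free, KASAN reports the bug and we tell
	 * the caller to leak the object instead of corrupting state.
	 */
	if (kasan_slab_pre_free(s, x))
		return false;

	/* ... init-on-free and other writes to the object happen here ... */

	/*
	 * Poison/initialize/quarantine only; no reporting happens here
	 * anymore, so this can safely run after the object was modified
	 * (or, later in this series, after an RCU grace period).
	 */
	return !kasan_slab_free(s, x, init);
}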
Inside KASAN, this:

- moves checks from poison_slab_object() into check_slab_allocation()
- moves kasan_arch_is_ready() up into callers of poison_slab_object()
- removes the "ip" argument of poison_slab_object() and
  __kasan_slab_free() (since those functions no longer do any reporting)

Link: https://lkml.kernel.org/r/20240808-kasan-tsbrcu-v7-0-0d0590c54ae6@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20240808-kasan-tsbrcu-v7-1-0d0590c54ae6@xxxxxxxxxx
Signed-off-by: Jann Horn <jannh@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx> #slub
Reviewed-by: Andrey Konovalov <andreyknvl@xxxxxxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: David Sterba <dsterba@xxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Marco Elver <elver@xxxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Vincenzo Frascino <vincenzo.frascino@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/kasan.h |   54 +++++++++++++++++++++++++++++++++--
 mm/kasan/common.c     |   61 +++++++++++++++++++++++-----------------
 mm/slub.c             |    7 ++++
 3 files changed, 94 insertions(+), 28 deletions(-)

--- a/include/linux/kasan.h~kasan-catch-invalid-free-before-slub-reinitializes-the-object
+++ a/include/linux/kasan.h
@@ -178,13 +178,55 @@ static __always_inline void * __must_che
 	return (void *)object;
 }
 
-bool __kasan_slab_free(struct kmem_cache *s, void *object,
-			unsigned long ip, bool init);
+bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
+			unsigned long ip);
+/**
+ * kasan_slab_pre_free - Check whether freeing a slab object is safe.
+ * @object: Object to be freed.
+ *
+ * This function checks whether freeing the given object is safe. It may
+ * check for double-free and invalid-free bugs and report them.
+ *
+ * This function is intended only for use by the slab allocator.
+ *
+ * @Return true if freeing the object is unsafe; false otherwise.
+ */
+static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
+						void *object)
+{
+	if (kasan_enabled())
+		return __kasan_slab_pre_free(s, object, _RET_IP_);
+	return false;
+}
+
+bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init);
+/**
+ * kasan_slab_free - Poison, initialize, and quarantine a slab object.
+ * @object: Object to be freed.
+ * @init: Whether to initialize the object.
+ *
+ * This function informs that a slab object has been freed and is not
+ * supposed to be accessed anymore, except for objects in
+ * SLAB_TYPESAFE_BY_RCU caches.
+ *
+ * For KASAN modes that have integrated memory initialization
+ * (kasan_has_integrated_init() == true), this function also initializes
+ * the object's memory. For other modes, the @init argument is ignored.
+ *
+ * This function might also take ownership of the object to quarantine it.
+ * When this happens, KASAN will defer freeing the object to a later
+ * stage and handle it internally until then. The return value indicates
+ * whether KASAN took ownership of the object.
+ *
+ * This function is intended only for use by the slab allocator.
+ *
+ * @Return true if KASAN took ownership of the object; false otherwise.
+ */
 static __always_inline bool kasan_slab_free(struct kmem_cache *s,
 						void *object, bool init)
 {
 	if (kasan_enabled())
-		return __kasan_slab_free(s, object, _RET_IP_, init);
+		return __kasan_slab_free(s, object, init);
 	return false;
 }
 
@@ -374,6 +416,12 @@ static inline void *kasan_init_slab_obj(
 {
 	return (void *)object;
 }
+
+static inline bool kasan_slab_pre_free(struct kmem_cache *s, void *object)
+{
+	return false;
+}
+
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
 				   bool init)
 {
 	return false;
--- a/mm/kasan/common.c~kasan-catch-invalid-free-before-slub-reinitializes-the-object
+++ a/mm/kasan/common.c
@@ -208,15 +208,12 @@ void * __must_check __kasan_init_slab_ob
 	return (void *)object;
 }
 
-static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
-				      unsigned long ip, bool init)
+/* Returns true when freeing the object is not safe. */
+static bool check_slab_allocation(struct kmem_cache *cache, void *object,
+				  unsigned long ip)
 {
-	void *tagged_object;
-
-	if (!kasan_arch_is_ready())
-		return false;
+	void *tagged_object = object;
 
-	tagged_object = object;
 	object = kasan_reset_tag(object);
 
 	if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) {
@@ -224,37 +221,46 @@ static inline bool poison_slab_object(st
 		return true;
 	}
 
-	/* RCU slabs could be legally used after free within the RCU period. */
-	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
-		return false;
-
 	if (!kasan_byte_accessible(tagged_object)) {
 		kasan_report_invalid_free(tagged_object, ip,
 					  KASAN_REPORT_DOUBLE_FREE);
 		return true;
 	}
 
+	return false;
+}
+
+static inline void poison_slab_object(struct kmem_cache *cache, void *object,
+				      bool init)
+{
+	void *tagged_object = object;
+
+	object = kasan_reset_tag(object);
+
+	/* RCU slabs could be legally used after free within the RCU period. */
+	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
+		return;
+
 	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
 			KASAN_SLAB_FREE, init);
 
 	if (kasan_stack_collection_enabled())
 		kasan_save_free_info(cache, tagged_object);
+}
 
-	return false;
+bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
+			   unsigned long ip)
+{
+	if (!kasan_arch_is_ready() || is_kfence_address(object))
+		return false;
+	return check_slab_allocation(cache, object, ip);
 }
 
-bool __kasan_slab_free(struct kmem_cache *cache, void *object,
-				unsigned long ip, bool init)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init)
 {
-	if (is_kfence_address(object))
+	if (!kasan_arch_is_ready() || is_kfence_address(object))
 		return false;
 
-	/*
-	 * If the object is buggy, do not let slab put the object onto the
-	 * freelist. The object will thus never be allocated again and its
-	 * metadata will never get released.
-	 */
-	if (poison_slab_object(cache, object, ip, init))
-		return true;
+	poison_slab_object(cache, object, init);
 
 	/*
 	 * If the object is put into quarantine, do not let slab put the object
@@ -504,11 +510,16 @@ bool __kasan_mempool_poison_object(void
 		return true;
 	}
 
-	if (is_kfence_address(ptr))
-		return false;
+	if (is_kfence_address(ptr) || !kasan_arch_is_ready())
+		return true;
 
 	slab = folio_slab(folio);
-	return !poison_slab_object(slab->slab_cache, ptr, ip, false);
+
+	if (check_slab_allocation(slab->slab_cache, ptr, ip))
+		return false;
+
+	poison_slab_object(slab->slab_cache, ptr, false);
+	return true;
 }
 
 void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
--- a/mm/slub.c~kasan-catch-invalid-free-before-slub-reinitializes-the-object
+++ a/mm/slub.c
@@ -2227,6 +2227,13 @@ bool slab_free_hook(struct kmem_cache *s
 		return false;
 
 	/*
+	 * Give KASAN a chance to notice an invalid free operation before we
+	 * modify the object.
+	 */
+	if (kasan_slab_pre_free(s, x))
+		return false;
+
+	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_free and initialization memset's must be
 	 * kept together to avoid discrepancies in behavior.
_

Patches currently in -mm which might be from jannh@xxxxxxxxxx are

mm-fix-harmless-type-confusion-in-lock_vma_under_rcu.patch
kasan-catch-invalid-free-before-slub-reinitializes-the-object.patch
slub-introduce-config_slub_rcu_debug.patch