The patch titled
     Subject: kasan: introduce kasan_mempool_poison_pages
has been added to the -mm mm-unstable branch.  Its filename is
     kasan-introduce-kasan_mempool_poison_pages.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/kasan-introduce-kasan_mempool_poison_pages.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
Subject: kasan: introduce kasan_mempool_poison_pages
Date: Tue, 19 Dec 2023 23:28:50 +0100

Introduce and document a kasan_mempool_poison_pages hook to be used by the
mempool code instead of kasan_poison_pages.

Compared to kasan_poison_pages, the new hook:

1. For the tag-based modes, skips checking and poisoning allocations that
   were not tagged due to sampling.

2. Checks for double-free and invalid-free bugs.

In the future, kasan_poison_pages can also be updated to handle #2, but
this is out of scope for this series.

Link: https://lkml.kernel.org/r/88dc7340cce28249abf789f6e0c792c317df9ba5.1703024586.git.andreyknvl@xxxxxxxxxx
Signed-off-by: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
Cc: Alexander Lobakin <alobakin@xxxxx>
Cc: Alexander Potapenko <glider@xxxxxxxxxx>
Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
Cc: Breno Leitao <leitao@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Evgenii Stepanov <eugenis@xxxxxxxxxx>
Cc: Marco Elver <elver@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/kasan.h |   27 +++++++++++++++++++++++++++
 mm/kasan/common.c     |   23 +++++++++++++++++++++++
 2 files changed, 50 insertions(+)

--- a/include/linux/kasan.h~kasan-introduce-kasan_mempool_poison_pages
+++ a/include/linux/kasan.h
@@ -212,6 +212,29 @@ static __always_inline void * __must_che
 	return (void *)object;
 }
 
+bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
+				  unsigned long ip);
+/**
+ * kasan_mempool_poison_pages - Check and poison a mempool page allocation.
+ * @page: Pointer to the page allocation.
+ * @order: Order of the allocation.
+ *
+ * This function is intended for kernel subsystems that cache page allocations
+ * to reuse them instead of freeing them back to page_alloc (e.g. mempool).
+ *
+ * This function is similar to kasan_mempool_poison_object() but operates on
+ * page allocations.
+ *
+ * Return: true if the allocation can be safely reused; false otherwise.
+ */
+static __always_inline bool kasan_mempool_poison_pages(struct page *page,
+						       unsigned int order)
+{
+	if (kasan_enabled())
+		return __kasan_mempool_poison_pages(page, order, _RET_IP_);
+	return true;
+}
+
 bool __kasan_mempool_poison_object(void *ptr, unsigned long ip);
 /**
  * kasan_mempool_poison_object - Check and poison a mempool slab allocation.
@@ -326,6 +349,10 @@ static inline void *kasan_krealloc(const
 {
	return (void *)object;
 }
+static inline bool kasan_mempool_poison_pages(struct page *page, unsigned int order)
+{
+	return true;
+}
 static inline bool kasan_mempool_poison_object(void *ptr)
 {
	return true;
--- a/mm/kasan/common.c~kasan-introduce-kasan_mempool_poison_pages
+++ a/mm/kasan/common.c
@@ -426,6 +426,29 @@ void * __must_check __kasan_krealloc(con
	return ____kasan_kmalloc(slab->slab_cache, object, size, flags);
 }
 
+bool __kasan_mempool_poison_pages(struct page *page, unsigned int order,
+				  unsigned long ip)
+{
+	unsigned long *ptr;
+
+	if (unlikely(PageHighMem(page)))
+		return true;
+
+	/* Bail out if allocation was excluded due to sampling. */
+	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	    page_kasan_tag(page) == KASAN_TAG_KERNEL)
+		return true;
+
+	ptr = page_address(page);
+
+	if (check_page_allocation(ptr, ip))
+		return false;
+
+	kasan_poison(ptr, PAGE_SIZE << order, KASAN_PAGE_FREE, false);
+
+	return true;
+}
+
 bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
 {
	struct folio *folio;
_

Patches currently in -mm which might be from andreyknvl@xxxxxxxxxx are

kasan-rename-kasan_slab_free_mempool-to-kasan_mempool_poison_object.patch
kasan-move-kasan_mempool_poison_object.patch
kasan-document-kasan_mempool_poison_object.patch
kasan-add-return-value-for-kasan_mempool_poison_object.patch
kasan-introduce-kasan_mempool_unpoison_object.patch
kasan-introduce-kasan_mempool_poison_pages.patch
kasan-introduce-kasan_mempool_unpoison_pages.patch
kasan-clean-up-__kasan_mempool_poison_object.patch
kasan-save-free-stack-traces-for-slab-mempools.patch
kasan-clean-up-and-rename-____kasan_kmalloc.patch
kasan-introduce-poison_kmalloc_large_redzone.patch
kasan-save-alloc-stack-traces-for-mempool.patch
mempool-skip-slub_debug-poisoning-when-kasan-is-enabled.patch
mempool-use-new-mempool-kasan-hooks.patch
mempool-introduce-mempool_use_prealloc_only.patch
kasan-add-mempool-tests.patch
kasan-rename-pagealloc-tests.patch
kasan-reorder-tests.patch
kasan-rename-and-document-kasan_unpoison_object_data.patch
skbuff-use-mempool-kasan-hooks.patch
io_uring-use-mempool-kasan-hook.patch
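
For readers wondering how the new hook is meant to be wired up by a caller:
below is a rough usage sketch of a subsystem that caches page allocations
for reuse, pairing kasan_mempool_poison_pages() with
kasan_mempool_unpoison_pages() from the follow-up patch in this series
(kasan-introduce-kasan_mempool_unpoison_pages.patch, listed above).  The
cache structure, the helper names, and the stash size are made up for the
example and do not appear in the kernel:

	#include <linux/kasan.h>
	#include <linux/mm.h>

	struct cached_pages {
		struct page *pages[8];	/* hypothetical fixed-size stash */
		unsigned int nr;
		unsigned int order;	/* order of every cached allocation */
	};

	/* Stash a page allocation for later reuse instead of freeing it. */
	static bool cached_pages_put(struct cached_pages *cache,
				     struct page *page)
	{
		if (cache->nr == ARRAY_SIZE(cache->pages))
			return false;
		/*
		 * Reports double-free/invalid-free bugs and poisons the
		 * memory so that accesses to the pages while they sit in
		 * the cache are caught.  On false, the allocation must not
		 * be reused; the caller frees it via the normal path.
		 */
		if (!kasan_mempool_poison_pages(page, cache->order))
			return false;
		cache->pages[cache->nr++] = page;
		return true;
	}

	/* Hand a cached allocation back out. */
	static struct page *cached_pages_get(struct cached_pages *cache)
	{
		struct page *page;

		if (!cache->nr)
			return NULL;
		page = cache->pages[--cache->nr];
		/* Re-mark the memory as accessible before reuse. */
		kasan_mempool_unpoison_pages(page, cache->order);
		return page;
	}

Note how the return value lets the caller fall back to freeing the pages
outright when KASAN detects that the allocation cannot be safely reused,
which is the behavior the mempool conversion later in this series relies on.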