Re: [PATCH v2 2/4] mm: kasan: Skip unpoisoning of user pages

On Fri, Jun 10, 2022 at 5:21 PM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
>
> Commit c275c5c6d50a ("kasan: disable freed user page poisoning with HW
> tags") added __GFP_SKIP_KASAN_POISON to GFP_HIGHUSER_MOVABLE. A similar
> argument can be made about unpoisoning, so also add
> __GFP_SKIP_KASAN_UNPOISON to user pages. To ensure the user page is
> still accessible via page_address() without a kasan fault, reset the
> page->flags tag.
>
> With the above changes, there is no need for the arm64
> tag_clear_highpage() to reset the page->flags tag.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
> Cc: Andrey Konovalov <andreyknvl@xxxxxxxxx>
> Cc: Peter Collingbourne <pcc@xxxxxxxxxx>
> Cc: Vincenzo Frascino <vincenzo.frascino@xxxxxxx>
> ---
>  arch/arm64/mm/fault.c | 1 -
>  include/linux/gfp.h   | 2 +-
>  mm/page_alloc.c       | 7 +++++--
>  3 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index c5e11768e5c1..cdf3ffa0c223 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -927,6 +927,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
>  void tag_clear_highpage(struct page *page)
>  {
>         mte_zero_clear_page_tags(page_address(page));
> -       page_kasan_tag_reset(page);
>         set_bit(PG_mte_tagged, &page->flags);
>  }
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 2d2ccae933c2..0ace7759acd2 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -348,7 +348,7 @@ struct vm_area_struct;
>  #define GFP_DMA32      __GFP_DMA32
>  #define GFP_HIGHUSER   (GFP_USER | __GFP_HIGHMEM)
>  #define GFP_HIGHUSER_MOVABLE   (GFP_HIGHUSER | __GFP_MOVABLE | \
> -                        __GFP_SKIP_KASAN_POISON)
> +                        __GFP_SKIP_KASAN_POISON | __GFP_SKIP_KASAN_UNPOISON)
>  #define GFP_TRANSHUGE_LIGHT    ((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
>                          __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
>  #define GFP_TRANSHUGE  (GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e008a3df0485..f6ed240870bc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2397,6 +2397,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>         bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
>                         !should_skip_init(gfp_flags);
>         bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
> +       int i;
>
>         set_page_private(page, 0);
>         set_page_refcounted(page);
> @@ -2422,8 +2423,6 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>          * should be initialized as well).
>          */
>         if (init_tags) {
> -               int i;
> -
>                 /* Initialize both memory and tags. */
>                 for (i = 0; i != 1 << order; ++i)
>                         tag_clear_highpage(page + i);
> @@ -2438,6 +2437,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>                 /* Note that memory is already initialized by KASAN. */
>                 if (kasan_has_integrated_init())
>                         init = false;
> +       } else {
> +               /* Ensure page_address() dereferencing does not fault. */
> +               for (i = 0; i != 1 << order; ++i)
> +                       page_kasan_tag_reset(page + i);
>         }
>         /* If memory is still not initialized, do it now. */
>         if (init)

Reviewed-by: Andrey Konovalov <andreyknvl@xxxxxxxxx>
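
[Editorial note, not part of the thread: a minimal sketch of why the page->flags
tag reset matters once unpoisoning is skipped. It assumes HW tag-based KASAN on
arm64 with MTE; example_clear_user_page() is a made-up helper for illustration,
while page_kasan_tag_reset(), page_address() and clear_page() are existing
kernel APIs.]

#include <linux/mm.h>

/*
 * Illustration only: with HW tag-based KASAN, the linear-map pointer
 * returned by page_address() carries the KASAN tag stored in page->flags.
 * When unpoisoning is skipped, the memory keeps whatever MTE tags it had,
 * so the page->flags tag must be reset to the match-all tag for kernel
 * accesses through page_address() not to tag-check fault.
 */
static void example_clear_user_page(struct page *page)
{
	void *kaddr;

	/* Reset the page->flags KASAN tag to the match-all value (0xFF). */
	page_kasan_tag_reset(page);

	/* The returned pointer is now tagged with the match-all tag. */
	kaddr = page_address(page);

	/* No tag-check fault, regardless of the memory's MTE tags. */
	clear_page(kaddr);
}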
