Re: [PATCH v1 15/16] khwasan, mm, arm64: tag non slab memory allocated via pagealloc


On Tue, May 15, 2018 at 4:06 PM, Andrey Ryabinin
<aryabinin@xxxxxxxxxxxxx> wrote:
>
> You could avoid the 'if (!PageSlab())' check by adding page_kasan_tag_reset() into kasan_poison_slab().
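
Sounds good. A rough sketch of what I'm thinking of for v2 (assuming
kasan_poison_slab() keeps roughly its current shape once it lives in
mm/kasan/common.c; exact placement and wording may still change):

void kasan_poison_slab(struct page *page)
{
	unsigned long i;

	/*
	 * Slab object pointers get their own random tags from the slab
	 * hooks, so the underlying pages should keep the default tag to
	 * avoid mismatches when the page is accessed via page_address().
	 */
	for (i = 0; i < (1 << compound_order(page)); i++)
		page_kasan_tag_reset(page + i);
	kasan_poison_shadow(page_address(page),
			PAGE_SIZE << compound_order(page),
			KASAN_KMALLOC_REDZONE);
}

That would make the !PageSlab() special case in kasan_alloc_pages()
unnecessary, as you suggest.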

>> @@ -526,6 +526,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>>       }
>>
>>       trace_cma_alloc(pfn, page, count, align);
>> +     page_kasan_tag_reset(page);
>
>
> Why? Comment needed.
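
The intent is roughly that cma_alloc() can return a range covering
several page blocks, and with this series those blocks may have been
given different tags, so the tag is reset to the default for the
returned range. I'll add a comment in v2; a draft of it (wording not
final):

	/*
	 * CMA can allocate a range spanning multiple page blocks, which
	 * may end up carrying different KASAN tags. Reset the tag so the
	 * range handed back to the caller behaves as untagged memory.
	 */
	page_kasan_tag_reset(page);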

> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index b8e0a8215021..f9f2181164a2 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -207,18 +207,11 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
>
>  void kasan_alloc_pages(struct page *page, unsigned int order)
>  {
> -#ifdef CONFIG_KASAN_GENERIC
> -       if (likely(!PageHighMem(page)))
> -               kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
> -#else
> -       if (!PageSlab(page)) {
> -               u8 tag = random_tag();
> +       if (unlikely(PageHighMem(page)))
> +               return;
>
> -               kasan_poison_shadow(page_address(page), PAGE_SIZE << order,
> -                                       tag);
> -               page_kasan_tag_set(page, tag);
> -       }
> -#endif
> +       page_kasan_tag_set(page, random_tag());
> +       kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
>  }
>
>  void kasan_free_pages(struct page *page, unsigned int order)

> As already said before, no changes are needed in kasan_kmalloc_large(): kasan_alloc_pages() already did the tag_set().
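
Right: kasan_alloc_pages() stores the tag via page_kasan_tag_set(), and
with this patch page_to_virt()/page_address() fold that tag into the
returned pointer, so the ptr that reaches kasan_kmalloc_large() is
already tagged. Roughly (illustration of the existing flow, trimmed,
not a change to the series):

	/*
	 * In kasan_kmalloc_large(): ptr already carries the tag that
	 * kasan_alloc_pages() assigned to the page, so unpoisoning
	 * through ptr uses the right tag and no extra tag_set() is
	 * needed here.
	 */
	kasan_unpoison_shadow(ptr, size);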

Will fix all in v2, thanks!



