Re: [PATCH 1/3] kasan: support backing vmalloc space with real shadow memory

Hi Dmitry,

Thanks for the feedback!

>> +       addr = shadow_alloc_start;
>> +       do {
>> +               pgdp = pgd_offset_k(addr);
>> +               p4dp = p4d_alloc(&init_mm, pgdp, addr);
>
> Page table allocations will be protected by mm->page_table_lock, right?

Yes, each of those alloc functions takes the lock if it ends up in the
slow path that does the actual allocation (e.g. __p4d_alloc()).
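
For reference, the slow path looks roughly like this (paraphrased from
mm/memory.c; exact details vary by kernel version):

        int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
                        unsigned long address)
        {
                p4d_t *new = p4d_alloc_one(mm, address);

                if (!new)
                        return -ENOMEM;

                smp_wmb(); /* see the comment in __pte_alloc */

                spin_lock(&mm->page_table_lock);
                if (pgd_present(*pgd))  /* raced: already populated */
                        p4d_free(mm, new);
                else
                        pgd_populate(mm, pgd, new);
                spin_unlock(&mm->page_table_lock);
                return 0;
        }

The fast path, where the pgd entry is already populated, doesn't take
the lock at all.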

>> +               pudp = pud_alloc(&init_mm, p4dp, addr);
>> +               pmdp = pmd_alloc(&init_mm, pudp, addr);
>> +               ptep = pte_alloc_kernel(pmdp, addr);
>> +
>> +               /*
>> +                * we can validly get here if pte is not none: it means we
>> +                * allocated this page earlier to use part of it for another
>> +                * allocation
>> +                */
>> +               if (pte_none(*ptep)) {
>> +                       backing = __get_free_page(GFP_KERNEL);
>> +                       backing_pte = pfn_pte(PFN_DOWN(__pa(backing)),
>> +                                             PAGE_KERNEL);
>> +                       set_pte_at(&init_mm, addr, ptep, backing_pte);
>> +               }
>> +       } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
>> +
>> +       requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
>> +       kasan_unpoison_shadow(area->addr, requested_size);
>> +       kasan_poison_shadow(area->addr + requested_size,
>> +                           area->size - requested_size,
>> +                           KASAN_VMALLOC_INVALID);
>
>
> Do I read this correctly that if kernel code does vmalloc(64), it
> will have exactly 64 bytes available rather than a full page? To make
> sure: vmalloc does not guarantee that the available size is rounded up
> to page size? I suspect we will see a slew of new bugs related to
> OOBs in vmalloc memory. So I want to make sure that these will
> indeed be bugs that we agree need to be fixed.
> I am sure there will be bugs where the size is controlled by
> user-space, so these are bad bugs under any circumstances. But there
> will also probably be OOBs, where people will try to "prove" that
> that's fine and will work (just based on our previous experiences :)).

So the implementation of vmalloc will always round the allocation up
to full pages. The function's documentation reads, in part:

 * Allocate enough pages to cover @size from the page level
 * allocator and map them into contiguous kernel virtual space.

So in short it's not quite clear: you could argue that you're
guaranteed full pages, but you could also argue that you've
specifically asked for @size bytes and @size bytes only.
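
Concretely, under this patch (a hypothetical snippet, not from the
series):

        char *p = vmalloc(64);

        p[60] = 'a';    /* fine: within the 64 requested bytes */
        p[100] = 'b';   /* now a KASAN report, even though it is
                           still within the page vmalloc mapped */
        vfree(p);

(The shadow granule is KASAN_SHADOW_SCALE_SIZE = 8 bytes, so the
redzone actually begins at the first 8-byte boundary at or after the
requested size.)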

So far it seems that users are well behaved in terms of only touching
the memory they ask for, but you'll get a much better picture of this
than I have very quickly, as I've only tested with trinity. :)

I also handle vmap. With vmap there's no way to specify a sub-page
allocation, so you get exactly as many pages as you asked for.
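
Something like this (hypothetical sketch; 'pages' is a pre-filled
struct page * array):

        void *v = vmap(pages, 4, VM_MAP, PAGE_KERNEL);
        /* all 4 * PAGE_SIZE bytes end up unpoisoned: there is no
           requested byte count for KASAN to round and redzone */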

> On impl side: kasan_unpoison_shadow seems to be capable of handling
> non-KASAN_SHADOW_SCALE_SIZE-aligned sizes exactly in the way we want.
> So I think it's better to do:
>
>        kasan_unpoison_shadow(area->addr, requested_size);
>        requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
>        kasan_poison_shadow(area->addr + requested_size,
>                            area->size - requested_size,
>                            KASAN_VMALLOC_INVALID);

Will do for v2.

Regards,
Daniel



