On Mon, Jul 29, 2019 at 12:15 PM Daniel Axtens <dja@xxxxxxxxxx> wrote:
>
> Hi Dmitry,
>
> Thanks for the feedback!
>
> >> +	addr = shadow_alloc_start;
> >> +	do {
> >> +		pgdp = pgd_offset_k(addr);
> >> +		p4dp = p4d_alloc(&init_mm, pgdp, addr);
> >
> > Page table allocations will be protected by mm->page_table_lock, right?
>
> Yes, each of those alloc functions takes the lock if it ends up in the
> slow path that does the actual allocation (e.g. __p4d_alloc()).
>
> >> +		pudp = pud_alloc(&init_mm, p4dp, addr);
> >> +		pmdp = pmd_alloc(&init_mm, pudp, addr);
> >> +		ptep = pte_alloc_kernel(pmdp, addr);
> >> +
> >> +		/*
> >> +		 * We can validly get here if the pte is not none: it means
> >> +		 * we allocated this page earlier to use part of it for
> >> +		 * another allocation.
> >> +		 */
> >> +		if (pte_none(*ptep)) {
> >> +			backing = __get_free_page(GFP_KERNEL);
> >> +			backing_pte = pfn_pte(PFN_DOWN(__pa(backing)),
> >> +					      PAGE_KERNEL);
> >> +			set_pte_at(&init_mm, addr, ptep, backing_pte);
> >> +		}
> >> +	} while (addr += PAGE_SIZE, addr != shadow_alloc_end);
> >> +
> >> +	requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> >> +	kasan_unpoison_shadow(area->addr, requested_size);
> >> +	kasan_poison_shadow(area->addr + requested_size,
> >> +			    area->size - requested_size,
> >> +			    KASAN_VMALLOC_INVALID);
> >
> > Do I read this correctly that if kernel code does vmalloc(64), it
> > will have exactly 64 bytes available rather than a full page? To make
> > sure: vmalloc does not guarantee that the available size is rounded up
> > to page size? I suspect we will see a slew of new bugs related to
> > OOBs on vmalloc memory, so I want to make sure that these will indeed
> > be bugs that we agree need to be fixed. I am sure there will be bugs
> > where the size is controlled by user-space, and those are bad bugs
> > under any circumstances.
> > But there will also probably be OOBs where people will try to "prove"
> > that they are fine and will work (just based on our previous
> > experience :)).
>
> So the implementation of vmalloc will always round the size up. The
> description of the function reads, in part:
>
>  * Allocate enough pages to cover @size from the page level
>  * allocator and map them into contiguous kernel virtual space.
>
> So in short it's not quite clear - you could argue that you have a
> guarantee that you get full pages, but you could also argue that you've
> specifically asked for @size bytes and @size bytes only.
>
> So far it seems that users are well behaved in terms of using only the
> amount of memory they ask for, but you'll get a better idea than me
> very quickly, as I only tested with trinity. :)

Ack. Let's try it and see, then. There is always an easy fix -- round
the size up explicitly before calling vmalloc, which would make the
code more explicit and clear. I can hardly see any downside to rounding
the size up explicitly.

> I also handle vmap - for vmap there's no way to specify sub-page
> allocations, so you get as many pages as you ask for.
>
> > On the impl side: kasan_unpoison_shadow seems to be capable of
> > handling non-KASAN_SHADOW_SCALE_SIZE-aligned sizes exactly in the way
> > we want, so I think it's better to do:
> >
> >   kasan_unpoison_shadow(area->addr, requested_size);
> >   requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> >   kasan_poison_shadow(area->addr + requested_size,
> >                       area->size - requested_size,
> >                       KASAN_VMALLOC_INVALID);
>
> Will do for v2.
>
> Regards,
> Daniel
>
> --
> You received this message because you are subscribed to the Google Groups "kasan-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kasan-dev+unsubscribe@xxxxxxxxxxxxxxxx.
> To view this discussion on the web visit https://groups.google.com/d/msgid/kasan-dev/87blxdgn9k.fsf%40dja-thinkpad.axtens.net.