On Tue, Sep 03, 2019 at 12:32:49AM +1000, Daniel Axtens wrote:
> Hi Mark,
>
> >> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> >> +					    void *unused)
> >> +{
> >> +	unsigned long page;
> >> +
> >> +	page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
> >> +
> >> +	spin_lock(&init_mm.page_table_lock);
> >> +
> >> +	if (likely(!pte_none(*ptep))) {
> >> +		pte_clear(&init_mm, addr, ptep);
> >> +		free_page(page);
> >> +	}
> >> +	spin_unlock(&init_mm.page_table_lock);
> >> +
> >> +	return 0;
> >> +}
> >
> > There needs to be TLB maintenance after unmapping the page, but I don't
> > see that happening below.
> >
> > We need that to ensure that errant accesses don't hit the page we're
> > freeing and that new mappings at the same VA don't cause a TLB conflict
> > or TLB amalgamation issue.
>
> Darn it, I knew there was something I forgot to do! I thought of that
> over the weekend, didn't write it down, and then forgot it when I went
> to respin the patches. You're totally right.
>
> >
> >> +/*
> >> + * Release the backing for the vmalloc region [start, end), which
> >> + * lies within the free region [free_region_start, free_region_end).
> >> + *
> >> + * This can be run lazily, long after the region was freed. It runs
> >> + * under vmap_area_lock, so it's not safe to interact with the vmalloc/vmap
> >> + * infrastructure.
> >> + */
> >
> > IIUC we aim to only free non-shared shadow by aligning the start
> > upwards, and aligning the end downwards. I think it would be worth
> > mentioning that explicitly in the comment since otherwise it's not
> > obvious how we handle races between alloc/free.
> >
>
> Oh, I will need to think through that more carefully.
>
> I think the vmap_area_lock protects us against alloc/free races.

AFAICT, on the alloc side we only hold the vmap_area_lock while
allocating the area in __get_vm_area_node(), but we don't hold the
vmap_area_lock while we populate the page tables for the shadow in
kasan_populate_vmalloc(). So I believe that kasan_populate_vmalloc()
can race with kasan_release_vmalloc().

> I think alignment operates at least somewhat as you've described, and
> while it is important for correctness, I'm not sure I'd say it
> prevented races? I will double check my understanding of
> vmap_area_lock, and I agree the comment needs to be much clearer.

I had assumed that you were trying to only free pages which were
definitely not shared (for which there couldn't possibly be a race to
allocate), by looking at the sibling areas to see if they potentially
overlapped. Was that not the case?

Thanks,
Mark.
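
For reference, the TLB maintenance Mark is asking for amounts to flushing
the kernel TLB over the shadow range once the PTEs have been cleared. The
sketch below is illustrative only: the helper name
kasan_release_shadow_range() is made up for this example (in the series
this logic would presumably sit in kasan_release_vmalloc()), and it simply
reuses the per-PTE callback from the quoted patch together with the
standard apply_to_page_range() and flush_tlb_kernel_range() interfaces.

#include <linux/kasan.h>
#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Sketch only: tear down the shadow backing the vmalloc range
 * [start, end) and then perform the TLB maintenance discussed above.
 * Assumes kasan_depopulate_vmalloc_pte() clears and frees each shadow
 * PTE under init_mm.page_table_lock, as in the quoted patch.
 */
static void kasan_release_shadow_range(unsigned long start, unsigned long end)
{
	unsigned long shadow_start, shadow_end;

	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start);
	shadow_end = (unsigned long)kasan_mem_to_shadow((void *)end);

	/* Walk the shadow range, clearing and freeing each backing page. */
	apply_to_page_range(&init_mm, shadow_start,
			    shadow_end - shadow_start,
			    kasan_depopulate_vmalloc_pte, NULL);

	/*
	 * Flush stale translations for the unmapped shadow so errant
	 * accesses cannot hit the freed pages, and so later mappings at
	 * the same VA don't conflict with cached TLB entries.
	 */
	flush_tlb_kernel_range(shadow_start, shadow_end);
}

Issuing one flush_tlb_kernel_range() after the whole walk, rather than a
per-page invalidation inside the PTE callback, keeps the TLB maintenance
cost down to a single range operation per released region.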