On Fri, Oct 12, 2018 at 7:29 AM Juergen Gross <jgross@xxxxxxxx> wrote:
> On 12/10/2018 05:21, Jann Horn wrote:
> > +cc xen maintainers and kvm folks
> >
> > On Fri, Oct 12, 2018 at 4:40 AM Joel Fernandes (Google)
> > <joel@xxxxxxxxxxxxxxxxx> wrote:
> >> Android needs to mremap large regions of memory during memory management
> >> related operations. The mremap system call can be really slow if THP is
> >> not enabled. The bottleneck is move_page_tables, which is copying each
> >> pte at a time, and can be really slow across a large map. Turning on THP
> >> may not be a viable option, and is not for us. This patch speeds up the
> >> performance for non-THP system by copying at the PMD level when possible.
> [...]
> >> +bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> >> +                 unsigned long new_addr, unsigned long old_end,
> >> +                 pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> >> +{
> [...]
> >> +       /*
> >> +        * We don't have to worry about the ordering of src and dst
> >> +        * ptlocks because exclusive mmap_sem prevents deadlock.
> >> +        */
> >> +       old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> >> +       if (old_ptl) {
> >> +               pmd_t pmd;
> >> +
> >> +               new_ptl = pmd_lockptr(mm, new_pmd);
> >> +               if (new_ptl != old_ptl)
> >> +                       spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> >> +
> >> +               /* Clear the pmd */
> >> +               pmd = *old_pmd;
> >> +               pmd_clear(old_pmd);
> >> +
> >> +               VM_BUG_ON(!pmd_none(*new_pmd));
> >> +
> >> +               /* Set the new pmd */
> >> +               set_pmd_at(mm, new_addr, new_pmd, pmd);
> >> +               if (new_ptl != old_ptl)
> >> +                       spin_unlock(new_ptl);
> >> +               spin_unlock(old_ptl);
> >
> > How does this interact with Xen PV? From a quick look at the Xen PV
> > integration code in xen_alloc_ptpage(), it looks to me as if, in a
> > config that doesn't use split ptlocks, this is going to temporarily
> > drop Xen's type count for the page to zero, causing Xen to de-validate
> > and then re-validate the L1 pagetable; if you first set the new pmd
> > before clearing the old one, that wouldn't happen. I don't know how
> > this interacts with shadow paging implementations.
>
> No, this isn't an issue. As the L1 pagetable isn't being released it
> will stay pinned, so there will be no need to revalidate it.

Where exactly is the L1 pagetable pinned? xen_alloc_ptpage() does:

        if (static_branch_likely(&xen_struct_pages_ready))
                SetPagePinned(page);

        if (!PageHighMem(page)) {
                xen_mc_batch();

                __set_pfn_prot(pfn, PAGE_KERNEL_RO);

                if (level == PT_PTE && USE_SPLIT_PTE_PTLOCKS)
                        __pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, pfn);

                xen_mc_issue(PARAVIRT_LAZY_MMU);
        } else {
                /* make sure there are no stray mappings of this page */
                kmap_flush_unused();
        }

which means that if USE_SPLIT_PTE_PTLOCKS is false, the table doesn't
get pinned and only stays typed as long as it is referenced by an L2
table, right?
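
For illustration only, a minimal sketch of the ordering change suggested
above, i.e. installing the entry at the new pmd before clearing the old
one so the L1 table's type count never briefly drops to zero; it reuses
the locals from the quoted move_normal_pmd() hunk and is not a tested
patch:

        /* Read the entry to be moved while holding both ptlocks. */
        pmd = *old_pmd;

        VM_BUG_ON(!pmd_none(*new_pmd));

        /* Install the entry at the new pmd first ... */
        set_pmd_at(mm, new_addr, new_pmd, pmd);

        /* ... and only then clear the old one, so the page table stays
           referenced by an L2 table the whole time. */
        pmd_clear(old_pmd);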