Hi Christoffer,

On 4 June 2014 16:15, Christoffer Dall <christoffer.dall@xxxxxxxxxx> wrote:
> unmap_range() was utterly broken, to quote Marc, and broke in all sorts
> of situations. It was also quite complicated to follow and didn't
> follow the usual scheme of having a separate iterating function for each
> level of page tables.
>
> Address this by refactoring the code and introducing a pgd_clear()
> function.
>
> Tested on TC2 with/without THP and limited testing on the v8 Foundation
> Model.
>
> Reviewed-by: Jungseok Lee <jays.lee@xxxxxxxxxxx>
> Reviewed-by: Mario Smarduch <m.smarduch@xxxxxxxxxxx>
> Acked-by: Marc Zyngier <marc.zyngier@xxxxxxx>
> Signed-off-by: Christoffer Dall <christoffer.dall@xxxxxxxxxx>
> ---

This looks good to me.

Reviewed-by: Steve Capper <steve.capper@xxxxxxxxxx>

One minor comment below (sorry, just spotted this now)...

[ ... ]

> -static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
> +static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
> +		       phys_addr_t addr, phys_addr_t end)
>  {
> -	if (pte_present(*pte)) {
> -		kvm_set_pte(pte, __pte(0));
> -		put_page(virt_to_page(pte));
> -		kvm_tlb_flush_vmid_ipa(kvm, addr);
> -	}
> +	pte_t *pte, *start_pte;
> +	unsigned long long start_addr = addr;
> +
> +	start_pte = pte = pte_offset_kernel(pmd, addr);
> +	do {
> +		if (!pte_none(*pte)) {
> +			kvm_set_pte(pte, __pte(0));
> +			put_page(virt_to_page(pte));
> +			kvm_tlb_flush_vmid_ipa(kvm, addr);

Can this hyp call be expensive if a lot of ptes are being unmapped (with
64K pages we can have 8192 ptes per page-table page)? If so, can they be
batched together?

Cheers,
--
Steve

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm