On 22/09/2014 23:54, Andres Lagar-Cavilla wrote:
> @@ -1406,32 +1406,24 @@ static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
>  	struct rmap_iterator uninitialized_var(iter);
>  	int young = 0;
> 
> -	/*
> -	 * In case of absence of EPT Access and Dirty Bits supports,
> -	 * emulate the accessed bit for EPT, by checking if this page has
> -	 * an EPT mapping, and clearing it if it does. On the next access,
> -	 * a new EPT mapping will be established.
> -	 * This has some overhead, but not as much as the cost of swapping
> -	 * out actively used pages or breaking up actively used hugepages.
> -	 */
> -	if (!shadow_accessed_mask) {
> -		young = kvm_unmap_rmapp(kvm, rmapp, slot, data);
> -		goto out;
> -	}
> +	BUG_ON(!shadow_accessed_mask);
> 
>  	for (sptep = rmap_get_first(*rmapp, &iter); sptep;
>  	     sptep = rmap_get_next(&iter)) {
> +		struct kvm_mmu_page *sp;
> +		gfn_t gfn;
>  		BUG_ON(!is_shadow_present_pte(*sptep));
> +		/* From spte to gfn. */
> +		sp = page_header(__pa(sptep));
> +		gfn = kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
> 
>  		if (*sptep & shadow_accessed_mask) {
>  			young = 1;
>  			clear_bit((ffs(shadow_accessed_mask) - 1),
>  				  (unsigned long *)sptep);
>  		}
> +		trace_kvm_age_page(gfn, slot, young);

Yesterday I couldn't think of a way to avoid the page_header/kvm_mmu_page_get_gfn lookup on every iteration, but it's actually not hard.

Instead of passing the hva as the datum, you can pass (unsigned long) &start.  Then kvm_age_rmapp can add PAGE_SIZE to the pointed-to value at the end of every call, and keep the old hva-based tracing logic.

Paolo
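
For reference, a rough and untested sketch of how kvm_age_rmapp could look with that change (in the context of arch/x86/kvm/mmu.c; the "hva" local name and the pre-patch trace_kvm_age_page(hva, slot, young) signature are assumptions based on the old code and the suggestion above, not a posted patch):

static int kvm_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
			 struct kvm_memory_slot *slot, unsigned long data)
{
	u64 *sptep;
	struct rmap_iterator uninitialized_var(iter);
	/* The caller passes (unsigned long)&start instead of the hva itself. */
	unsigned long *hva = (unsigned long *)data;
	int young = 0;

	BUG_ON(!shadow_accessed_mask);

	for (sptep = rmap_get_first(*rmapp, &iter); sptep;
	     sptep = rmap_get_next(&iter)) {
		BUG_ON(!is_shadow_present_pte(*sptep));

		if (*sptep & shadow_accessed_mask) {
			young = 1;
			clear_bit((ffs(shadow_accessed_mask) - 1),
				  (unsigned long *)sptep);
		}
	}

	/* Old (pre-patch) tracing, with the hva taken from the datum... */
	trace_kvm_age_page(*hva, slot, young);
	/* ...then advance by one page for the next rmap in the range. */
	*hva += PAGE_SIZE;
	return young;
}

This way the per-iteration page_header/kvm_mmu_page_get_gfn lookup disappears and the tracepoint keeps reporting the hva, as it did before the patch.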