Small question, if I may, regarding kvm_mmu_pin_pages:

On 7/9/14, 10:12 PM, mtosatti@xxxxxxxxxx wrote:
+
+static int kvm_mmu_pin_pages(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pinned_page_range *p;
+	int r = 1;
+
+	if (is_guest_mode(vcpu))
+		return r;
+
+	if (!vcpu->arch.mmu.direct_map)
+		return r;
+
+	ASSERT(VALID_PAGE(vcpu->arch.mmu.root_hpa));
+
+	list_for_each_entry(p, &vcpu->arch.pinned_mmu_pages, link) {
+		gfn_t gfn_offset;
+
+		for (gfn_offset = 0; gfn_offset < p->npages; gfn_offset++) {
+			gfn_t gfn = p->base_gfn + gfn_offset;
+			int r;
+			bool pinned = false;
+
+			r = vcpu->arch.mmu.page_fault(vcpu, gfn << PAGE_SHIFT,
+						      PFERR_WRITE_MASK, false,
+						      true, &pinned);
I understand that the current use-case is for pinning only a few pages. Still, wouldn't it be better (for performance) to check whether the gfn is mapped by a large page, and if so skip forward, advancing gfn_offset to the start of the next large page instead of faulting in every 4 KiB frame it covers?
Thanks,
Nadav