Re: [PATCH 4/4] KVM: Optimize dirty logging by rmap_write_protect()

On 11/14/2011 11:24 AM, Takuya Yoshikawa wrote:
> Currently, write protecting a slot requires walking all the shadow pages
> and checking the ones which have a pte mapping a page in that slot.
>
> This walk is overly heavy when the slot has only a few dirty pages, and
> checking all the shadow pages results in unwanted cache pollution.
>
> To mitigate this problem, we use rmap_write_protect() and check only
> the sptes which can be reached from gfns marked in the dirty bitmap,
> when the number of dirty pages is less than that of shadow pages.
>
> This criterion is reasonable and worked well in our tests: write
> protection became a few times faster than before when the ratio of
> dirty pages was low, and was no worse even when the ratio was near
> the threshold.
>
> Note that the locking for this write protection becomes fine-grained.
> The reason why this is safe is described in the comments.
>
>  
> +/**
> + * write_protect_slot - write protect a slot for dirty logging
> + * @kvm: the kvm instance
> + * @memslot: the slot we protect
> + * @dirty_bitmap: the bitmap indicating which pages are dirty
> + * @nr_dirty_pages: the number of dirty pages
> + *
> + * We have two ways to find all sptes to protect:
> + * 1. Use kvm_mmu_slot_remove_write_access(), which walks all shadow pages and
> + *    checks the ones that have a spte mapping a page in the slot.
> + * 2. Use kvm_mmu_rmap_write_protect() for each gfn found in the bitmap.
> + *
> + * Generally speaking, if there are not so many dirty pages compared to the
> + * number of shadow pages, we should use the latter.
> + *
> + * Note that letting others write into a page marked dirty in the old bitmap
> + * by using a stale TLB entry is not a problem.  That page will be write
> + * protected again when we flush the TLB, and will then be reported dirty to
> + * user space by copying the old bitmap.
> + */
> +static void write_protect_slot(struct kvm *kvm,
> +			       struct kvm_memory_slot *memslot,
> +			       unsigned long *dirty_bitmap,
> +			       unsigned long nr_dirty_pages)
> +{
> +	/* Not many dirty pages compared to # of shadow pages. */
> +	if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {

Seems a reasonable heuristic.  In particular, this is always true for
vga, yes?  That will get the code exercised.

> +		unsigned long gfn_offset;
> +
> +		for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
> +			unsigned long gfn = memslot->base_gfn + gfn_offset;
> +
> +			spin_lock(&kvm->mmu_lock);
> +			kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
> +			spin_unlock(&kvm->mmu_lock);
> +		}
> +		kvm_flush_remote_tlbs(kvm);
> +	} else {
> +		spin_lock(&kvm->mmu_lock);
> +		kvm_mmu_slot_remove_write_access(kvm, memslot->id);
> +		spin_unlock(&kvm->mmu_lock);
> +	}
> +}
> +
>
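
As an aside for readers of the archive: the sketch below is a minimal,
self-contained user-space model of the heuristic only, not KVM code.
The constants (NPAGES, NR_SHADOW), the bitmap contents and the two
protect_*() helpers are made up for illustration; they merely stand in
for memslot->npages, kvm->arch.n_used_mmu_pages,
kvm_mmu_rmap_write_protect() and kvm_mmu_slot_remove_write_access(),
to show how the dirty-page count picks between the two paths.

/*
 * Toy user-space model of the heuristic above (not KVM code): count the
 * dirty bits and take the per-gfn path only when there are fewer dirty
 * pages than "shadow pages".  All names and numbers are made up purely
 * for illustration.
 */
#include <stdio.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
#define NPAGES		256	/* pretend slot size, in pages       */
#define NR_SHADOW	64	/* pretend kvm->arch.n_used_mmu_pages */

static void protect_gfn(unsigned long gfn)
{
	/* stands in for kvm_mmu_rmap_write_protect() */
	printf("rmap-protect gfn %lu\n", gfn);
}

static void protect_whole_slot(void)
{
	/* stands in for kvm_mmu_slot_remove_write_access() */
	printf("protect whole slot\n");
}

int main(void)
{
	unsigned long dirty_bitmap[NPAGES / BITS_PER_LONG] = { 0 };
	unsigned long nr_dirty = 0, i;

	/* Mark a handful of pages dirty, as a small slot might look. */
	dirty_bitmap[0] = 0x5;	/* pages 0 and 2 */
	dirty_bitmap[1] = 0x1;	/* page BITS_PER_LONG */

	for (i = 0; i < NPAGES; i++)
		if (dirty_bitmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
			nr_dirty++;

	if (nr_dirty < NR_SHADOW) {
		/* Few dirty pages: visit only the dirty gfns (patch's first branch). */
		for (i = 0; i < NPAGES; i++)
			if (dirty_bitmap[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
				protect_gfn(i);
	} else {
		/* Many dirty pages: cheaper to walk the shadow pages once. */
		protect_whole_slot();
	}
	return 0;
}

Built with a plain cc and run, this takes the per-gfn branch because only
three pages are dirty, which is the case the patch optimizes for.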

-- 
error compiling committee.c: too many arguments to function


