Re: [PATCH v3 5/9] KVM: MMU: introduce SPTE_WRITE_PROTECT bit

On Fri, 20 Apr 2012 21:55:55 -0300
Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:

> More importantly than the particular flush TLB case, the point is
> every piece of code that reads and writes sptes must now be aware that
> mmu_lock alone does not guarantee stability. Everything must be audited.

In addition, please give me some stress-test cases to verify these in real
environments.  Live migration with KSM, with notifier calls, etc.?

Although the current logic is verified by the dirty-log API test, the new
logic may need another API test program.
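
The core of such a test is just fetching the dirty bitmap after the guest
writes and checking that the expected bits are set.  A minimal sketch of that
check (vm_fd, slot, nr_pages and expected_page are placeholders supplied by
the rest of the test program):

    #include <err.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /*
     * Sketch: vm_fd must refer to a VM whose memslot "slot" was registered
     * with KVM_MEM_LOG_DIRTY_PAGES; expected_page is the page (relative to
     * the slot) that the guest just wrote.
     */
    static void check_dirty_log(int vm_fd, unsigned int slot,
                                unsigned long nr_pages,
                                unsigned long expected_page)
    {
        struct kvm_dirty_log log = { .slot = slot };
        unsigned long *bitmap;

        bitmap = calloc((nr_pages + 63) / 64, sizeof(*bitmap));
        if (!bitmap)
            err(1, "calloc");
        log.dirty_bitmap = bitmap;

        /*
         * KVM_GET_DIRTY_LOG returns and clears the dirty bitmap; this is
         * exactly the path the new write-protect logic changes.
         */
        if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0)
            err(1, "KVM_GET_DIRTY_LOG");

        if (!(bitmap[expected_page / 64] & (1UL << (expected_page % 64))))
            errx(1, "dirty bit lost for page %lu", expected_page);

        free(bitmap);
    }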

Note: the problem is that live migration can fail silently.  We cannot tell
whether the data loss comes from a guest-side problem or from the get_dirty
side.

> Where the bulk of the improvement comes from again? If there is little
> or no mmu_lock contention (which we have no consistent data to be honest
> in your testcase) is the bouncing off mmu_lock's cacheline that hurts?

This week, I was doing some simplified "worst-latency" tests for my work.
It was more difficult than I thought.

But Xiao's "lock-less" should see the reduction of mmu_lock contention
more easily, if there is really some.

To keep things simple, we can, for example, run in the guest the same kind of
write loop the XBZRLE people are using - with more VCPUs if possible.
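
A rough sketch of what I have in mind (the buffer size and the per-page
stride are arbitrary choices, not taken from the XBZRLE tests; run one
instance per VCPU while migration is in progress):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUF_SIZE  (512UL << 20)   /* 512MB of guest memory to keep dirty */
    #define PAGE_SIZE 4096UL

    int main(void)
    {
        uint8_t *buf = malloc(BUF_SIZE);
        uint64_t val = 0;

        if (!buf)
            return 1;
        memset(buf, 0, BUF_SIZE);     /* populate the pages up front */

        for (;;) {
            /* touch one byte per page so every pass redirties the whole buffer */
            for (unsigned long off = 0; off < BUF_SIZE; off += PAGE_SIZE)
                buf[off] = (uint8_t)val;
            val++;
        }
        return 0;
    }

This keeps the dirty-log path busy for as long as migration runs, so any
mmu_lock contention in the write-protect/fault path should become visible.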

Thanks,
	Takuya