Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault

On 05/03/2012 05:10 AM, Marcelo Tosatti wrote:

> On Wed, May 02, 2012 at 01:39:51PM +0800, Xiao Guangrong wrote:
>> On 04/29/2012 04:50 PM, Takuya Yoshikawa wrote:
>>
>>> On Fri, 27 Apr 2012 11:52:13 -0300
>>> Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
>>>
>>>> Yes but the objective you are aiming for is to read and write sptes
>>>> without mmu_lock. That is, i am not talking about this patch. 
>>>> Please read carefully the two examples i gave (separated by "example)").
>>>
>>> The real objective is not still clear.
>>>
>>> The ~10% improvement reported before was on macro benchmarks during live
>>> migration.  At least, that optimization was the initial objective.
>>>
>>> But at some point, the objective suddenly changed to "lock-less" without
>>> understanding what introduced the original improvement.
>>>
>>> Was the problem really mmu_lock contention?
>>>
>>
>>
>> Takuya, i am so tired to argue the advantage of lockless write-protect
>> and lockless O(1) dirty-log again and again.
> 
> His point is valid: there is a lack of understanding on the details of
> the improvement.
> 


Actually, the improvement from lockless is that it lets the vcpus run in
parallel as much as possible.

From the test results, lockless gains little for unix-migration; in that
case the vcpus are almost idle (at least not busy).

The large improvement is from dbench-migration: there, all vcpus are busy
accessing memory that is write-protected by dirty-log. If you enable the
page-fault/fast-page-fault tracepoints, you can see a huge number of page
faults from different vcpus during the migration.

> Did you see the pahole output on struct kvm? Apparently mmu_lock is
> sharing a cacheline with read-intensive memslots pointer. It would be 
> interesting to see what are the effects of cacheline aligning mmu_lock.
> 


Yes, I saw that. In my test .config, CONFIG_DEBUG_SPINLOCK and
CONFIG_DEBUG_LOCK_ALLOC are enabled, so mmu_lock does not share a cacheline
with the memslots pointer. That means it was not a factor in my tests.
(BTW, pahole can not work on my box, it shows:
......
DW_AT_<0x3c>=0x19
DW_AT_<0x3c>=0x19
DW_AT_<0x3c>=0x19
die__process_function: DW_TAG_INVALID (0x4109) @ <0x12886> not handled!
)

If we reorganize 'struct kvm', I guess it is good for KVM in general, but it
will not improve migration much. :)



