Re: [PATCH v7 04/11] KVM: MMU: zap pages in batch

On Mon, May 27, 2013 at 10:20:12AM +0800, Xiao Guangrong wrote:
> On 05/25/2013 04:34 AM, Marcelo Tosatti wrote:
> > On Thu, May 23, 2013 at 03:55:53AM +0800, Xiao Guangrong wrote:
> >> Zap at least 10 pages before releasing mmu-lock to reduce the overhead
> >> caused by repeatedly reacquiring the lock
> >>
> >> After the patch, kvm_zap_obsolete_pages can always make forward
> >> progress, so update the comments accordingly
> >>
> >> [ It improves kernel building 0.6% ~ 1% ]
> > 
> > Can you please describe the overload in more detail? Under what scenario
> > is kernel building improved?
> 
> Yes.
> 
> The scenario is a kernel build running while the PCI ROM is repeatedly
> read, once per second:
> 
> [
>    echo 1 > /sys/bus/pci/devices/0000\:00\:03.0/rom
>    cat /sys/bus/pci/devices/0000\:00\:03.0/rom > /dev/null
> ]

I can't see why this reflects a real-world scenario (or a real-world
scenario with the same characteristics regarding kvm_mmu_zap_all vs. faults).

The point is, it would be good to understand why this change improves
performance. What are the cases where kvm_mmu_zap_all breaks out due to
(need_resched || spin_needbreak) with zapped < 10?



--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
