Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed

On 07/04/2016 04:45 PM, Xiao Guangrong wrote:


> On 07/04/2016 04:41 PM, Neo Jia wrote:
> > On Mon, Jul 04, 2016 at 04:19:20PM +0800, Xiao Guangrong wrote:


> > > On 07/04/2016 03:53 PM, Neo Jia wrote:
> > > > On Mon, Jul 04, 2016 at 03:37:35PM +0800, Xiao Guangrong wrote:


> > > > > On 07/04/2016 03:03 PM, Neo Jia wrote:
> > > > > > On Mon, Jul 04, 2016 at 02:39:22PM +0800, Xiao Guangrong wrote:


> > > > > > > On 06/30/2016 09:01 PM, Paolo Bonzini wrote:
> > > > > > > > The vGPU folks would like to trap the first access to a BAR by setting
> > > > > > > > vm_ops on the VMAs produced by mmap-ing a VFIO device.  The fault handler
> > > > > > > > then can use remap_pfn_range to place some non-reserved pages in the VMA.
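
A minimal sketch of this scheme, not the actual vGPU code: mmap() installs
vm_ops without inserting any pages, and the fault handler fills the whole VMA
with remap_pfn_range on first access. The my_dev structure and its
bar_base_pfn field are hypothetical stand-ins for the driver's state, and the
.fault signature is the one kernels of this era use:

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical per-device state: base pfn of the physical MMIO backing
 * this BAR, filled in by the (vendor-specific) host driver at runtime. */
struct my_dev {
        unsigned long bar_base_pfn;
};

static int my_bar_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
        struct my_dev *mdev = vma->vm_private_data;

        /* Only known at fault time: which physical MMIO backs this BAR.
         * Map the whole BAR at once; subsequent accesses do not fault. */
        if (remap_pfn_range(vma, vma->vm_start, mdev->bar_base_pfn,
                            vma->vm_end - vma->vm_start, vma->vm_page_prot))
                return VM_FAULT_SIGBUS;

        return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct my_bar_vm_ops = {
        .fault = my_bar_fault,
};

static int my_bar_mmap(struct file *file, struct vm_area_struct *vma)
{
        /* No pages are inserted here; the first touch lands in my_bar_fault(). */
        vma->vm_ops = &my_bar_vm_ops;
        vma->vm_private_data = file->private_data;
        return 0;
}

Returning VM_FAULT_NOPAGE tells the MM core that the handler established the
mapping itself, so no struct page needs to be handed back.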

> > > > > > > Why does it require fetching the pfn when the fault is triggered rather
> > > > > > > than when mmap() is called?

> > > > > > Hi Guangrong,
> > > > > >
> > > > > > because the mapping between virtual mmio and physical mmio is only
> > > > > > available at runtime.

> > > > > Sorry, I do not see the difference between mmap() time and the time the
> > > > > VM actually accesses the memory in your case. Could you please explain in
> > > > > more detail?

> > > > Hi Guangrong,
> > > >
> > > > Sure. The mmap() gets called by QEMU or any VFIO API userspace consumer
> > > > when setting up the virtual mmio; at that moment nobody has any knowledge
> > > > of how the physical mmio gets virtualized.
> > > >
> > > > When the VM (or application, if we don't want to limit ourselves to VMM
> > > > terminology) starts, the virtual and physical mmio get mapped by the mpci
> > > > kernel module, with help from the vendor-supplied mediated host driver,
> > > > according to the hw resources assigned to this VM / application.
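
For concreteness, a sketch of that userspace half under stated assumptions
(map_bar0 is a hypothetical helper, not QEMU code): the consumer queries the
region layout with VFIO_DEVICE_GET_REGION_INFO and mmap()s it through the
device fd, at a point where no physical mmio stands behind the mapping yet.

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/* Sketch: map BAR0 of an already-opened VFIO device fd.  At mmap() time
 * the kernel only installs vm_ops; the virtual-to-physical mmio binding
 * is established later, on first access, by the fault handler. */
static void *map_bar0(int device_fd)
{
        struct vfio_region_info info = {
                .argsz = sizeof(info),
                .index = VFIO_PCI_BAR0_REGION_INDEX,
        };

        if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
                return MAP_FAILED;

        return mmap(NULL, info.size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, device_fd, (off_t)info.offset);
}

The returned pointer is immediately usable by the consumer; the pages only
materialize when the VM (or application) first touches the mapping.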

> > > Thanks for your explanation.
> > >
> > > It sounds like a resource-allocation strategy: you delay the allocation
> > > until the VM actually accesses it, right?

> > Yes, that is where the fault handler inside the mpci code comes into the picture.


> I am not sure this strategy is good. The instance is created successfully and
> starts successfully, but then the VM crashes because the resources for that
> instance are not sufficient. That sounds unreasonable.
>
> In particular, you cannot squeeze this kind of memory to balance the usage
> between all VMs. Does this strategy still make sense?

