Re: [PATCH 3/5] vDPA: introduce vDPA bus

On 2020/1/19 5:07 PM, Shahaf Shuler wrote:
Friday, January 17, 2020 4:13 PM, Rob Miller:
Subject: Re: [PATCH 3/5] vDPA: introduce vDPA bus
On 2020/1/17 8:13 PM, Michael S. Tsirkin wrote:
On Thu, Jan 16, 2020 at 08:42:29PM +0800, Jason Wang wrote:
[...]

+ * @set_map:                        Set device memory mapping, optional
+ *                          and only needed for devices that use
+ *                          device-specific DMA translation
+ *                          (on-chip IOMMU)
+ *                          @vdev: vdpa device
+ *                          @iotlb: vhost memory mapping to be
+ *                          used by the vDPA
+ *                          Returns integer: success (0) or error (< 0)
OK so any change just swaps in a completely new mapping?
Wouldn't this make minor changes such as memory hotplug
quite expensive?
What is the concern? Traversing the rb tree, or fully replacing the on-chip IOMMU translations?
If the latter, then I think we can take such an optimization at the driver level (i.e. update only the diff between the two mappings).


This is similar to the design of the platform IOMMU part of vhost-vdpa. We decided to send only the diffs to the platform IOMMU there. If it's ok to do that in the driver, we can replace set_map with an incremental API like map()/unmap().

Then the driver needs to maintain the rbtree itself.


If the former, then I think memory hotplug is a heavy flow regardless. Do you think the extra cycles for the tree traversal will be visible in any way?


I think if the driver can pause DMA while setting up the new mapping, it should be fine.


My understanding is that incremental updating of the on-chip IOMMU
may degrade performance, so vendor vDPA drivers may want to know
all the mappings at once.
Yes, exactly. In the Mellanox case, for instance, many optimizations can be performed on a given memory layout.

Technically, we can keep the incremental API
here and let the vendor vDPA drivers record the full mapping
internally, which may slightly increase the complexity of the vendor driver.
What will be the trigger for the driver to know it has received the last mapping in this series, so that it can push the whole layout to the on-chip IOMMU?


For the GPA->HVA(HPA) mapping, we can have a flag for this.

But for the GIOVA->HVA(HPA) mapping, which could be changed by the guest, it looks to me like there's no concept of a "last mapping" there. I guess in this case the mappings need to be set from the ground up. This could be expensive, but considering that most applications use static mappings (e.g. DPDK in the guest), it should be ok.

Thanks



We need more inputs from vendors here.

Thanks


_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization



