On Fri, Dec 07, 2018 at 06:52:31PM +0000, Jean-Philippe Brucker wrote:
Sorry for the delay, I wanted to do a little more performance analysis
before continuing.
On 27/11/2018 18:10, Michael S. Tsirkin wrote:
On Tue, Nov 27, 2018 at 05:55:20PM +0000, Jean-Philippe Brucker wrote:
+	if (!virtio_has_feature(vdev, VIRTIO_F_VERSION_1) ||
+	    !virtio_has_feature(vdev, VIRTIO_IOMMU_F_MAP_UNMAP))
Why bother with a feature bit for this then btw?
We'll need a new feature bit for sharing page tables with the hardware,
because that requires different requests (attach_table/invalidate instead
of map/unmap). A future device supporting page table sharing won't
necessarily need to support map/unmap.
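For illustration, a minimal sketch of how the driver could pick the request
family from the feature bits (VIRTIO_IOMMU_F_ATTACH_TABLE and the two ops
tables are hypothetical names for the future extension, only
VIRTIO_IOMMU_F_MAP_UNMAP exists today):

/* Sketch only: VIRTIO_IOMMU_F_ATTACH_TABLE and the ops tables are
 * hypothetical, standing in for the future page-table-sharing extension. */
static int viommu_select_ops(struct viommu_dev *viommu,
                             struct virtio_device *vdev)
{
        if (virtio_has_feature(vdev, VIRTIO_IOMMU_F_MAP_UNMAP))
                viommu->ops = &viommu_map_ops;          /* MAP/UNMAP requests */
        else if (virtio_has_feature(vdev, VIRTIO_IOMMU_F_ATTACH_TABLE))
                viommu->ops = &viommu_pgtable_ops;      /* ATTACH_TABLE/INVALIDATE */
        else
                return -ENODEV;                         /* no request interface we support */

        return 0;
}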
I don't see virtio-iommu being extended to support ARM-specific
requests. This just won't scale: there are too many different
descriptor formats out there.
They aren't really ARM-specific requests. The two new requests are
ATTACH_TABLE and INVALIDATE, which would be used by x86 IOMMUs as well.
Sharing CPU address space with the HW IOMMU (SVM) has been in the scope
of virtio-iommu since the first RFC, and I've been working with that
extension in mind since the beginning. As an example you can have a look
at my current draft for this [1], which is inspired by the VFIO work
we've been doing with Intel.
The negotiation phase inevitably requires vendor-specific fields in the
descriptors - the host advertises which formats it supports, the guest
chooses a format and attaches its page tables. But the invalidation and
fault reporting descriptors are fairly generic.
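As a rough sketch of that split (hypothetical layout, not the actual draft
in [1]), an ATTACH_TABLE request could confine the vendor-specific part to
a couple of fields while reusing the generic head/tail from the existing
requests:

/* Hypothetical layout, for illustration only - see [1] for the real draft.
 * Only 'format' and 'format_data' depend on the IOMMU vendor; the head,
 * domain and tail are the same generic fields as in the other requests. */
struct virtio_iommu_req_attach_table {
        struct virtio_iommu_req_head    head;           /* request type */
        __le32                          domain;         /* domain to attach to */
        __le16                          format;         /* negotiated pgtable format */
        __u8                            format_data[64];/* e.g. table base pointer and
                                                         * config bits for that format */
        struct virtio_iommu_req_tail    tail;           /* status written by the device */
};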
We need to tread carefully here. People expect that if a user does
lspci and sees a virtio device then it's reasonably portable.
If you want to go down that road, you should avoid
virtio-iommu and instead emulate and share code with the ARM SMMU (probably
with a different vendor ID so you can implement
report-on-map for devices without PRI).
vSMMU has to stay in userspace though. The main reason we're proposing
virtio-iommu is that emulating every possible vIOMMU model in the kernel
would be unmaintainable. With virtio-iommu we can process the fast path
in the host kernel, through vhost-iommu, and do the heavy lifting in
userspace.
Interesting.
As said above, I'm trying to keep the fast path for
virtio-iommu generic.
More notes on what I consider to be the fast path, and a comparison with
vSMMU:
(1) The primary use-case we have in mind for vIOMMU is something like
DPDK in the guest, assigning a hardware device to guest userspace. DPDK
maps a large amount of memory statically, to be used by a pass-through
device. For this case I don't think we care about vIOMMU performance.
Setup and teardown need to be reasonably fast, sure, but the MAP/UNMAP
requests don't have to be optimal.
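(For reference, a single MAP request in the current proposal describes an
arbitrarily large contiguous range, roughly the layout below, so a
DPDK-style static setup is a handful of requests rather than one per page.)

/* Roughly the MAP request from the posted driver (field names may differ
 * slightly between revisions): one request maps a whole IOVA range. */
struct virtio_iommu_req_map {
        struct virtio_iommu_req_head    head;
        __le32                          domain;
        __le64                          virt_start;
        __le64                          virt_end;       /* end of the IOVA range */
        __le64                          phys_start;
        __le32                          flags;          /* READ/WRITE/MMIO permissions */
        struct virtio_iommu_req_tail    tail;
};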
(2) If the assigned device is owned by the guest kernel, then mappings
are dynamic and require dma_map/unmap() to be fast, but there generally
is no need for a vIOMMU, since device and drivers are trusted by the
guest kernel. Even when the user does enable a vIOMMU for this case
(which allows over-committing guest memory that would otherwise need to
be pinned),
BTW that's in theory; in practice it doesn't really work.
we generally play tricks like lazy TLBI (non-strict mode) to
make it faster.
Simple lazy TLB for guest/userspace drivers would be a big no-no.
You need something smarter.
Here the device and drivers are trusted, therefore the
vulnerability window of lazy mode isn't a concern.
If the reason to enable the vIOMMU is over-committing guest memory,
however, you can't use nested translation, because it requires pinning
the second-level tables. For this case performance matters a bit,
because your invalidate-on-map needs to be fast, even if you enable lazy
mode and only receive inval-on-unmap every 10ms. It won't ever be as
fast as nested translation, though. For this case I think vSMMU+Caching
Mode and userspace virtio-iommu with MAP/UNMAP would perform similarly
(given page-sized payloads), because the pagetable walk doesn't add a
lot of overhead compared to the context switch. But given the results
below, vhost-iommu would be faster than vSMMU+CM.
(3) Then there is SVM. For SVM, any destructive change to the process
address space requires a synchronous invalidation command to the
hardware (at least when using PCI ATS). Given that SVM is based on page
faults, fault reporting from host to guest also needs to be fast, as
well as fault response from guest to host.
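To illustrate why this path is hot, here is a simplified sketch (the bond
structure and send helper are hypothetical names): something like this runs
on every munmap() or page reclaim touching the shared address space, and
has to complete synchronously before the memory is reused.

/* Sketch with hypothetical names: the MMU notifier fires on any unmap of
 * the shared address space, and must synchronously invalidate the IOMMU
 * TLB and the device ATC before the pages can be reused. */
static void viommu_mm_invalidate_range(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start, unsigned long end)
{
        struct viommu_bond *bond = container_of(mn, struct viommu_bond, mn);

        /* One INVALIDATE request for the whole range; the host forwards it
         * to the HW SMMU (and the ATC via ATS) and acks when it completes. */
        viommu_send_invalidate_sync(bond->viommu, bond->pasid, start, end);
}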
I think this is where performance matters the most. To get a feel for the
advantage we get with virtio-iommu, I compared the vSMMU page-table
sharing implementation [2] and vhost-iommu + VFIO with page table
sharing (based on Tomasz Nowicki's vhost-iommu prototype). That's on a
ThunderX2 with a 10Gb NIC assigned to the guest kernel, which
corresponds to case (2) above, with nested page tables and without
lazy mode. The host's only job is forwarding invalidations to the HW SMMU.
vhost-iommu performed on average 1.8x and 5.5x better than vSMMU on
netperf TCP_STREAM and TCP_MAERTS respectively (~200 samples). I think
this can be further optimized (that was still polling under the vq
lock), and unlike vSMMU, virtio-iommu offers the possibility of
multi-queue for improved scalability. In addition, the guest will need
to send both TLB and ATC invalidations with vSMMU, but virtio-iommu
allows multiplexing those, and invalidating ranges. Similarly, for fault
injection, having the ability to report page faults to the guest from
the host kernel should be significantly faster than having to go to
userspace and back to the kernel.
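(Concretely, "multiplex" means that a single request could cover both the
IOTLB and the ATC for an arbitrary range, along the lines of the
hypothetical layout below, whereas vSMMU needs separate CMD_TLBI_* and
CMD_ATC_INV commands.)

/* Hypothetical layout, for illustration only: one request invalidates an
 * arbitrary range in both the IOTLB and the device ATC in one round trip. */
struct virtio_iommu_req_invalidate {
        struct virtio_iommu_req_head    head;
        __le32                          domain;
        __le32                          pasid;          /* address space to invalidate */
        __le64                          virt_start;
        __le64                          virt_end;       /* a range, not a single page */
        __le32                          scope;          /* IOTLB, ATC, or both */
        struct virtio_iommu_req_tail    tail;
};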
Fascinating. Any data about host CPU utilization?
Eric, what do you think?
Is it true that SMMUv3 is fundamentally slow at the architecture level
and so a PV interface will always scale better until
a new hardware interface is designed?