Re: [net-next v2 1/1] virtual-bus: Implementation of Virtual Bus

On 2019/11/20 9:38 PM, Jason Gunthorpe wrote:
> On Tue, Nov 19, 2019 at 10:59:20PM -0500, Jason Wang wrote:

>>>> The interface between vfio and userspace is
>>>> based on virtio, which is IMHO much better than
>>>> a vendor specific one. Userspace stays vendor agnostic.
>>> Why is that even a good thing? It is much easier to provide drivers
>>> via qemu/etc in user space than it is to make kernel upgrades. We've
>>> learned this lesson many times.
>> For upgrades, since we have a unified interface, it could be done
>> through:
>>
>> 1) switch the datapath from hardware to software (e.g. vhost)
>> 2) unload and load the driver
>> 3) switch the datapath back
>>
>> Having drivers in user space has other issues; there are a lot of
>> customers who want to stick to kernel drivers.
> So you want to support upgrade of kernel modules, but runtime
> upgrading the userspace part is impossible? Seems very strange to me.


Since you were talking about kernel upgrades, I was pointing out that they are technically possible.



>>> This is why we have had the philosophy that if it doesn't need to be
>>> in the kernel it should be in userspace.
>> Let me clarify again. This framework aims to support both kernel
>> drivers and userspace drivers. This series only contains the kernel
>> driver part. What it does is allow the kernel virtio driver to
>> control vDPA devices. Then we can provide a unified interface for
>> VMs, containers and bare metal. For this use case, I don't see a way
>> to leave the driver in userspace other than injecting traffic back
>> through vhost/TAP, which is ugly.
> Binding to the other kernel virtio drivers is a reasonable
> justification, but none of this comes through in the patch cover
> letters or patch commit messages.


The cover letter had this (of course I'm not a native speaker, but I will try my best to make it more readable in the next version):

"
There is hardware that can do virtio datapath offloading while having
its own control path. This patch series tries to implement an mdev
based unified API to support using the kernel virtio driver to drive
those devices. This is done by introducing a new mdev transport for
virtio (virtio_mdev) which registers itself as a new kind of mdev
driver. It then provides a unified way for the kernel virtio driver
to talk with the mdev device implementation.

Though the series only contains kernel driver support, the goal is to
make the transport generic enough to support userspace drivers. This
means vhost-mdev[1] could be built on top as well by reusing the
transport.
"



>>>> That has lots of security and portability implications and isn't
>>>> appropriate for everyone.
>>> This is already using vfio. It doesn't make sense to claim that using
>>> vfio properly is somehow less secure or less portable.
>>>
>>> What I find particularly ugly is that this 'IFC VF NIC' driver
>>> pretends to be a mediated vfio device, but actually bypasses all the
>>> mediated device ops for managing dma security and just directly plugs
>>> the system IOMMU for the underlying PCI device into vfio.
>> Well, VFIO has multiple types of APIs. The design is to stick to the
>> VFIO DMA model (i.e. how containers work) to make the DMA API work
>> for userspace drivers.
> Well, it doesn't; that model, for security, is predicated on vfio
> being the exclusive owner of the device. For instance, if the kernel
> driver were to perform DMA as well then security would be lost.


It's the responsibility of the kernel mdev driver to preserve DMA
isolation. And it's possible that the mdev needs to communicate with
the master (PF or others) using its own memory; this should be
allowed.


> I suppose this little hack is what is motivating this abuse of vfio in
> the first place?
>
> Frankly I think a kernel driver touching a PCI function for which vfio
> is now controlling the system IOMMU is a violation of the security
> model, and I'm very surprised AlexW didn't NAK this idea.
>
> Perhaps it is because none of the patches actually describe how the
> DMA security model for this so-called mediated device works? :(
>
> Or perhaps it is because this submission is split up so much it is
> hard to see what is being proposed? (I note this IFC driver is the
> first user of the mdev_set_iommu_device() function)
>> Are you objecting to the mdev_set_iommu_device() stuff here?
> I'm questioning if it fits the vfio PCI device security model, yes.

>>>> It is the kernel's job to abstract hardware away and present a
>>>> unified interface as far as possible.
>>> Sure, you could create a virtio accelerator driver framework in our
>>> new drivers/accel I hear was started. That could make some sense, if
>>> we had HW that actually required/benefited from kernel involvement.
>> The framework is not designed specifically for your card. It tries to
>> be generic enough to support every type of virtio hardware device; it
>> is not tied to any bus (e.g. PCI) or any vendor. So it's not only a
>> question of how to slice a PCIe ethernet device.
> That doesn't explain why this isn't some new driver subsystem


vhost-mdev is a vfio-mdev device. It sticks to the VFIO programming
model. Any reason to reinvent the wheel?


> and instead treats vfio as a driver multiplexer.


I fail to understand this. VFIO already supports PCI, AP and mdev, and
possibly other buses (e.g. vmbus) in the future. VFIO is not PCI
specific, so why require vfio-mdev to be PCI specific?

Thanks


Jason





