On 8/28/2024 9:44 AM, Mathieu Poirier wrote:
On Mon, 26 Aug 2024 at 11:22, Doug Miller
<doug.miller@xxxxxxxxxxxxxxxxxxxx> wrote:
On 8/26/2024 11:50 AM, Mathieu Poirier wrote:
Apologies for the late reply - this got lost in the vacation email backlog.
On Mon, 26 Aug 2024 at 10:27, Dennis Dalessandro
<dennis.dalessandro@xxxxxxxxxxxxxxxxxxxx> wrote:
On 7/31/24 4:02 PM, Doug Miller wrote:
I am working on SR-IOV support for a new adapter which has shared
resources between the PF and VFs and requires an out-of-band (outside
It would have been a good idea to let people know what "PF" and "VF"
means to avoid confusion.
"PF" refers to the Physical Function of the PCI adapter - that which
exists always, regardless of whether SR-IOV is active. The "VF" refers
to the virtual function(s) that are created when SR-IOV is enabled and
configured. Typically, the VFs and the PF are assigned to different OS
instances running in different VMs. So, the OS that owns the PF needs to
be able to handle resource requests from the OSes that own the VFs (and
also send notifications).
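For anyone unfamiliar with the kernel side of this, VF creation on the
PF is driven through the PF driver's sriov_configure hook; a rough
sketch (the myadapter_* names here are made up, not our actual driver):

#include <linux/pci.h>

/*
 * Hypothetical PF driver hook, called when the admin writes a VF count
 * to /sys/bus/pci/devices/<PF>/sriov_numvfs.
 */
static int myadapter_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
	int ret;

	if (num_vfs == 0) {
		pci_disable_sriov(pdev);	/* tear down all VFs */
		return 0;
	}

	ret = pci_enable_sriov(pdev, num_vfs);	/* create the VFs */
	return ret ? ret : num_vfs;
}

/*
 * This gets wired into the PF's struct pci_driver via the
 * .sriov_configure member, next to its normal probe/remove callbacks.
 */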
Thank you for the clarification.
the adapter) communication mechanism to manage those resources. I have
been looking at RPMSG as a mechanism to communicate between the driver
on a guest (VM) and the driver on the host OS (which "owns" the
resources). It appears to me that virtio is intended for communication
between guests and host, and RPMSG over virtio is what I want to use.
Virtio is definitely the standard way to convey information between a
host and a guest. You can specify as many virtqueues as needed
(in-band and out-of-band) and it is widely supported. What
information is conveyed by the virtqueues and how it gets conveyed is
entirely up to the use case. Have a look at the specification of
existing virtio drivers to get a better idea [1]. If the driver you
are working with hasn't been standardised, I highly encourage you to
submit a draft for it. If it has been, then add to the current
specification.
All that said, you could use RPMSG as the protocol that runs on top of
the virtqueues - that should be fairly easy to do.
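For example, the guest-side probe of a made-up virtio driver, with one
virtqueue for requests and one for notifications, could look roughly
like the sketch below. VIRTIO_ID_MYADAPTER and the myadapter_* names
are placeholders (a real device ID has to be reserved in the virtio
spec), and the virtqueue-setup helpers have shifted a bit between
kernel versions, so treat this as a sketch only:

#include <linux/module.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

#define VIRTIO_ID_MYADAPTER	0x7fff	/* placeholder, not a reserved ID */

static void myadapter_req_done(struct virtqueue *vq)
{
	/* the host has consumed or returned a request buffer */
}

static void myadapter_notify(struct virtqueue *vq)
{
	/* the host posted an asynchronous notification */
}

static int myadapter_probe(struct virtio_device *vdev)
{
	struct virtqueue *vqs[2];
	vq_callback_t *cbs[] = { myadapter_req_done, myadapter_notify };
	static const char * const names[] = { "requests", "notifications" };
	int ret;

	/* one queue for resource requests, one for notifications */
	ret = virtio_find_vqs(vdev, 2, vqs, cbs, names, NULL);
	if (ret)
		return ret;

	virtio_device_ready(vdev);
	return 0;
}

static void myadapter_remove(struct virtio_device *vdev)
{
	virtio_reset_device(vdev);
	vdev->config->del_vqs(vdev);
}

static const struct virtio_device_id myadapter_ids[] = {
	{ VIRTIO_ID_MYADAPTER, VIRTIO_DEV_ANY_ID },
	{ }
};
MODULE_DEVICE_TABLE(virtio, myadapter_ids);

static struct virtio_driver myadapter_driver = {
	.driver.name	= "virtio-myadapter",
	.id_table	= myadapter_ids,
	.probe		= myadapter_probe,
	.remove		= myadapter_remove,
};
module_virtio_driver(myadapter_driver);
MODULE_LICENSE("GPL");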
I had initially started looking at using virtio directly, but it looked
like I was going to have to get a new device ID defined upstream and it
would be a significant effort compared to using an existing facility. I
then saw device ID VIRTIO_ID_RPMSG, which appears to be exactly what
we'd have to create if we were defining a new virtio device for what we
need. However, the problem has been understanding how to write code to
provide the rpmsg "device" side. There does not appear to be any
documentation and there is no example code to follow. It seems that the
device side is typically implemented in a GPU or accelerator, not in
code written for a Linux kernel. So I have many questions on how (and
when) to use the interfaces (rpmsg_register_device,
rpmsg_create_channel, rpmsg_create_ept, rpmsg_find_device, ...).
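The client side, by contrast, seems straightforward - something like
the sketch below, where the channel name and all the myadapter_* names
are invented - but I can't find the equivalent for the side that
provides and serves the channel:

#include <linux/module.h>
#include <linux/rpmsg.h>

/* called for every message received on the channel's default endpoint */
static int myadapter_rpmsg_cb(struct rpmsg_device *rpdev, void *data,
			      int len, void *priv, u32 src)
{
	dev_info(&rpdev->dev, "received %d bytes from 0x%x\n", len, src);
	return 0;
}

static int myadapter_rpmsg_probe(struct rpmsg_device *rpdev)
{
	static const char req[] = "resource request";

	/* rpdev->ept is the default endpoint created for this channel */
	return rpmsg_send(rpdev->ept, (void *)req, sizeof(req));
}

static void myadapter_rpmsg_remove(struct rpmsg_device *rpdev)
{
	dev_info(&rpdev->dev, "rpmsg channel went away\n");
}

static const struct rpmsg_device_id myadapter_rpmsg_ids[] = {
	{ .name = "myadapter-resources" },	/* made-up channel name */
	{ }
};
MODULE_DEVICE_TABLE(rpmsg, myadapter_rpmsg_ids);

static struct rpmsg_driver myadapter_rpmsg_driver = {
	.drv.name	= KBUILD_MODNAME,
	.id_table	= myadapter_rpmsg_ids,
	.probe		= myadapter_rpmsg_probe,
	.remove		= myadapter_rpmsg_remove,
	.callback	= myadapter_rpmsg_cb,
};
module_rpmsg_driver(myadapter_rpmsg_driver);
MODULE_LICENSE("GPL");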
VIRTIO_ID_RPMSG is a special case - it was defined to establish a
communication channel between a main processor (typically a Cortex-A)
and a remote processor, something like an M4 or an R5F. As such it is
typically used in conjunction with the "remoteproc" subsystem. The
device side you are looking for is part of the OpenAMP library [1]. I
am not aware of an implementation of a virtio device that would use
VIRTIO_ID_RPMSG in an MMIO area or a PCI config space to instantiate a
generic message passing interface.
[1]. https://github.com/OpenAMP/open-amp
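The device/remote side there ends up looking like the echo example in
that repository: once the platform code has produced a struct
rpmsg_device, the firmware registers a named endpoint and answers
requests from its callback. The sketch below is written from memory of
those examples (the service name is invented), so double-check the
exact signatures against the repository:

#include <openamp/open_amp.h>

#define SERVICE_NAME "myadapter-resources"	/* made-up channel name */

static struct rpmsg_endpoint svc_ept;

/* invoked for every message arriving on the endpoint */
static int svc_ept_cb(struct rpmsg_endpoint *ept, void *data, size_t len,
		      uint32_t src, void *priv)
{
	/* just echo the request back; a real service would act on it */
	rpmsg_send(ept, data, len);
	return RPMSG_SUCCESS;
}

static void svc_ept_unbind(struct rpmsg_endpoint *ept)
{
	rpmsg_destroy_ept(ept);
}

/* rdev comes from the platform/remoteproc setup in the examples */
int svc_init(struct rpmsg_device *rdev)
{
	/* announcing SERVICE_NAME makes the Linux side create a channel */
	return rpmsg_create_ept(&svc_ept, rdev, SERVICE_NAME,
				RPMSG_ADDR_ANY, RPMSG_ADDR_ANY,
				svc_ept_cb, svc_ept_unbind);
}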
I had been looking into OpenAMP, but was stuck on the examples, which
appear to run in userland and do not appear to be what is needed for
kernel modules. I'll look at the larger project to see if there is
something I can use.
I had assumed, and obviously could be wrong, that because
VIRTIO_ID_RPMSG uses virtio, it was suited for VM-to-host
communication. I was looking at the interfaces that are enabled via
CONFIG_RPMSG, and those appeared to be what is needed for the device
side, although it's not clear to me how they are used.
I ran across "pci-hyperv" which looks like it might be providing the
same communications path, but the only use case I'm finding is mlx5 and
I wonder if this is really intended for general (future) use. Also, it's
not clear to me yet just how the host side of that works (yet).
Thanks,
Mathieu
[1]. https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html
Can anyone confirm that RPMSG is capable of doing what we need? If so,
I'll need some help figuring out how to use it from kernel device
drivers (I've not been able to find any examples of implementing the
service/device side). If not, is there some other facility that is
better suited?
Hi Bjorn and Mathieu, any advice here for Doug? Adding linux-rdma folks as that
is where this will eventually target.
-Denny