On 9/19/2024 7:26 AM, Doug Miller wrote:
I am working on adding SR-IOV support for a new adapter and need a way to communicate between the guest and host drivers without going through the adapter hardware. I have been looking at RPMSG-over-VIRTIO as a way to do this, but have not been able to figure out how the host would set up the RPMSG device needed for it. I have seen at least one bug in virtio_rpmsg_bus that was discovered and fixed by QEMU developers, so I am hoping there is some rpmsg experience here that can help.

I see an example in the Linux kernel of using rpmsg from the guest (client) side, in samples/rpmsg/rpmsg_client_sample.c; condensed, it amounts to the sketch below. What I am having difficulty with is finding examples or documentation on how to do the host side. I have heard that the VMMs may also play a role in setting this up, or may be doing something similar themselves, but so far I have not been able to find code examples in QEMU or libvirt.
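Roughly, the guest side is just an rpmsg driver like this (condensed from rpmsg_client_sample.c; error handling and the sample's rx counter are trimmed):

    #include <linux/module.h>
    #include <linux/rpmsg.h>

    /* Condensed from samples/rpmsg/rpmsg_client_sample.c. The driver
     * binds to a channel whose name matches the id table entry; the
     * question is who announces that channel from the host side. */
    static int sample_cb(struct rpmsg_device *rpdev, void *data, int len,
                         void *priv, u32 src)
    {
            dev_info(&rpdev->dev, "got %d bytes from 0x%x\n", len, src);
            return 0;
    }

    static int sample_probe(struct rpmsg_device *rpdev)
    {
            /* rpdev->ept is the default endpoint of the channel the
             * other side announced; send an initial message on it */
            return rpmsg_send(rpdev->ept, "hello", 5);
    }

    static struct rpmsg_device_id sample_id_table[] = {
            { .name = "rpmsg-client-sample" },
            { },
    };
    MODULE_DEVICE_TABLE(rpmsg, sample_id_table);

    static struct rpmsg_driver sample_client = {
            .drv.name = KBUILD_MODNAME,
            .id_table = sample_id_table,
            .callback = sample_cb,
            .probe    = sample_probe,
    };
    module_rpmsg_driver(sample_client);

    MODULE_LICENSE("GPL v2");

Any help would be appreciated,
Doug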
I see a comment in drivers/pci/controller/pci-hyperv.c that describes what I'm looking for, except that what I need must work independently of the hypervisor/VMM: "Hyper-V SR-IOV provides a backchannel mechanism in software for communication between a VF driver and a PF driver..." It certainly seems like rpmsg-over-virtio could accomplish this, but some other VMM-independent facility would be fine too. I'm looking for any solution that does not require (major) modification of the kernel.

Can anyone state authoritatively that rpmsg-over-virtio is not capable of this? Is something like this backchannel offered in the Linux kernel? Is there something else that can be used to accomplish this PF-VF communication using existing kernel facilities?
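For concreteness, this is the VF-driver end I would expect to write if rpmsg-over-virtio can do this. It is only a sketch under my own assumptions: the channel name "pf-vf-backchannel" is invented, and it presumes something on the PF/host side announces that channel, which is exactly the part I cannot find:

    #include <linux/rpmsg.h>
    #include <linux/string.h>

    /* Hypothetical: "pf-vf-backchannel" is a name I made up; nothing
     * in-tree announces such a channel as far as I can tell. */
    static int backchannel_cb(struct rpmsg_device *rpdev, void *data,
                              int len, void *priv, u32 src)
    {
            /* handle a message arriving from the PF driver */
            return 0;
    }

    static struct rpmsg_endpoint *open_backchannel(struct rpmsg_device *rpdev)
    {
            struct rpmsg_channel_info chinfo = {
                    .src = RPMSG_ADDR_ANY,
                    .dst = RPMSG_ADDR_ANY,
            };

            strscpy(chinfo.name, "pf-vf-backchannel", sizeof(chinfo.name));
            return rpmsg_create_ept(rpdev, backchannel_cb, NULL, chinfo);
    }

The open question is what, on the PF/host side, would cause such a channel to appear on the VF's virtio-rpmsg bus in the first place.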