On 9/11/2024 11:12 AM, Mathieu Poirier wrote:
On Tue, 10 Sept 2024 at 09:43, Doug Miller
<doug.miller@xxxxxxxxxxxxxxxxxxxx> wrote:
On 9/10/2024 10:13 AM, Mathieu Poirier wrote:
On Tue, Sep 10, 2024 at 08:12:07AM -0500, Doug Miller wrote:
On 9/3/2024 10:52 AM, Doug Miller wrote:
I am trying to learn how to create an RPMSG-over-VIRTIO device
(service) in order to perform communication between a host driver and
a guest driver. The RPMSG-over-VIRTIO driver (client) side is fairly
well documented and there is a good example (starting point, at least)
in samples/rpmsg/rpmsg_client_sample.c.
I see that I can create an endpoint (struct rpmsg_endpoint) using
rpmsg_create_ept(), and from there I can use rpmsg_send() et al. and
the rpmsg_rx_cb_t cb to perform the communications. However, this
requires a struct rpmsg_device, and it is not clear how to get one
that is suitable for this purpose.
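To make that concrete, what I have in mind on the endpoint side is roughly
the sketch below; the my_* names and the channel name are made up, and the
struct rpmsg_device pointer is exactly the piece I don't know how to obtain:

#include <linux/rpmsg.h>

/* Callback invoked for messages arriving on the endpoint created below. */
static int my_ept_cb(struct rpmsg_device *rpdev, void *data, int len,
		     void *priv, u32 src)
{
	dev_info(&rpdev->dev, "received %d bytes from 0x%x\n", len, src);
	return 0;
}

/* Sketch: create an endpoint and send a message, given some rpmsg device. */
static int my_open_ept(struct rpmsg_device *rpdev)
{
	struct rpmsg_channel_info chinfo = {
		.name = "my-host-guest-chan",	/* made-up channel name */
		.src  = RPMSG_ADDR_ANY,
		.dst  = RPMSG_ADDR_ANY,
	};
	struct rpmsg_endpoint *ept;

	ept = rpmsg_create_ept(rpdev, my_ept_cb, NULL, chinfo);
	if (!ept)
		return -ENOMEM;

	return rpmsg_send(ept, "hello", 5);
}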
It appears that one or both of rpmsg_create_channel() and
rpmsg_register_device() are needed in order to obtain a device for the
specific host-guest communications channel. At some point, a "root"
device is needed that will use virtio (VIRTIO_ID_RPMSG) such that new
subdevices can be created for each host-guest pair.
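In other words, I picture the per-pair channel setup looking something like
the sketch below, where parent_rpdev stands in for that "root" device and is
precisely the part I cannot figure out how to obtain:

/* Sketch: create a child rpmsg device (channel) for one host-guest pair.
 * parent_rpdev is hypothetical -- it would be the "root" rpmsg-over-virtio
 * device mentioned above, however that is obtained.
 */
static struct rpmsg_device *my_create_channel(struct rpmsg_device *parent_rpdev)
{
	struct rpmsg_channel_info chinfo = {
		.name = "my-host-guest-chan",	/* made-up channel name */
		.src  = RPMSG_ADDR_ANY,
		.dst  = RPMSG_ADDR_ANY,
	};

	/* Creates and registers a child rpmsg device for this channel;
	 * endpoints would then be created on the returned device. */
	return rpmsg_create_channel(parent_rpdev, &chinfo);
}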
In addition, building a kernel with CONFIG_RPMSG, CONFIG_RPMSG_VIRTIO,
and CONFIG_RPMSG_NS set, and doing a modprobe virtio_rpmsg_bus, seems
to get things set up, but that does not result in the creation of any "root"
rpmsg-over-virtio device. Presumably, any such device would have to be
set up to use a specific range of addresses and also be tied to
virtio_rpmsg_bus to ensure that virtio is used.
It is also not clear if/how register_rpmsg_driver() will be required
on the rpmsg driver side; the sample code does not call it directly,
but registers itself through the module_rpmsg_driver() wrapper.
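For reference, the driver-side skeleton I am starting from mirrors the
sample and looks roughly like this (the function names and channel name
are mine, not from any existing driver):

#include <linux/module.h>
#include <linux/rpmsg.h>

static int my_drv_cb(struct rpmsg_device *rpdev, void *data, int len,
		     void *priv, u32 src)
{
	/* messages sent to this device's default endpoint arrive here */
	return 0;
}

static int my_drv_probe(struct rpmsg_device *rpdev)
{
	/* probed when a channel matching the id table below is announced */
	return rpmsg_send(rpdev->ept, "ping", 4);
}

static const struct rpmsg_device_id my_drv_id_table[] = {
	{ .name = "my-host-guest-chan" },	/* made-up channel name */
	{ },
};
MODULE_DEVICE_TABLE(rpmsg, my_drv_id_table);

static struct rpmsg_driver my_rpmsg_drv = {
	.drv.name	= KBUILD_MODNAME,
	.id_table	= my_drv_id_table,
	.probe		= my_drv_probe,
	.callback	= my_drv_cb,
};
module_rpmsg_driver(my_rpmsg_drv);

MODULE_LICENSE("GPL");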
So, first questions are:
* Am I looking at the correct interfaces in order to create the host
rpmsg device side?
* What needs to be done to get a "root" rpmsg-over-virtio device
created (if required)?
* How is a rpmsg-over-virtio device created for each host-guest driver
pair, for use with rpmsg_create_ept()?
* Does the guest side (rpmsg driver) require any special handling to
plug in to the host driver (rpmsg device) side, aside from using the
correct addresses to match the device side?
It looks to me as though the virtio_rpmsg_bus module can create a
"rpmsg_ctl" device, which could be used to create channels from which
endpoints could be created. However, when I load the virtio_rpmsg_bus,
rpmsg_ns, and rpmsg_core modules, no "rpmsg_ctl" device is created
(this is running in the host OS, before any VMs are created/run).
At this time the modules stated above are all used when a main processor is
controlling a remote processor, i.e. via the remoteproc subsystem. I do not know
of an implementation where VIRTIO_ID_RPMSG is used in the context of a
host/guest scenario. As such you will find yourself in uncharted territory.
At some point there were discussions within the OpenAMP body to standardize the
remoteproc subsystem's establishment of virtqueues to conform to a host/guest
scenario, but the effort was abandoned. That would have been a step in the right direction
for what you are trying to do.
I was looking at some existing rpmsg code, and it appeared to me that
some adapters, like the "qcom" ones, create an rpmsg device that
provides specialized methods for talking to the remote processor(s). I
have assumed this is because that hardware does not allow for running
something remotely that can utilize the virtio queues directly, and so
these rpmsg devices provide code to do the communication with their
hardware. What's not clear is whether these devices are using
rpmsg-over-virtio or if they are creating their own rpmsg facility (and
whether they even support guest-host communication).
The QC implementation is different and does not use virtio - there is
a special HW interface between the main and the remote processors.
That configuration is valid since RPMSG can be implemented over
anything.
What I'm also wondering is what needs to be done differently for virtio
when communicating guest-host vs. local CPU to remote processor.
From a kernel/guest perspective, not much should be needed. That said,
the VMM will need to be supplemented with extra configuration
capabilities to instantiate the virtio-rpmsg device. But that is just
off the top of my head without seriously looking at the use case.
From a virtio-bus perspective, there might be an issue if a platform
is using remote processors _and_ also instantiating VMs that
configure a virtio-rpmsg device. Again, that is just off the top of
my head but needs to be taken into account.
I am new to rpmsg and virtio, and so my understanding of internals is
still very limited. Is there someone I can work with to determine what
needs to be done here? I am guessing that virtio either automatically
adapts to guest-host or rproc-host - in which case no changes may be
required - or else it requires a different setup and rpmsg will need to
be extended to allow for that. If there are changes to rpmsg required,
we'll want to get those submitted as soon as possible. One complication
for submitting our driver changes is that they are part of a much larger
effort to support new hardware, and it may not be possible to submit
them together with rpmsg changes.
I was hoping that RPMSG-over-VIRTIO would be easily adapted to what we need.
If we have to create a new virtio device (one that is nearly identical to
rpmsg), that is going to push out SR-IOV support a great deal, plus
require cloning a lot of existing code for a new purpose.
Duplication of code would not be a viable way forward.
Reusing/enhancing/fixing what is currently available is definitely a
better option.
Our only other alternative is to do something to allow guest-host
communication to use the fabric loopback, which is not at all desirable
and has many issues of its own.
Is this the correct way to use RPMSG-over-VIRTIO? If so, what actions
need to be taken to cause a "rpmsg_ctl" device to be created? What
method would be used (in a kernel driver) to get a pointer to the
"rpmsg_ctl" device, for use with rpmsg_create_channel()?
Thanks,
Doug