Re: How to create/use RPMSG-over-VIRTIO devices in Linux

On 9/13/2024 9:39 AM, Mathieu Poirier wrote:
On Fri, 13 Sept 2024 at 05:46, Doug Miller
<doug.miller@xxxxxxxxxxxxxxxxxxxx> wrote:
On 9/12/2024 10:10 AM, Mathieu Poirier wrote:
On Wed, Sep 11, 2024 at 12:24:07PM -0500, Doug Miller wrote:
On 9/11/2024 11:12 AM, Mathieu Poirier wrote:
On Tue, 10 Sept 2024 at 09:43, Doug Miller
<doug.miller@xxxxxxxxxxxxxxxxxxxx> wrote:
On 9/10/2024 10:13 AM, Mathieu Poirier wrote:
On Tue, Sep 10, 2024 at 08:12:07AM -0500, Doug Miller wrote:
On 9/3/2024 10:52 AM, Doug Miller wrote:
I am trying to learn how to create an RPMSG-over-VIRTIO device
(service) in order to perform communication between a host driver and
a guest driver. The RPMSG-over-VIRTIO driver (client) side is fairly
well documented and there is a good example (starting point, at least)
in samples/rpmsg/rpmsg_client_sample.c.

I see that I can create an endpoint (struct rpmsg_endpoint) using
rpmsg_create_ept(), and from there I can use rpmsg_send() et al. and an
rpmsg_rx_cb_t callback to perform the communications. However, this
requires a struct rpmsg_device, and it is not clear just how to get one
that is suitable for this purpose.
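To make my mental model concrete, here is roughly what I have in mind,
assuming I somehow had a suitable struct rpmsg_device in hand (names
like my_rx_cb and "my-service" are placeholders; the calls are the ones
declared in include/linux/rpmsg.h):

#include <linux/rpmsg.h>

/* placeholder receive callback; signature is rpmsg_rx_cb_t */
static int my_rx_cb(struct rpmsg_device *rpdev, void *data, int len,
		    void *priv, u32 src)
{
	dev_info(&rpdev->dev, "received %d bytes from 0x%x\n", len, src);
	return 0;
}

static int my_setup(struct rpmsg_device *rpdev)
{
	struct rpmsg_channel_info chinfo = {
		.name = "my-service",		/* placeholder name */
		.src  = RPMSG_ADDR_ANY,
		.dst  = RPMSG_ADDR_ANY,
	};
	static char msg[] = "hello";
	struct rpmsg_endpoint *ept;

	/* note: rpmsg_create_ept() takes chinfo by value */
	ept = rpmsg_create_ept(rpdev, my_rx_cb, NULL, chinfo);
	if (!ept)
		return -ENOMEM;

	return rpmsg_send(ept, msg, sizeof(msg));
}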

It appears that one or both of rpmsg_create_channel() and
rpmsg_register_device() are needed in order to obtain a device for the
specific host-guest communications channel. At some point, a "root"
device is needed that will use virtio (VIRTIO_ID_RPMSG) such that new
subdevices can be created for each host-guest pair.
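In other words, I would expect the per-pair channel to come from
something like the following, given some parent/"root" rpmsg device
(entirely hypothetical; where that parent comes from is exactly my
question):

#include <linux/rpmsg.h>

/* hypothetical: assumes a parent rpmsg device already exists */
static struct rpmsg_device *my_make_channel(struct rpmsg_device *parent)
{
	struct rpmsg_channel_info chinfo = {
		.name = "my-host-guest-channel",	/* placeholder */
		.src  = RPMSG_ADDR_ANY,
		.dst  = RPMSG_ADDR_ANY,
	};

	/* exported by rpmsg_core; takes chinfo by pointer here */
	return rpmsg_create_channel(parent, &chinfo);
}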

In addition, building a kernel with CONFIG_RPMSG, CONFIG_RPMSG_VIRTIO,
and CONFIG_RPMSG_NS set, and doing a modprobe virtio_rpmsg_bus, seems
to get things set up, but that does not result in the creation of any
"root" rpmsg-over-virtio device. Presumably, any such device would have
to be set up to use a specific range of addresses and also be tied to
virtio_rpmsg_bus to ensure that virtio is used.

It is also not clear if/how register_rpmsg_driver() will be required
on the rpmsg driver side; the sample code uses it only indirectly, via
the module_rpmsg_driver() macro.
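For completeness, my reading of rpmsg_client_sample.c is that the
guest/driver side boils down to the following pattern (identifiers here
are placeholders; the id_table name must match the channel name used on
the device side):

#include <linux/module.h>
#include <linux/rpmsg.h>

static int my_cb(struct rpmsg_device *rpdev, void *data, int len,
		 void *priv, u32 src)
{
	dev_info(&rpdev->dev, "received %d bytes from 0x%x\n", len, src);
	return 0;
}

static int my_probe(struct rpmsg_device *rpdev)
{
	static char msg[] = "ping";

	/* rpdev->ept is the default endpoint created for the channel */
	return rpmsg_send(rpdev->ept, msg, sizeof(msg));
}

static void my_remove(struct rpmsg_device *rpdev)
{
}

static struct rpmsg_device_id my_id_table[] = {
	{ .name = "my-service" },	/* must match the device-side name */
	{ },
};
MODULE_DEVICE_TABLE(rpmsg, my_id_table);

static struct rpmsg_driver my_rpmsg_driver = {
	.drv.name	= KBUILD_MODNAME,
	.id_table	= my_id_table,
	.probe		= my_probe,
	.callback	= my_cb,
	.remove		= my_remove,
};
module_rpmsg_driver(my_rpmsg_driver);

MODULE_LICENSE("GPL");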

So, first questions are:

* Am I looking at the correct interfaces in order to create the host
rpmsg device side?
* What needs to be done to get a "root" rpmsg-over-virtio device
created (if required)?
* How is a rpmsg-over-virtio device created for each host-guest driver
pair, for use with rpmsg_create_ept()?
* Does the guest side (rpmsg driver) require any special handling to
plug-in to the host driver (rpmsg device) side? Aside from using the
correct addresses to match device side.
It looks to me as though the virtio_rpmsg_bus module can create a
"rpmsg_ctrl" device, which could be used to create channels from which
endpoints could be created. However, when I load the virtio_rpmsg_bus,
rpmsg_ns, and rpmsg_core modules there is no "rpmsg_ctrl" device created
(this is running in the host OS, before any VMs are created/run).

At this time the modules stated above are all used when a main processor is
controlling a remote processor, i.e., via the remoteproc subsystem.  I do not
know of an implementation where VIRTIO_ID_RPMSG is used in the context of a
host/guest scenario.  As such, you will find yourself in uncharted territory.

At some point there were discussions via the OpenAMP body to standardize the
remoteproc subsystem's establishment of virtqueues to conform to a host/guest
scenario, but the effort was abandoned.  That would have been a step in the
right direction for what you are trying to do.
I was looking at some existing rpmsg code, and it appeared to me that
some adapters, like the "qcom" ones, create an rpmsg device that
provides specialized methods for talking to the remote processor(s). I
have assumed this is because that hardware does not allow running
something remotely that can utilize the virtio queues directly, so
these rpmsg devices provide code to do the communication with their
hardware. What's not clear is whether these devices are using
rpmsg-over-virtio or creating their own rpmsg facility (and whether
they even support guest-host communication).

The QC implementation is different and does not use virtio - there is
a special HW interface between the main and the remote processors.
That configuration is valid since RPMSG can be implemented over
anything.
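For reference, a transport plugs into the RPMSG core by filling in the
ops from drivers/rpmsg/rpmsg_internal.h and registering its devices
with rpmsg_register_device().  A very rough sketch with invented names
- qcom_smd.c and virtio_rpmsg_bus.c both follow this shape:

#include <linux/rpmsg.h>
#include "rpmsg_internal.h"	/* only visible to in-tree transports */

/*
 * Sketch only: a real backend would return an endpoint wired to its
 * own mailbox/shared-memory link.
 */
static struct rpmsg_endpoint *my_create_ept(struct rpmsg_device *rpdev,
					    rpmsg_rx_cb_t cb, void *priv,
					    struct rpmsg_channel_info chinfo)
{
	return NULL;	/* real code allocates and fills an endpoint */
}

static const struct rpmsg_device_ops my_ops = {
	.create_ept = my_create_ept,
	/* .create_channel, .announce_create, etc. where needed */
};

/* a transport sets rpdev->ops = &my_ops, then rpmsg_register_device() */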

What I'm also wondering is what needs to be done differently for virtio
when communicating guest-host vs. local CPU to remote processor.
From a kernel/guest perspective, not much should be needed.  That said,
the VMM will need to be supplemented with extra configuration
capabilities to instantiate the virtio-rpmsg device.  But that is just
off the top of my head without seriously looking at the use case.

From a virtio-bus perspective, there might be an issue if a platform
is using remote processors _and_ also instantiating VMs that
configure a virtio-rpmsg device.  Again, that is just off the top of
my head but needs to be taken into account.
I am new to rpmsg and virtio, and so my understanding of internals is
still very limited. Is there someone I can work with to determine what
needs to be done here? I am guessing that virtio either automatically
adapts to guest-host or rproc-host - in which case no changes may be
required - or else it requires a different setup and rpmsg will need to
be extended to allow for that. If there are changes to rpmsg required,
we'll want to get those submitted as soon as possible. One complication
for submitting our driver changes is that they are part of a much larger
effort to support new hardware, and it may not be possible to submit
them together with the rpmsg changes.
The virtio part won't be a problem.  In your case what is missing is the glue
that will set up the virtqueues and install the RPMSG protocol on top of them.
The 'glue' is the new virtio-rpmsg device that needs to be created.  That part
includes the creation of a new virtio device by the VMM and a kernel driver that
can be called from the virtio_bus once it has been discovered.
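In kernel terms, that glue is a virtio driver matching VIRTIO_ID_RPMSG,
much like the probe in virtio_rpmsg_bus.c.  A rough skeleton with
invented names, error handling trimmed, and using the virtio_find_vqs()
signature from v6.10-era kernels (newer kernels switched to a struct
virtqueue_info array):

#include <linux/module.h>
#include <linux/virtio.h>
#include <linux/virtio_ids.h>
#include <linux/virtio_config.h>

static void my_recv_done(struct virtqueue *rvq) { /* kick rx work */ }
static void my_xmit_done(struct virtqueue *svq) { /* reclaim tx bufs */ }

static int my_probe(struct virtio_device *vdev)
{
	vq_callback_t *cbs[] = { my_recv_done, my_xmit_done };
	static const char * const names[] = { "input", "output" };
	struct virtqueue *vqs[2];
	int ret;

	/* one rx and one tx virtqueue, as virtio_rpmsg_bus uses */
	ret = virtio_find_vqs(vdev, 2, vqs, cbs, names, NULL);
	if (ret)
		return ret;

	virtio_device_ready(vdev);
	/* ...then register rpmsg devices/endpoints on top of the vqs */
	return 0;
}

static void my_remove(struct virtio_device *vdev)
{
	virtio_reset_device(vdev);
	vdev->config->del_vqs(vdev);
}

static const struct virtio_device_id my_id_table[] = {
	{ VIRTIO_ID_RPMSG, VIRTIO_DEV_ANY_ID },
	{ 0 },
};

static struct virtio_driver my_driver = {
	.driver.name	= KBUILD_MODNAME,
	.id_table	= my_id_table,
	.probe		= my_probe,
	.remove		= my_remove,
};
module_virtio_driver(my_driver);

MODULE_LICENSE("GPL");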
I don't completely follow. Is there some KVM configuration option that
causes the virtio-rpmsg device to be created? And then our host driver
will need to be able to respond to some notification and dynamically
adapt to each VMM being started? I'm not getting a clear picture of how
this works. I'm also not clear on the responsibilities of our guest
driver(s) vs. our host driver. For virtio I saw there was the concept of
a "driver" side and a "device" side, and the guest seemed to be creating
the driver and the host created the device. The rpmsg layer seems to be
more complex in that area, so I'm not sure what actions our guest driver
will take vs. our host driver.
KVM has nothing to do with this.  The life of a virtio device starts
in the VMM (Virtual Machine Manager) where a backend device is created
and a virtio MMIO entry for that device is added to the device tree
that is fed to the VM kernel.  When the VM kernel boots the virtio
MMIO entry in the DT is parsed as part of the normal device discovery
process and a virtio-device is instantiated, added to the virtio-bus
and a driver is probed.

I suggest you start looking at that process using the kvmtool and a
simple virtio device such as virtio-rng.
Looking at the virtio-rng code in kvmtool, I must be missing something.
That looks like it is userland code and never calls into the kernel to
actually create any sort of device for VIRTIO_ID_RNG. It appears to just
add it to a private device list, and I'm not finding any place where
that list gets turned into real devices.

Are you saying that the virtio device on the host is not created until
the VM boots the guest kernel - meaning the VM kernel/driver must take
some action to cause the device to be created on the host? I was
expecting that, when the host boots, our driver would create some
sort of device or entry, and when guests boot, our driver there would
register and get matched to the host device. It would really help if I
could see an end-to-end example of this working, but I need some help
identifying the various components involved.

Is it going to be necessary to modify the VMMs to get virtio_rpmsg_bus
devices created?
Everything in the virtio and RPMSG subsystems is already tailored to support
all of this, so no changes should be needed.  As for the VMM, I suggest
starting with kvmtool.  Lastly, none of this requires "real" hardware or your
specific hardware - it can all be done from QEMU.

I was hoping that RPMSG-over-VIRTIO would be easily adapted to what we
need. If we have to create a new virtio device (that is nearly identical
to rpmsg), that is going to push out SR-IOV support a great deal, plus
require cloning a lot of existing code for a new purpose.
Duplication of code would not be a viable way forward.
Reusing/enhancing/fixing what is currently available is definitely a
better option.

Our only other alternative is to do something to allow guest-host
communication to use the fabric loopback, which is not at all desirable
and has many issues of its own.

Is this the correct way to use RPMSG-over-VIRTIO? If so, what actions
need to be taken to cause a "rpmsg_ctrl" device to be created? What
method would be used (in a kernel driver) to get a pointer to the
"rpmsg_ctrl" device, for use with rpmsg_create_channel()?

Thanks,
Doug
