remoteproc over PCIe
- To: linux-remoteproc@xxxxxxxxxxxxxxx
- Subject: remoteproc over PCIe
- From: Simon Maurer <mail@maurer.systems>
- Date: Tue, 9 May 2023 15:42:27 +0200
- User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.5.1
Hi,
I've got a "Zynq PCIe FMC Carrier Evaluation Board" attached to an x86 host. The Zynq 7000 is an SoC that combines FPGA fabric with two Cortex-A9 cores, on which ZephyrOS with the OpenAMP framework is running. The VirtIO rings and the RPMsg buffers are located in the nocache memory section of ZephyrOS. The card's DDR RAM and the CPU control registers are mapped into PCIe BARs. On the FPGA side the "AXI Memory Mapped To PCI Express" IP core is used, so the kernel has MMIO access to the card's DDR RAM.
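For context, the host side boils down to a small PCI driver that maps those BARs. Roughly, it looks like this (simplified sketch; the BAR assignment is specific to my bitstream and the device IDs are placeholders):

```c
/* Simplified sketch of the PCIe glue driver; not the exact code.
 * BAR layout and PCI IDs are specific to this bitstream. */
#include <linux/module.h>
#include <linux/pci.h>

static int zynq_rproc_pci_probe(struct pci_dev *pdev,
				const struct pci_device_id *id)
{
	void __iomem *ddr;
	void __iomem *ctrl;
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	/* BAR 0: card DDR RAM (vrings + RPMsg buffers live here),
	 * BAR 1: CPU control registers -- assumed layout. */
	ddr = pcim_iomap(pdev, 0, 0);
	ctrl = pcim_iomap(pdev, 1, 0);
	if (!ddr || !ctrl)
		return -ENOMEM;

	/* ...register the rproc instance, hand it the mapped regions... */
	return 0;
}
```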
Besides the kernel module, I had to make a few modifications elsewhere in the kernel. In remoteproc_virtio.c I implemented the get_shm_region callback of rproc_virtio_config_ops. This gives access to the RPMsg buffer, which is already mapped. In virtio_rpmsg_bus.c this callback is then used instead of allocating a new region. This is just a proof of concept, but it seems to be working: ttyRPMSG is created and I can send and receive messages.
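In outline, the callback does something like this (simplified, not the exact code; the lookup via the existing "vdevXbuffer" carveout is how I read the surrounding code in remoteproc_core.c):

```c
/* remoteproc_virtio.c -- simplified sketch, not the exact code.
 * Exposes the already-mapped vdev buffer carveout through
 * virtio_config_ops->get_shm_region, so virtio_rpmsg_bus can use it
 * instead of calling dma_alloc_coherent() itself. */
static bool rproc_virtio_get_shm_region(struct virtio_device *vdev,
					struct virtio_shm_region *region,
					u8 id)
{
	struct rproc_vdev *rvdev = vdev_to_rvdev(vdev);
	struct rproc_mem_entry *mem;

	/* Reuse the preallocated "vdevXbuffer" carveout. */
	mem = rproc_find_carveout_by_name(rvdev->rproc, "vdev%dbuffer",
					  rvdev->index);
	if (!mem)
		return false;

	region->addr = mem->dma;
	region->len = mem->len;
	return true;
}
```

On the virtio_rpmsg_bus.c side, rpmsg_probe() then calls virtio_get_shm_region() and skips the dma_alloc_coherent() path when a region is returned.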
But what would be the clean way to do this? I'm thinking about
implementing dma_map_ops for the vdev, but maybe there is a better solution?
Best regards,
Simon