On Sun, Feb 25, 2024 at 09:39:26PM +0530, Manivannan Sadhasivam wrote:
> On Sat, Feb 24, 2024 at 10:03:59PM +0100, Wadim Mueller wrote:
> > Hello,
> >
> > This series adds support for the Block Passthrough PCI(e) endpoint functionality.
> > PCI Block Device Passthrough allows a Linux device running in EP mode to expose
> > its block devices to the PCI(e) host (RC). The device can export either the full
> > disk or just certain partitions. Export in read-only mode is also possible. This
> > is useful if you want to share the same block device between different SoCs,
> > providing each SoC its own partition(s).
> >
> > Block Passthrough
> > =================
> > The PCI Block Passthrough can be a useful feature if you have multiple SoCs in
> > your system connected through a PCI(e) link, one running in RC mode, the other
> > in EP mode, where the block devices are connected to one SoC (SoC2 in EP mode in
> > the diagram below) and you want to access them from the other SoC (SoC1 in RC
> > mode below), which has no direct connection to those block devices (e.g. if you
> > want to share an NVMe between two SoCs). A simple example of such a
> > configuration is shown below:
> >
> >                                                        +-------------+
> >                                                        |             |
> >                                                        |   SD Card   |
> >                                                        |             |
> >                                                        +------^------+
> >                                                               |
> >                                                               |
> > +--------------------------+                +-----------------v----------------+
> > |                          |     PCI(e)     |                                  |
> > |         SoC1 (RC)        |<-------------->|             SoC2 (EP)            |
> > | (CONFIG_PCI_REMOTE_DISK) |                |(CONFIG_PCI_EPF_BLOCK_PASSTHROUGH)|
> > |                          |                |                                  |
> > +--------------------------+                +-----------------^----------------+
> >                                                               |
> >                                                               |
> >                                                        +------v------+
> >                                                        |             |
> >                                                        |    NVMe     |
> >                                                        |             |
> >                                                        +-------------+
> >
> > This is, to a certain extent, similar to the functionality NBD exposes over the
> > network, but on the PCI(e) bus, utilizing the EPC/EPF kernel framework.
> >
> > The endpoint function driver creates parallel queues which run on separate CPU
> > cores using percpu structures.
> > The number of parallel queues is limited by the number of CPUs on the EP
> > device. The actual number of queues is configurable (as are all other features
> > of the driver) through configfs.
> >
> > Documentation with a functional description as well as a user guide showing how
> > both drivers can be configured is part of this series.
> >
> > Test setup
> > ==========
> >
> > This series has been tested on an NXP S32G2 SoC running in endpoint mode with a
> > direct connection to an ARM64 host machine.
> >
> > A performance measurement on the described setup shows good results. The S32G2
> > SoC has a 2xGen3 link with a maximum bandwidth of ~2GiB/s. With the described
> > setup, a read data rate of 1.3GiB/s (with DMA; without DMA the speed saturated
> > at ~200MiB/s) was achieved using a 512GiB Kingston NVMe when accessing the NVMe
> > from the ARM64 (SoC1) host. The local read data rate when accessing the NVMe
> > directly from the S32G2 (SoC2) was around 1.5GiB/s.
> >
> > The measurement was done with the FIO tool [1] using 4kiB blocks.
> >
> > [1] https://linux.die.net/man/1/fio
> >
>
> Thanks for the proposal! We are planning to add virtio function support to the
> endpoint subsystem to cover use cases like this. I think your use case can be
> satisfied using virtio-blk. Maybe you can add the virtio-blk endpoint function
> support once we have the infra in place. Thoughts?
>
> - Mani
>

Hi Mani,

I initially had the plan to implement virtio-blk as an endpoint function driver
instead of a self-baked driver.
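For reference, endpoint functions are typically instantiated through the generic
pci_ep configfs interface. A rough sketch of how this EPF might be set up follows;
the generic mkdir/ln flow is the standard endpoint configfs procedure, but the
function name and the driver-specific attribute names (`<device-attr>`,
`<num-queues-attr>`, `<epc-name>`) are placeholders, since the actual names are
defined by the drivers in this series:

```shell
# Generic PCI endpoint configfs flow
# (see Documentation/PCI/endpoint/pci-endpoint-cfs.rst in the kernel tree)
cd /sys/kernel/config/pci_ep
mkdir functions/pci_epf_block_passthru/func1

# Driver-specific attributes -- names are illustrative placeholders only;
# the real attribute names come from the EPF driver added by this series
echo /dev/nvme0n1p1 > functions/pci_epf_block_passthru/func1/<device-attr>
echo 4              > functions/pci_epf_block_passthru/func1/<num-queues-attr>

# Bind the function to the endpoint controller and start the link
ln -s functions/pci_epf_block_passthru/func1 controllers/<epc-name>/
echo 1 > controllers/<epc-name>/start
```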
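The measurement setup above could presumably be reproduced with an fio invocation
along these lines; `/dev/<remote-disk>` is a placeholder for the block device
node created on the RC side by the pci-remote-disk driver, and the queue depth
and runtime are illustrative choices, not values stated in the series:

```shell
# Sequential read, 4kiB blocks, direct I/O against the remote block device
fio --name=seqread \
    --filename=/dev/<remote-disk> \
    --rw=read \
    --bs=4k \
    --direct=1 \
    --ioengine=libaio \
    --iodepth=32 \
    --runtime=30 \
    --time_based
```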
This would certainly be more elegant, as we could reuse the virtio-blk PCI driver
instead of implementing a new one (as I did). But I initially had some concerns
about the feasibility, especially that the virtio-blk PCI driver expects
immediate responses to some register writes, which I would not be able to
satisfy, simply because we do not have any kind of interrupt/event that would be
triggered on the EP side when the RC accesses some BAR registers (at least there
is no mechanism I know of). Since virtio is made mainly for hypervisor <-> guest
communication, it assumes the hypervisor can trap every register access from the
guest and act accordingly, which I would not be able to do. I hope this makes
sense to you.

But to make a long story short: yes, I agree with you that virtio-blk would
satisfy my use case, and I generally think it would be a better solution; I just
did not know that you are working on some infrastructure for that. And yes, I
would like to implement the endpoint function driver for virtio-blk. Is there
already a development tree you use to work on the infrastructure that I could
have a look at?
- Wadim

> > Wadim Mueller (3):
> >   PCI: Add PCI Endpoint function driver for Block-device passthrough
> >   PCI: Add PCI driver for a PCI EP remote Blockdevice
> >   Documentation: PCI: Add documentation for the PCI Block Passthrough
> >
> >  .../function/binding/pci-block-passthru.rst   |   24 +
> >  Documentation/PCI/endpoint/index.rst          |    3 +
> >  .../pci-endpoint-block-passthru-function.rst  |  331 ++++
> >  .../pci-endpoint-block-passthru-howto.rst     |  158 ++
> >  MAINTAINERS                                   |    8 +
> >  drivers/block/Kconfig                         |   14 +
> >  drivers/block/Makefile                        |    1 +
> >  drivers/block/pci-remote-disk.c               | 1047 +++++++++++++
> >  drivers/pci/endpoint/functions/Kconfig        |   12 +
> >  drivers/pci/endpoint/functions/Makefile       |    1 +
> >  .../functions/pci-epf-block-passthru.c        | 1393 +++++++++++++++++
> >  include/linux/pci-epf-block-passthru.h        |   77 +
> >  12 files changed, 3069 insertions(+)
> >  create mode 100644 Documentation/PCI/endpoint/function/binding/pci-block-passthru.rst
> >  create mode 100644 Documentation/PCI/endpoint/pci-endpoint-block-passthru-function.rst
> >  create mode 100644 Documentation/PCI/endpoint/pci-endpoint-block-passthru-howto.rst
> >  create mode 100644 drivers/block/pci-remote-disk.c
> >  create mode 100644 drivers/pci/endpoint/functions/pci-epf-block-passthru.c
> >  create mode 100644 include/linux/pci-epf-block-passthru.h
> >
> > --
> > 2.25.1
> >
>
> --
> மணிவண்ணன் சதாசிவம்