Hello,

This series adds support for the Block Passthrough PCI(e) Endpoint
functionality. PCI Block Device Passthrough allows a Linux device running
in EP mode to expose its block devices to the PCI(e) host (RC). The device
can export either the full disk or just certain partitions; an export in
read-only mode is also possible. This is useful if you want to share the
same block device between different SoCs, giving each SoC its own
partition(s).

Block Passthrough
==================

PCI Block Passthrough can be a useful feature if you have multiple SoCs in
your system connected through a PCI(e) link, one running in RC mode, the
other in EP mode. It allows block devices which are connected to one SoC
(SoC2, running in EP mode in the diagram below) to be accessed from the
other SoC (SoC1, running in RC mode below) that has no direct connection
to those block devices (e.g. if you want to share an NVMe between two
SoCs). A simple example of such a configuration is shown below:

                                                       +-------------+
                                                       |             |
                                                       |   SD Card   |
                                                       |             |
                                                       +------^------+
                                                              |
                                                              |
+--------------------------+                +-----------------v----------------+
|                          |     PCI(e)     |                                  |
|         SoC1 (RC)        |<-------------->|             SoC2 (EP)            |
| (CONFIG_PCI_REMOTE_DISK) |                |(CONFIG_PCI_EPF_BLOCK_PASSTHROUGH)|
|                          |                |                                  |
+--------------------------+                +-----------------^----------------+
                                                              |
                                                              |
                                                       +------v------+
                                                       |             |
                                                       |    NVMe     |
                                                       |             |
                                                       +-------------+

To a certain extent this is similar to the functionality which NBD exposes
over the network, but here it runs over the PCI(e) bus, utilizing the
EPC/EPF kernel framework.

The Endpoint Function driver creates parallel queues which run on separate
CPU cores using percpu structures. The number of parallel queues is limited
by the number of CPUs on the EP device. The actual number of queues is
configurable (as are all other features of the driver) through configfs; an
illustrative configuration sequence is sketched at the end of this letter.
Documentation covering the functional description as well as a user guide
showing how both drivers can be configured is part of this series.

Test setup
==========

This series has been tested on an NXP S32G2 SoC running in Endpoint mode
with a direct connection to an ARM64 host machine.

A performance measurement on the described setup shows good results. The
S32G2 SoC has a 2xGen3 link with a maximum bandwidth of ~2 GiB/s. With
this setup a read data rate of 1.3 GiB/s was achieved (with DMA; without
DMA the speed saturated at ~200 MiB/s) using a 512 GiB Kingston NVMe when
accessing the NVMe from the ARM64 (SoC1) host. The local read data rate
when accessing the NVMe directly from the S32G2 (SoC2) was around
1.5 GiB/s. The measurement was done with the fio tool [1] using 4 KiB
blocks.
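For reference, the remote read numbers above were obtained with a
sequential read job along the following lines (a sketch only: the device
node is a placeholder for whatever pci-remote-disk registers on the host,
and the queue depth and job count shown are illustrative, not taken from
the actual measurement):

  fio --name=remote-read --filename=/dev/<remote-disk> --rw=read --bs=4k \
      --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
      --runtime=60 --time_based --group_reporting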
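As mentioned above, the endpoint function is configured through configfs.
The sequence below is only a sketch of the generic pci_ep configfs flow;
<epf-name>, <epc>, <block-device> and the per-function attribute names are
placeholders, the authoritative names are those in the binding document
added by this series:

  cd /sys/kernel/config/pci_ep
  mkdir functions/<epf-name>/func0
  # export a whole disk or a single partition, optionally read-only
  echo <block-device> > functions/<epf-name>/func0/<device-attr>
  echo <num-queues>   > functions/<epf-name>/func0/<queues-attr>
  # bind the function to the endpoint controller and start the link
  ln -s functions/<epf-name>/func0 controllers/<epc>/
  echo 1 > controllers/<epc>/start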
[1] https://linux.die.net/man/1/fio

Wadim Mueller (3):
  PCI: Add PCI Endpoint function driver for Block-device passthrough
  PCI: Add PCI driver for a PCI EP remote Blockdevice
  Documentation: PCI: Add documentation for the PCI Block Passthrough

 .../function/binding/pci-block-passthru.rst  |   24 +
 Documentation/PCI/endpoint/index.rst         |    3 +
 .../pci-endpoint-block-passthru-function.rst |  331 ++++
 .../pci-endpoint-block-passthru-howto.rst    |  158 ++
 MAINTAINERS                                  |    8 +
 drivers/block/Kconfig                        |   14 +
 drivers/block/Makefile                       |    1 +
 drivers/block/pci-remote-disk.c              | 1047 +++++++++++++
 drivers/pci/endpoint/functions/Kconfig       |   12 +
 drivers/pci/endpoint/functions/Makefile      |    1 +
 .../functions/pci-epf-block-passthru.c       | 1393 +++++++++++++++++
 include/linux/pci-epf-block-passthru.h       |   77 +
 12 files changed, 3069 insertions(+)
 create mode 100644 Documentation/PCI/endpoint/function/binding/pci-block-passthru.rst
 create mode 100644 Documentation/PCI/endpoint/pci-endpoint-block-passthru-function.rst
 create mode 100644 Documentation/PCI/endpoint/pci-endpoint-block-passthru-howto.rst
 create mode 100644 drivers/block/pci-remote-disk.c
 create mode 100644 drivers/pci/endpoint/functions/pci-epf-block-passthru.c
 create mode 100644 include/linux/pci-epf-block-passthru.h

-- 
2.25.1