Re: [PATCH v2 4/5] PCI: endpoint: Add NVMe endpoint function driver

On 10/14/24 17:44, Christoph Hellwig wrote:
> For one please keep nvme target code in drivers/nvme/  PCI endpoint is
> just another transport and should not have device class logic.
> 
> But I also really fail to understand the architecture of the whole
> thing.  It is a target driver and should in no way tie into the NVMe
> host code, the host code runs on the other side of the PCIe wire.

Nope, it is not a target driver. It is a PCI endpoint driver which turns the
host running it into a PCIe NVMe device. But the NVMe part of the
implementation is minimal: instead, I use an endpoint-local fabrics host
controller which is itself connected to whatever target you want (loop, tcp, ...).

Overall, it looks like this:

         +-----------------------------------+
         | PCIe Host Machine (Root-Complex)  |
         | (BIOS, Grub, Linux, Windows, ...) |
         |                                   |
         |       +------------------+        |
         |       | NVMe PCIe driver |        |
         +-------+------------------+--------+
                           |
                 PCIe bus  |
                           |
        +----+---------------------------+-----+
        |    | PCIe NVMe endpoint driver |     |
        |    | (Handles BAR registers,   |     |
        |    | doorbells, IRQs, SQs, CQs |     |
        |    | and DMA transfers)        |     |
        |    +---------------------------+     |
        |                  |                   |
        |    +---------------------------+     |
        |    |     NVMe fabrics host     |     |
        |    +---------------------------+     |
        |                  |                   |
        |    +---------------------------+     |
        |    |     NVMe fabrics target   |     |
        |    |      (loop, TCP, ...)     |     |
        |    +---------------------------+     |
        |                                      |
        | PCIe Endpoint Machine (e.g. Rock 5B) |
        +--------------------------------------+

The nvme target can be anything that can be supported with the PCI Endpoint
Machine. With a small board like the Rock 5B, that means a loop target (backed
by a file or block device), a TCP target, or NVMe passthrough (using the PCIe
Gen2 M.2 E-Key slot).
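
For example, the backend is selected by the fabrics options string used when
creating the endpoint-local host controller, something like this (the NQNs and
addresses here are made up):

	/* loop target backed by a file or block device on the endpoint */
	"transport=loop,nqn=nqn.2024-10.io.example:loop"

	/* TCP target running on another machine */
	"transport=tcp,traddr=192.168.1.10,trsvcid=4420,nqn=nqn.2024-10.io.example:tcp"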

Unless I am mistaken, if I used a PCI transport as the base for the endpoint
driver, I would be able to connect only to a PCIe nvme device as the backend,
no? With the above design, I can use anything supported by nvmf as the backend
and expose it to the root-complex host through the nvme endpoint PCIe driver.
To do that, the PCI endpoint driver mostly needs only to create the fabrics
host with nvmf_create_ctrl(), which connects to the target; the nvme endpoint
driver can then execute the nvme commands with __nvme_submit_sync_cmd().
Only some admin commands need special handling (e.g. create sq/cq).
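
In code, that boils down to something like the sketch below. The function
names are made up, nvmf_create_ctrl() is currently static in
drivers/nvme/host/fabrics.c (so the patch has to export and declare it), and
the exact __nvme_submit_sync_cmd() signature depends on the kernel version:

#include <linux/device.h>
#include <linux/nvme.h>

#include "../../nvme/host/nvme.h"	/* struct nvme_ctrl, NVME_QID_ANY */
#include "../../nvme/host/fabrics.h"	/* nvmf_create_ctrl() (exported
					 * by the patch)
					 */

/*
 * Create the endpoint-local fabrics host controller and connect it to
 * the backend target, e.g. using one of the option strings above.
 */
static struct nvme_ctrl *pci_epf_nvme_create_ctrl(struct device *dev,
						  const char *opts)
{
	return nvmf_create_ctrl(dev, opts);
}

/*
 * Forward a command received from the root-complex host to the fabrics
 * controller (admin queue shown here; an I/O queue would be used for
 * I/O commands).
 */
static int pci_epf_nvme_exec_cmd(struct nvme_ctrl *ctrl,
				 struct nvme_command *cmd,
				 void *buf, unsigned int len)
{
	return __nvme_submit_sync_cmd(ctrl->admin_q, cmd, NULL,
				      buf, len, NVME_QID_ANY, 0);
}

The rest of the endpoint function driver is then mostly about handling the BAR
registers and moving SQ/CQ entries and data over the PCIe bus with DMA, as in
the diagram above.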

-- 
Damien Le Moal
Western Digital Research



