On Tue, 20 Apr 2021 00:54:49 +0800
Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx> wrote:

> Introduced in a PCI ECN [1], DOE provides a config space
> based mailbox with standard protocol discovery. Each mailbox
> is accessed through a DOE Extended Capability.
>
> A device may have 1 or more DOE mailboxes, each of which is allowed
> to support any number of protocols (some DOE protocol
> specifications apply additional restrictions). A given protocol
> may be supported on more than one DOE mailbox on a given function.
>
> If a driver wishes to access any number of DOE instances / protocols
> it makes a single call to pcie_doe_register_all() which will find
> available DOEs, create the required infrastructure and cache the
> protocols they support. pcie_doe_find() can then retrieve a
> pointer to an appropriate DOE instance.
>
> A synchronous interface is provided in pcie_doe_exchange_sync() to
> perform a single query / response exchange.
>
> Testing conducted against QEMU using:
>
> https://lore.kernel.org/qemu-devel/1612900760-7361-1-git-send-email-cbrowy@xxxxxxxxxxxxxxxx/
> + fix for interrupt flag mentioned in that thread and a whole load
> of hacks to exercise error paths etc.
>
> [1] https://members.pcisig.com/wg/PCI-SIG/document/14143
>     Data Object Exchange (DOE) - Approved 12 March 2020
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
> ---

...

> +static int pci_doe_recv_resp(struct pci_doe *doe, struct pci_doe_exchange *ex)
> +{
> +	struct pci_dev *pdev = doe->pdev;
> +	size_t length;
> +	u32 val;
> +	int i;
> +
> +	/* Read the first two dwords to get the length and protocol */
> +	pci_read_config_dword(pdev, doe->cap + PCI_DOE_READ, &val);
> +	if ((FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val) != ex->vid) ||
> +	    (FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val) != ex->protocol)) {
> +		pci_err(pdev,
> +			"Expected [VID, Protocol] = [%x, %x], got [%x, %x]\n",
> +			ex->vid, ex->protocol,
> +			FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val),
> +			FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val));
> +		return -EIO;
> +	}
> +
> +	pci_write_config_dword(pdev, doe->cap + PCI_DOE_READ, 0);
> +	pci_read_config_dword(pdev, doe->cap + PCI_DOE_READ, &val);
> +	pci_write_config_dword(pdev, doe->cap + PCI_DOE_READ, 0);
> +
> +	length = FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, val);
> +	if (length > SZ_1M)
> +		return -EIO;
> +
> +	/* Read the rest of the response payload */
> +	for (i = 0; i < min(length, ex->response_pl_sz / sizeof(u32)); i++) {

Note for anyone testing these that there is a bug here which leads to
a buffer underflow triggered reset with the latest QEMU patches (I've
not figured out yet why this didn't trigger a problem with the earlier
QEMU patch versions).

This needs to take into account that length includes the two header
DW, but the response_pl_sz does not.

> +		pci_read_config_dword(pdev, doe->cap + PCI_DOE_READ,
> +				      &ex->response_pl[i]);
> +		pci_write_config_dword(pdev, doe->cap + PCI_DOE_READ, 0);
> +	}
> +
> +	/* Flush excess length */
> +	for (; i < length; i++) {
> +		pci_read_config_dword(pdev, doe->cap + PCI_DOE_READ, &val);
> +		pci_write_config_dword(pdev, doe->cap + PCI_DOE_READ, 0);
> +	}
> +	/* Final error check to pick up on any since Data Object Ready */
> +	pci_read_config_dword(pdev, doe->cap + PCI_DOE_STATUS, &val);
> +	if (FIELD_GET(PCI_DOE_STATUS_ERROR, val))
> +		return -EIO;
> +
> +	return min(length, ex->response_pl_sz / sizeof(u32)) * sizeof(u32);
> +}
> +
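
To illustrate the accounting issue flagged above: the Length field in
DOE Data Object Header 2 is in DW and covers the whole object,
including the two header DW, whereas response_pl_sz only describes the
caller's payload buffer in bytes. Below is a minimal userspace sketch
of the arithmetic only, not proposed kernel code;
doe_payload_dw_to_copy() and DOE_HEADER_DW are made-up names for
illustration, not part of the patch.

/*
 * Userspace illustration only.  length_dw is the DOE Length field (DW,
 * including the two header DW); response_pl_sz is the payload buffer
 * size in bytes.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define DOE_HEADER_DW 2	/* made-up name: Header 1 + Header 2 */

static size_t doe_payload_dw_to_copy(size_t length_dw, size_t response_pl_sz)
{
	size_t payload_dw;

	if (length_dw < DOE_HEADER_DW)
		return 0;

	/* Payload DW actually present in the mailbox */
	payload_dw = length_dw - DOE_HEADER_DW;

	/* Cap at what the caller's buffer can hold */
	if (payload_dw > response_pl_sz / sizeof(uint32_t))
		payload_dw = response_pl_sz / sizeof(uint32_t);

	return payload_dw;
}

int main(void)
{
	/*
	 * 9 DW object (2 header + 7 payload DW) with a 32 byte (8 DW)
	 * buffer: min(length, response_pl_sz / 4) = 8 reads one DW more
	 * than the mailbox holds, while the header-aware version stops
	 * at 7.
	 */
	printf("buggy:   %zu DW\n", (size_t)(9 < 32 / 4 ? 9 : 32 / 4));
	printf("correct: %zu DW\n", doe_payload_dw_to_copy(9, 32));
	return 0;
}

Presumably the flush loop and the returned byte count want the same
adjustment, so that the header DW are not counted as payload there
either.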