Re: [PATCH v4 2/5] PCI/DOE: Add Data Object Exchange support

On Wed, Jul 07, 2021 at 12:54:33PM -0700, Dan Williams wrote:
> On Thu, Jun 10, 2021 at 1:06 PM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
> >
> > On Mon, May 24, 2021 at 6:41 AM Jonathan Cameron
> > <Jonathan.Cameron@xxxxxxxxxx> wrote:
> > >
> > > Introduced in a PCI ECN [1], DOE provides a config space based mailbox with
> > > standard protocol discovery.  Each mailbox is accessed through a DOE
> > > Extended Capability.
> > >
> > > A device may have 1 or more DOE mailboxes, each of which is allowed to
> > > support any number of protocols (some DOE protocol specifications apply
> > > additional restrictions).  A given protocol may be supported on more than
> > > one DOE mailbox on a given function.
> > >
> > > If a driver wishes to access any number of DOE instances / protocols, it
> > > makes a single call to pci_doe_register_all(), which finds the available
> > > DOEs, creates the required infrastructure and caches the protocols they
> > > support.  pci_doe_find() can then retrieve a pointer to an appropriate DOE
> > > instance.
> > >
> > > A synchronous interface is provided in pci_doe_exchange_sync() to perform a
> > > single query / response exchange.
> > >
> > > Testing conducted against QEMU using:
> > >
> > > https://lore.kernel.org/qemu-devel/1619454964-10190-1-git-send-email-cbrowy@xxxxxxxxxxxxxxxx/
> >
> > Nice.
> >
> > I was hoping that by now QEMU upstream would have given us some
> > indication that this useful work has a chance of being merged. I
> > fear it's only us CXL practitioners that care. Perhaps the PCI IDE
> > support will get them to move on at least the DOE patches?
> >
> > >
> > > [1] https://members.pcisig.com/wg/PCI-SIG/document/14143
> > >     Data Object Exchange (DOE) - Approved 12 March 2020
> > >
> > > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
> >
> > The core logic of this looks good to me. The interfaces for other code
> > to make use of this could, I feel, lean more heavily on existing
> > mechanics. A few points come to mind:
> >
> > - Does this need to support anything more than queue depth 1? I know
> > the specification seems to allow for some overlapping and queueing,
> > but I don't think there are any use cases that are precluded if the
> > max number of tasks in flight for a given DOE is one.
> >
> > - Once it's queue depth 1, the list of tasks can be replaced with a
> > wait_queue_head_t where submitters wait for the previous task to
> > finish.
> >
> > - This appears to be the prototypical scenario for deploying the new
> > auxiliary bus facility. Rather than open-coding device-like facilities
> > (lists, parents, etc.) in 'struct pci_doe', just make pci_doe a device
> > directly (an auxiliary device) and separate the infrastructure that
> > drives that device into a driver (an auxiliary driver). That makes the
> > lifetime management more idiomatic, allows user space to have
> > typical driver-binding controls to manage kernel-user DOE conflicts,
> > and allows for typical driver services like devm.
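
A minimal sketch of the queue-depth-1 / wait_queue_head_t idea from the
second point above, assuming a single busy bit in the mailbox state; the
names and fields are illustrative, not from the patch, and the waitqueue
would be set up with init_waitqueue_head() at registration time.

#include <linux/bitops.h>
#include <linux/wait.h>

#define DOE_BUSY	0

struct pci_doe {
	wait_queue_head_t wq;		/* submitters sleep here */
	unsigned long flags;		/* bit DOE_BUSY: exchange in flight */
	/* ... capability offset, parent pci_dev, etc. ... */
};

/* Claim the mailbox, blocking until the previous exchange releases it */
static int doe_claim(struct pci_doe *doe)
{
	return wait_event_interruptible(doe->wq,
			!test_and_set_bit(DOE_BUSY, &doe->flags));
}

/* Release the mailbox and wake the next waiting submitter, if any */
static void doe_release(struct pci_doe *doe)
{
	clear_bit_unlock(DOE_BUSY, &doe->flags);
	wake_up(&doe->wq);
}

With at most one task in flight, pci_doe_exchange_sync() would then reduce
to roughly: claim, run the hardware exchange, release.
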
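And a rough sketch of the auxiliary-device / auxiliary-driver split from the
last point, using the stock auxiliary bus API; the "doe" device name, the
structure layout and the probe contents are illustrative assumptions, and
teardown (auxiliary_device_delete()/auxiliary_device_uninit()) is elided.

#include <linux/auxiliary_bus.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/slab.h>

struct pci_doe_adev {
	struct auxiliary_device adev;
	u16 cap_offset;			/* DOE extended capability offset */
};

static void pci_doe_adev_release(struct device *dev)
{
	kfree(container_of(dev, struct pci_doe_adev, adev.dev));
}

/* Enumeration side: one auxiliary device per DOE capability found */
static int pci_doe_create_adev(struct pci_dev *pdev, u16 offset, u32 id)
{
	struct pci_doe_adev *doe = kzalloc(sizeof(*doe), GFP_KERNEL);
	int rc;

	if (!doe)
		return -ENOMEM;

	doe->cap_offset = offset;
	doe->adev.name = "doe";
	doe->adev.id = id;
	doe->adev.dev.parent = &pdev->dev;
	doe->adev.dev.release = pci_doe_adev_release;

	rc = auxiliary_device_init(&doe->adev);
	if (rc) {
		kfree(doe);
		return rc;
	}

	rc = auxiliary_device_add(&doe->adev);
	if (rc)
		auxiliary_device_uninit(&doe->adev);
	return rc;
}

/* Driver side: binds to each "<module>.doe" device and runs the mailbox */
static int pci_doe_probe(struct auxiliary_device *adev,
			 const struct auxiliary_device_id *id)
{
	/* set up the state machine / workqueue with devm_*() helpers */
	return 0;
}

static const struct auxiliary_device_id pci_doe_id_table[] = {
	{ .name = KBUILD_MODNAME ".doe" },
	{ }
};
MODULE_DEVICE_TABLE(auxiliary, pci_doe_id_table);

static struct auxiliary_driver pci_doe_driver = {
	.probe = pci_doe_probe,
	.id_table = pci_doe_id_table,
};
module_auxiliary_driver(pci_doe_driver);
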
> 
> Hi Jonathan,
> 
> Are you waiting on me to take a shot at refactoring the DOE driver
> into this proposed auxiliary device/driver organization? I am happy to
> do that if you've moved on to looking at the kernel-side SPDM
> implementation [1].
> 
> I would expect DOE, SPDM, and IDE to be useful topics to discuss
> at the Plumbers PCI Microconference, assuming we do not solve all
> the open issues before September.

Definitely, I will make sure we schedule a slot on these topics.

Thanks,
Lorenzo

> [1]: https://lore.kernel.org/r/20210629132520.00000d1f@xxxxxxxxxx


