Re: [PATCH v4 2/5] PCI/DOE: Add Data Object Exchange support

On Thu, 8 Jul 2021 09:48:15 +0100
Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx> wrote:

> On Wed, Jul 07, 2021 at 12:54:33PM -0700, Dan Williams wrote:
> > On Thu, Jun 10, 2021 at 1:06 PM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:  
> > >
> > > On Mon, May 24, 2021 at 6:41 AM Jonathan Cameron
> > > <Jonathan.Cameron@xxxxxxxxxx> wrote:  
> > > >
> > > > Introduced in a PCI ECN [1], DOE provides a config space based mailbox with
> > > > standard protocol discovery.  Each mailbox is accessed through a DOE
> > > > Extended Capability.
> > > >
> > > > A device may have 1 or more DOE mailboxes, each of which is allowed to
> > > > support any number of protocols (some DOE protocol specifications apply
> > > > additional restrictions).  A given protocol may be supported on more than
> > > > one DOE mailbox on a given function.
> > > >
> > > > If a driver wishes to access any number of DOE instances / protocols it
> > > > makes a single call to pci_doe_register_all() which will find available
> > > > DOEs, create the required infrastructure and cache the protocols they
> > > > support.  pci_doe_find() can then retrieve a pointer to an appropriate DOE
> > > > instance.
> > > >
> > > > A synchronous interface is provided in pci_doe_exchange_sync() to perform a
> > > > single query / response exchange.
> > > >
> > > > Testing conducted against QEMU using:
> > > >
> > > > https://lore.kernel.org/qemu-devel/1619454964-10190-1-git-send-email-cbrowy@xxxxxxxxxxxxxxxx/  
> > >
> > > Nice.
> > >
> > > I was hoping that by now QEMU upstream would have given us some
> > > indication that this useful work has a chance of being merged. I
> > > fear it's only us CXL practitioners that care. Perhaps the PCI IDE
> > > support will get them to move on at least the DOE patches?
> > >  
> > > >
> > > > [1] https://members.pcisig.com/wg/PCI-SIG/document/14143
> > > >     Data Object Exchange (DOE) - Approved 12 March 2020
> > > >
> > > > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>  
> > >
> > > The core logic of this looks good to me. The interfaces for other code
> > > to make use of this I feel can lean heavier on existing mechanics. A
> > > few points come to mind:
> > >
> > > - Does this need to support anything more than queue depth 1? I know
> > > the specification seems to allow for some overlapping and queueing,
> > > but I don't think there are any use cases that are precluded if the
> > > max number of tasks in flight for a given DOE is one.

We aren't handling overlapping requests today, so a queue depth of one, with
submitters waiting for access, would work fine.  Queuing is more likely to matter
when multiple protocols share a DOE (where allowed), as I'm not seeing a need
within any individual protocol to queue lots of things up at once. You could
queue a few requests in CMA, since they aren't all dependent on each other
(though the order matters for building the measurements), but I'm not sure why
you would.
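For reference, a single exchange on the wire is small: a DOE discovery request
is just the two-DWORD data object header plus one DWORD carrying the index.
A minimal sketch of packing those DWORDs per the DOE ECN field layout (the
helper names here are illustrative, not the patch's API):

```c
#include <stdint.h>

/*
 * DOE Data Object Header per the DOE ECN:
 *   DW0: bits 15:0  Vendor ID, bits 23:16 Data Object Type
 *   DW1: bits 17:0  Length in DWORDs (including the 2-DWORD header)
 */
#define PCI_DOE_VID_PCI_SIG     0x0001
#define PCI_DOE_TYPE_DISCOVERY  0x00

static uint32_t doe_header_dw0(uint16_t vid, uint8_t type)
{
	return (uint32_t)vid | ((uint32_t)type << 16);
}

static uint32_t doe_header_dw1(uint32_t len_dw)
{
	return len_dw & 0x3ffff;	/* 18-bit length field */
}

/* Build a 3-DWORD discovery request asking for protocol slot 'index'. */
static void doe_build_discovery_req(uint32_t req[3], uint8_t index)
{
	req[0] = doe_header_dw0(PCI_DOE_VID_PCI_SIG, PCI_DOE_TYPE_DISCOVERY);
	req[1] = doe_header_dw1(3);	/* 2 header DWORDs + 1 payload DWORD */
	req[2] = index;			/* discovery request: index in byte 0 */
}
```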

> > >
> > > - Once its queue depth 1 then the list of tasks can be replaced with a
> > > wait_queue_head_t where submitters wait for the previous task to
> > > finish.

I'll go with a resounding 'maybe' on that... I'm not sure I like that the
async path in the submitter could end up waiting a very long time (if we think
anyone will actually do async...). I'm probably missing something.
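If the queue does collapse to depth 1, the submission path really is just
"take the lock, do the exchange, release", with submitters blocking until the
mailbox is free.  A userspace pthread sketch of that shape (stand-in names,
not the kernel API under discussion; the counters exist only to show the
serialization):

```c
#include <pthread.h>
#include <stddef.h>

/*
 * Userspace analogue of the "queue depth 1" idea: no task list, submitters
 * simply block on a lock until the previous exchange has finished.
 */
struct doe_mb {
	pthread_mutex_t lock;	/* serializes access to the mailbox */
	int busy_count;		/* exchanges currently in flight (0 or 1) */
	int max_seen;		/* highest concurrency ever observed */
};

static int doe_exchange_sync(struct doe_mb *mb,
			     const void *req, size_t req_sz,
			     void *rsp, size_t rsp_sz)
{
	(void)req; (void)req_sz; (void)rsp; (void)rsp_sz;

	pthread_mutex_lock(&mb->lock);
	mb->busy_count++;
	if (mb->busy_count > mb->max_seen)
		mb->max_seen = mb->busy_count;
	/* ... write request, ring doorbell, poll for response ... */
	mb->busy_count--;
	pthread_mutex_unlock(&mb->lock);
	return 0;
}
```

With the lock held across the whole exchange, `max_seen` can never exceed 1
no matter how many submitters race in.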

> > >
> > > - This appears to be the prototypical scenario for deploying the new
> > > auxiliary bus facility. Rather than custom code device-like facilities
> > > (lists and parents etc) in 'struct pci_doe' just make pci_doe a device
> > > directly (auxiliary-device) and separate the infrastructure that
> > > drives that device to a driver (auxiliary-driver). That makes the
> > > lifetime management more idiomatic, allows for user space to have
> > > typical driver-binding controls to manage kernel-user DOE conflicts,
> > > and it allows for typical driver services like devm.  
> > 
> > Hi Jonathan,
> > 
> > Are you waiting on me to take a shot at refactoring the DOE driver
> > into this proposed auxiliary device/driver organization? I am happy to
> > do that if you've moved on to looking at the kernel-side SPDM
> > implementation [1].

I wasn't really expecting you to take a shot, but if you have time, that
would be great. I've not looked at the auxiliary bus stuff yet.

Whilst I have a PoC up and running for SPDM, it's a bunch of dirty hacks and
I suspect that, even when cleaned up, it's going to keep me busy for a while.

Mostly I've developed a steadily growing hatred for the different ways in
which signatures can be stored; I keep hand-decoding records, or digging in
openspdm, to figure out what order and encoding things are in this time...

> > 
> > I would expect DOE, SPDM, and IDE would be a useful topic to discuss
> > at the Plumbers PCI Microconference, assuming we do not solve all
> > the open issues before September.  
> 
> Definitely, I will make sure we schedule a slot on these topics.

+1

I'm sure there will be some issues left :)

> 
> Thanks,
> Lorenzo
> 
> > [1]: https://lore.kernel.org/r/20210629132520.00000d1f@xxxxxxxxxx  



