On Tue, 1 Feb 2022 12:43:01 -0800 Dan Williams <dan.j.williams@xxxxxxxxx> wrote:

> On Mon, Jan 31, 2022 at 10:11 AM Jonathan Cameron
> <Jonathan.Cameron@xxxxxxxxxx> wrote:
> >
> > On Wed, 26 Jan 2022 12:16:52 -0800
> > Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
> >
> > > From: Ben Widawsky <ben.widawsky@xxxxxxxxx>
> > >
> > > The need for a CXL port driver and a dedicated cxl_bus_type is driven
> > > by a need to simultaneously support two independent physical memory
> > > decode domains (cache coherent CXL.mem and uncached PCI.mmio) that
> > > also intersect at a single PCIe device node. A CXL Port is a device
> > > that advertises a CXL Component Register block with an "HDM Decoder
> > > Capability Structure".
> > >
> > > From Documentation/driver-api/cxl/memory-devices.rst:
> > >
> > > Similar to how a RAID driver takes disk objects and assembles them
> > > into a new logical device, the CXL subsystem is tasked to take PCIe
> > > and ACPI objects and assemble them into a CXL.mem decode topology.
> > > The need for runtime configuration of the CXL.mem topology is also
> > > similar to RAID in that different environments with the same hardware
> > > configuration may decide to assemble the topology in contrasting
> > > ways. One may choose performance (RAID0), striping memory across
> > > multiple Host Bridges and endpoints, while another may opt for fault
> > > tolerance and disable any striping in the CXL.mem topology.
> > >
> > > The port driver identifies whether an endpoint Memory Expander is
> > > connected to a CXL topology. If an active CXL Port (bound to the
> > > 'cxl_port' driver) is not found at every PCIe Switch Upstream Port,
> > > along with an active "root" CXL Port, then the device is just a plain
> > > PCIe endpoint only capable of participating in PCI.mmio and DMA
> > > cycles, not CXL.mem coherent interleave sets.
> > >
> > > The 'cxl_port' driver lets the CXL subsystem leverage driver-core
> > > infrastructure for setup and teardown of register resources and for
> > > communicating device activation status to userspace. The cxl_bus_type
> > > can rendezvous the asynchronous arrival of platform-level CXL
> > > resources (via the 'cxl_acpi' driver) with the asynchronous
> > > enumeration of Memory Expander endpoints, while also implementing a
> > > hierarchical locking model independent of the associated 'struct
> > > pci_dev' locking model. The locking for dport and decoder enumeration
> > > is now handled in the core rather than by callers.
> > >
> > > For now the port driver only enumerates and registers CXL resources
> > > (downstream port metadata and decoder resources); later it will be
> > > used to take action on its decoders in response to CXL.mem region
> > > provisioning requests.
> > >
> > > Reported-by: kernel test robot <lkp@xxxxxxxxx>
> > > Signed-off-by: Ben Widawsky <ben.widawsky@xxxxxxxxx>
> > > [djbw: add theory of operation document, move enumeration infra to core]
> > > Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> >
> > Nice docs. A few comments inline.
> >
> > All trivial though, so
> >
> > Reviewed-by: Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx>
> >
> > ...
> >
> > > diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
> > > index 2b09d04d3568..682e7cdbcc9c 100644
> > > --- a/drivers/cxl/core/port.c
> > > +++ b/drivers/cxl/core/port.c
> > > @@ -40,6 +40,11 @@ static int cxl_device_id(struct device *dev)
> >
> > ...
> >
> > > +/*
> > > + * Since root-level CXL dports cannot be enumerated by PCI they are not
> > > + * enumerated by the common port driver that acquires the port lock over
> > > + * dport add/remove. Instead, root dports are manually added by a
> > > + * platform driver and cond_port_lock() is used to take the missing port
> > > + * lock in that case.
> > > + */
> > > +static void cond_port_lock(struct cxl_port *port)
> >
> > Could the naming here make it clear what the condition is?
> > cxl_port_lock_if_root(), or something like that?
>
> Sure, how about cond_cxl_root_lock()? The cond_ prefix would match
> other helpers like cond_resched().

Works for me. Thanks,