Re: [PATCHv4] pcie: Add driver for Downstream Port Containment

Hi Bjorn,

On Wed, May 11, 2016 at 01:28:53PM -0500, Bjorn Helgaas wrote:
> On Wed, May 11, 2016 at 01:06:50PM -0400, Keith Busch wrote:
> > On Wed, May 11, 2016 at 11:08:17AM -0500, Bjorn Helgaas wrote:
> > > 
> > > I know I already merged this, but I just thought of a more
> > > philosophical question.  Why is this a "port service driver" as
> > > opposed to being simply part of the PCI core like ARI, ACS, etc.?
> > > 
> > > I guess you did make DPC tristate, so from that point of view, it
> > > probably has to use the driver structure to make it convenient to load
> > > after boot.  Does that add value?  Do we expect people to make a
> > > decision about whether to load pcie-dpc?  Or maybe we plan to autoload
> > > it if the DPC capability is present?  Does your patch enable that,
> > > i.e., does it expose something udev can use to autoload it?
> > > 
> > > I've been wondering whether the portdrv service driver concept was a
> > > mistake.  With the exception of DPC, I think all the portdrv service
> > > drivers are bool, so there's no question of having to load a module.
> > > I think using portdrv means we delay initialization of some things
> > > that we should do earlier.  For example, I think we should setup
> > > pciehp much earlier, when we enumerate the bridge.  And the service
> > > driver registration and capability discovery code adds a fair amount
> > > of complication that I'm not sure is necessary.
> > > 
> > > So I guess the question is: why did you make DPC a portdrv service
> > > driver, and what bad things would happen if we integrated it into the
> > > PCI core without exposing it as a separate service?
> > 
> > Good points. I actually hadn't considered making this part of core,
> > so I don't have all the pros+cons for making this a port services
> > driver. Mainly, DPC seemed similar to HP and AER since it takes
> > interrupts, so I followed their example. The other optional extended
> > capabilities you mention don't need interrupts. Since we rely on the
> > port driver to initialize the port's interrupt vectors, DPC has to be
> > a port services driver in order to safely use these, right?
> 
> Yes, I think you're probably right about the interrupts; I hadn't
> thought about that.
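
For anyone following along in the archive: the dependency Keith
describes is that portdrv owns the port's MSI/MSI-X (or INTx) setup and
hands each service its vector via struct pcie_device, so a service
driver only has to request the irq it is given. A rough sketch against
4.6's <linux/pcieport_if.h>, with dpc_irq/dpc_probe as placeholders
rather than the actual DPC code:

  /* needs <linux/interrupt.h> and <linux/pcieport_if.h> */

  static irqreturn_t dpc_irq(int irq, void *context)
  {
          /* placeholder handler */
          return IRQ_HANDLED;
  }

  static int dpc_probe(struct pcie_device *dev)
  {
          /* dev->irq was already set up by portdrv for this service on
           * this port; the service driver merely requests it. */
          return devm_request_irq(&dev->device, dev->irq, dpc_irq,
                                  IRQF_SHARED, "pcie-dpc", dev);
  }

A driver that isn't a port service driver gets no such per-port vector
handed to it, which is Keith's point.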
> 
> > I made this tristate at a vendor's request to allow unloading the
> > module; it sounded reasonable at the time, but I didn't consider the
> > implications. I don't think anyone tested it that way, so it may have
> > been a mistake. We don't have rules for when it needs to load, so it
> > would have to be loaded manually, at which point it looks too late.
> 
> I suppose the vendor concern might be minimizing unused code in the
> kernel, which makes a lot of sense, although I have to say that DPC
> looks like about the smallest driver one could imagine.
> 
> A modular driver that can't be loaded automatically seems sort of
> pointless since hardly anybody would actually use it.
> 
> Part of my issue with portdrv is that it exposes additional stuff in
> sysfs that seems a little bit disconnected from the rest of PCI,
> namely, things like this:
> 
>   /sys/bus/pci_express/devices/
>   /sys/devices/pci0000:00/0000:00:1c.0/0000:00:1c.0:pcie01/
> 
> Having devices like that might help autoload drivers, but as far as I
> can tell, only pciehp was ever supported as a modular driver (and it
> can no longer be modular), so I don't think this is much of a benefit.
> 
> Lukas, I think you're considering another portdrv service driver for
> Thunderbolt; do you have any thoughts on this?  Would you want your
> driver to be modular?

Yes, the driver is part of thunderbolt.ko, which is only used on Macs,
so it's probably good not to spend RAM on it on non-Macs.

The reason I used a port service driver for this is that I need to
attach to the upstream bridge of the Thunderbolt controller, which is
a port, and there's no other way to do this but with a port service
driver. A Thunderbolt controller is a PCIe switch, i.e. a collection
of bridges, and appears to the OS like this:

  (Root Port) ---- Upstream Bridge --+-- Downstream Bridge 0 ---- NHI
                                     +-- Downstream Bridge 1 --
                                     +-- Downstream Bridge 2 --
                                     ...
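
To make that a bit more concrete: with the port service driver
interface (<linux/pcieport_if.h> as of 4.6), attaching to the upstream
bridge boils down to declaring which port type the driver binds to.
A rough sketch with placeholder names, not my actual driver:

  static int upstream_probe(struct pcie_device *dev)
  {
          /* dev->port is the pci_dev of the port we bound to, i.e. the
           * Thunderbolt controller's upstream bridge; power control for
           * the whole switch hangs off this device. */
          dev_info(&dev->device, "bound to %s\n", pci_name(dev->port));
          return 0;
  }

  static void upstream_remove(struct pcie_device *dev)
  {
  }

  static struct pcie_port_service_driver upstream_driver = {
          .name      = "thunderbolt_power",   /* placeholder name */
          .port_type = PCI_EXP_TYPE_UPSTREAM, /* bind to upstream ports */
          /* A service driver has to claim one of portdrv's service bits;
           * HP here is only a stand-in for the sketch. */
          .service   = PCIE_PORT_SERVICE_HP,
          .probe     = upstream_probe,
          .remove    = upstream_remove,
  };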

When power is cut to the controller, all these devices go to D3cold.
If this were a device-tree platform, these bridges would be part of
one generic power domain. But this is an ACPI platform, so to Linux
each bridge appears as a separate device in a hierarchy. By positioning
power control on the topmost device in the hierarchy (the upstream
bridge), I'm able to conform to Linux' power management model (a parent
always resumes before its children).

I found the ability to add a port service driver from a module quite
convenient and flexible, though I believe there's no other driver doing
this. (In case you grep over the Linux tree now to find other port service
drivers beyond drivers/pci/, be aware that mwifiex is a false positive:
it includes pcieport_if.h even though it doesn't need it. I've submitted
a patch to remove that, which was queued for 4.7 by Kalle Valo today.)
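
The module side is just the usual register/unregister pair around the
struct from the sketch above (pcie_port_service_register() and
pcie_port_service_unregister() are exported by portdrv):

  /* needs <linux/module.h> */

  static int __init upstream_service_init(void)
  {
          /* Because these entry points are exported, a port service
           * driver can live in a loadable module and be inserted or
           * removed after boot. */
          return pcie_port_service_register(&upstream_driver);
  }
  module_init(upstream_service_init);

  static void __exit upstream_service_exit(void)
  {
          pcie_port_service_unregister(&upstream_driver);
  }
  module_exit(upstream_service_exit);

  MODULE_LICENSE("GPL");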

Hope I've answered your question; if not, just ask again.

For the curious, the Thunderbolt port service driver I'm referring to is:
https://github.com/l1k/linux/commit/ddbb53bc3f37006f0acd6795bb840603b9dab2eb

Thanks,

Lukas