Re: [PATCH v3 1/2] PCI: Take other bus devices into account when distributing resources

On Mon, Dec 05, 2022 at 09:28:30AM +0200, Mika Westerberg wrote:
> On Fri, Dec 02, 2022 at 05:34:24PM -0600, Bjorn Helgaas wrote:
> > On Wed, Nov 30, 2022 at 01:22:20PM +0200, Mika Westerberg wrote:
> > > A PCI bridge may reside on a bus with other devices as well. The
> > > resource distribution code does not take this into account properly and
> > > therefore it expands the bridge resource windows too much, not leaving
> > > space for the other devices (or functions of a multifunction device) and

> > > +		 * Reduce the space available for distribution by the
> > > +		 * amount required by the other devices on the same bus
> > > +		 * as this bridge.
> > > +		 */
> > > +		list_for_each_entry(dev, &bus->devices, bus_list) {
> > > +			int i;
> > > +
> > > +			if (dev == bridge)
> > > +				continue;
> > 
> > Why do we skip "bridge"?  Bridges are allowed to have two BARs
> > themselves, and it seems like they should be included here.
> 
> Good point but then we would need to skip below the bridge window
> resources to avoid accounting them.

Seems like we should handle bridge BARs.  There are definitely bridges
(PCIe for sure, I dunno about conventional PCI) that implement them
and some drivers starting to appear that use them for performance
monitoring, etc.
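[Editor's illustration: the "count the bridge's own BARs but skip its windows" idea above can be sketched with a toy model. The structs and names below are made up for illustration; they are not the kernel's `struct pci_dev`/`resource` API.]

```c
#include <stddef.h>

/* Toy model: every device exposes up to 6 BARs; a bridge additionally
 * has window "resources".  The windows must NOT be counted here,
 * because the windows are exactly what is being distributed into. */
#define TOY_NUM_BARS 6

struct toy_dev {
	int is_bridge;
	unsigned long bar_size[TOY_NUM_BARS];	/* BARs 0..5 */
	unsigned long window_size[3];		/* io/mem/pref windows (bridges only) */
};

/* Sum the space the devices on a bus need for themselves.  The bridge
 * is included for its own BARs; its bridge windows are skipped. */
static unsigned long toy_bus_used(const struct toy_dev *devs, size_t n)
{
	unsigned long used = 0;

	for (size_t i = 0; i < n; i++)
		for (int r = 0; r < TOY_NUM_BARS; r++)
			used += devs[i].bar_size[r];
	/* window_size[] intentionally ignored, even for bridges. */
	return used;
}
```

So a bus holding a bridge with a 4K BAR plus a peer device with a 1M BAR would reserve 4K + 1M before distributing the remainder.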

> > This only happens for buses with a single bridge.  Shouldn't it happen
> > regardless of how many bridges there are?
> 
> This branch specifically deals with the "upstream port" so it gives all
> the spare resources to that upstream port. The whole resource
> distribution is actually done to accommodate Thunderbolt/USB4
> topologies which involve only PCIe devices so we always have PCIe
> upstream port and downstream ports, some of which are able to
> perform native PCIe hotplug. And for those ports we want to distribute
> the available resources so that they can expand to further topologies.
> 
> I'm slightly concerned that forcing this to support the "generic" PCI
> case makes this rather complicated. This is something that never happens
> on regular PCI-based systems because we never distribute resources
> for those in the first place (->is_hotplug_bridge needs to be set).

This code is fairly complicated in any case :)

I understand why this is useful for Thunderbolt topologies, but it
should be equally useful for other hotplug topologies because at this
level we're purely talking about the address space needed by devices
and how that space is assigned and routed through bridges.  Nothing
unique to Thunderbolt here.
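[Editor's illustration: the distribution described above boils down to splitting the leftover space evenly across the hotplug bridges on the bus. The sketch below mirrors the spirit of pci_bus_distribute_available_resources(), not its actual arithmetic; the names are hypothetical.]

```c
/* Whatever space remains after the devices' own needs is split evenly
 * across the hotplug bridges on the bus, so each one can absorb a
 * future hot-added topology behind it. */
static unsigned long toy_per_bridge_share(unsigned long available,
					  unsigned long used_by_devices,
					  unsigned int hotplug_bridges)
{
	if (!hotplug_bridges || available <= used_by_devices)
		return 0;
	return (available - used_by_devices) / hotplug_bridges;
}
```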

I don't think we should make this PCIe-specific.  ->is_hotplug_bridge
is set by a PCIe path (set_pcie_hotplug_bridge()), but also by
check_hotplug_bridge() in acpiphp, which could be any flavor of PCI,
and I don't think there's anything intrinsically PCIe-specific about
it.

> > I don't understand the "bridge" part; it looks like that's basically
> > to use 4K alignment for I/O windows and 1M for memory windows?
> > Using "bridge" seems like a clunky way to figure that out.
> 
> Okay, but if not using "bridge", how exactly do you suggest doing
> the calculation?

I was thinking it would always be 4K or 1M, but I guess that's
actually not true.  There are some Intel bridges that support 1K
alignment for I/O windows, and some powerpc hypervisor stuff that can
also influence the alignment.  And it looks like we still need to
figure out which b_res to use, so we couldn't get rid of the IO/MEM
case analysis.  So never mind, I guess ...
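[Editor's illustration: the granularities under discussion are the usual bridge window alignments, 4K for I/O and 1M for memory, with the caveats noted above (1K I/O granularity on some Intel bridges, platform overrides). A minimal sketch of rounding a window request up to such a granularity:]

```c
/* Illustrative default bridge window granularities; real code gets
 * these from window_alignment()/the platform, not from constants. */
#define TOY_IO_ALIGN	0x1000UL	/* 4K  */
#define TOY_MEM_ALIGN	0x100000UL	/* 1M  */

/* Round a requested window size up to the bridge's granularity
 * (align must be a power of two). */
static unsigned long toy_align_window(unsigned long size, unsigned long align)
{
	return (size + align - 1) & ~(align - 1);
}
```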

Bjorn


