Hello Peter,

On Tue, Oct 04, 2022 at 03:52:39PM -0400, Peter Geis wrote:
> On Tue, Oct 4, 2022 at 10:43 AM Ondrej Jirman <megi@xxxxxx> wrote:
> >
> > Good Afternoon,
> >
> > I have two Realtek PCIe wifi cards connected through a 4-port PCIe
> > bridge to a Quartz64-A. The cards fail to work when an NVMe SSD is
> > connected to the bridge at the same time; without the NVMe drive
> > connected, the cards work fine. The issue seems to be related to
> > the mixed use of devices that need I/O ranges and devices that
> > need memory ranges.
> >
> > The new mapping is designed to be more straightforward, inspired
> > by the sample pcie3x2 node in the dt-bindings docs:
> >
> >   reg = <0x3 0xc0800000 0x0 0x390000>,
> >         <0x0 0xfe280000 0x0 0x10000>,
> >         <0x3 0x80000000 0x0 0x100000>;
> >   ranges = <0x81000000 0x0 0x80800000 0x3 0x80800000 0x0 0x100000>,
> >            <0x83000000 0x0 0x80900000 0x3 0x80900000 0x0 0x3f700000>;
> >
> > I noticed that this sample is crafted so that no translation is
> > needed other than dropping the high dword bits of the CPU address,
> > and I modified the ranges for pcie2x1 to follow the same principle.
> >
> > This change to the regs/ranges makes the issue go away, and both
> > the NVMe drive and the wifi cards work when connected to the
> > bridge at the same time.
> >
> > Signed-off-by: Ondrej Jirman <megi@xxxxxx>
> > ---
> >  arch/arm64/boot/dts/rockchip/rk356x.dtsi | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
> > index 319981c3e9f7..e88e8c4fe25b 100644
> > --- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
> > +++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
> > @@ -855,7 +855,8 @@ pcie2x1: pcie@fe260000 {
> >  		compatible = "rockchip,rk3568-pcie";
> >  		reg = <0x3 0xc0000000 0x0 0x00400000>,
> >  		      <0x0 0xfe260000 0x0 0x00010000>,
> > -		      <0x3 0x3f000000 0x0 0x01000000>;
> > +		      <0x3 0x00000000 0x0 0x01000000>;
> > +
> >  		reg-names = "dbi", "apb", "config";
> >  		interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>,
> >  			     <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>,
> > @@ -884,8 +885,8 @@ pcie2x1: pcie@fe260000 {
> >  		phys = <&combphy2 PHY_TYPE_PCIE>;
> >  		phy-names = "pcie-phy";
> >  		power-domains = <&power RK3568_PD_PIPE>;
> > -		ranges = <0x01000000 0x0 0x3ef00000 0x3 0x3ef00000 0x0 0x00100000
> > -			  0x02000000 0x0 0x00000000 0x3 0x00000000 0x0 0x3ef00000>;
> > +		ranges = <0x01000000 0x0 0x01000000 0x3 0x01000000 0x0 0x00100000
> > +			  0x02000000 0x0 0x02000000 0x3 0x02000000 0x0 0x3e000000>;
>
> Have you verified these ranges do not regress the NVMe drive when it
> is connected directly to the controller? The reason we went with the
> configuration space we did is that the original space from downstream
> caused errors on NVMe drives when reading large amounts (>1GB) of
> data at a time.

I did. Anyway, looking at the ranges more carefully, I came up with a
scheme that uses the whole of the 0x300000000 range for MEM and the
smaller 32M range at 0xf4000000 for I/O and config space, and tested
again that it works for both:

- NVMe without a switch (connected directly to the controller)
- NVMe + wifi cards behind a switch

See v2 for more details; a rough sketch of that layout follows at the
end of this mail. That ranges setup is also closer to what the BSP
does.

Kind regards,
	o.

> Very Respectfully,
> Peter Geis
>
> >  		resets = <&cru SRST_PCIE20_POWERUP>;
> >  		reset-names = "pipe";
> >  		#address-cells = <3>;
> > --
> > 2.37.3
> >
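
For illustration, a minimal sketch of how such a split could look for
the pcie2x1 node. The concrete values are assumptions derived only
from the description above (config, I/O, and 32-bit MEM carved out of
the 32M window at 0xf4000000; the whole 1G at 0x300000000 dedicated to
64-bit MEM); the authoritative hunk is the one in v2:

	pcie2x1: pcie@fe260000 {
		/* Illustrative values, not the actual v2 hunk. */
		reg = <0x3 0xc0000000 0x0 0x00400000>,	/* dbi */
		      <0x0 0xfe260000 0x0 0x00010000>,	/* apb */
		      <0x0 0xf4000000 0x0 0x00100000>;	/* config: 1M at the
							   start of the 32M
							   window */
		reg-names = "dbi", "apb", "config";
		/* Each entry is <flags pci-hi pci-lo cpu-hi cpu-lo size-hi
		 * size-lo>; flags 0x01000000 = I/O space, 0x02000000 =
		 * 32-bit MEM, 0x03000000 = 64-bit MEM. I/O and 32-bit MEM
		 * stay identity-mapped below 4G; only the 1G 64-bit MEM
		 * window at CPU 0x3 0x00000000 is translated (here to PCI
		 * 0x40000000). */
		ranges = <0x01000000 0x0 0xf4100000 0x0 0xf4100000 0x0 0x00100000>,
			 <0x02000000 0x0 0xf4200000 0x0 0xf4200000 0x0 0x01e00000>,
			 <0x03000000 0x0 0x40000000 0x3 0x00000000 0x0 0x40000000>;
	};

The point of the split, as described above, is that I/O and config
live together in the small window below 4G while MEM gets its own
dedicated aperture, so devices needing I/O ranges and devices needing
memory ranges no longer compete within a single window.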