RE: Why does cdns_pcie_ep_set_bar use sz > SZ_2G for is_64bits in pcie-cadence-ep.c?

Hi Li,

> -----Original Message-----
> From: Li Chen <lchen@xxxxxxxxxxxxx>
> Sent: 21 January 2022 02:56
> To: Tom Joseph <tjoseph@xxxxxxxxxxx>
> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx>; Rob Herring
> <robh@xxxxxxxxxx>; Krzysztof Wilczyński <kw@xxxxxxxxx>; Bjorn Helgaas
> <bhelgaas@xxxxxxxxxx>; linux-pci@xxxxxxxxxxxxxxx; linux-
> kernel@xxxxxxxxxxxxxxx
> Subject: RE: Why does cdns_pcie_ep_set_bar use sz > SZ_2G for is_64bits in
> pcie-cadence-ep.c?
> 
> 
> Hi, Tom
> 
> > -----Original Message-----
> > From: Tom Joseph [mailto:tjoseph@xxxxxxxxxxx]
> > Sent: Thursday, January 20, 2022 9:11 PM
> > To: Li Chen
> > Cc: Lorenzo Pieralisi; Rob Herring; Krzysztof Wilczyński; Bjorn Helgaas; linux-
> > pci@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> > Subject: [EXT] RE: Why does cdns_pcie_ep_set_bar use sz > SZ_2G for
> is_64bits
> > in pcie-cadence-ep.c?
> >
> > Hi Li,
> >
> > For 64-bit BARs, all the odd BARs (BAR1, 3, 5) will be disabled (so they
> > can be used as the upper bits).
> 
> Yes, I get it.
> > I see that the code assumes 32 bits if size < 2G, so all BARs can be
> > enabled.
> >
> 
> OK, but I still wonder why 2G rather than another size like 1G or 512M. IMO,
> if there is no hardware limitation, the choice of 64-bit or 32-bit should be
> left to the user (like bar_fixed_64bit and bar_fixed_size in
> pci_epc_features), instead of hardcoding 2G here.

The check for 2G is important, as 2G corresponds to the maximum valid aperture
size encoding (0b11000, i.e. 24) allowed by the controller for 32-bit BARs.
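
For reference, this is roughly how cdns_pcie_ep_set_bar() derives that
encoding (a simplified sketch from my reading of the driver; exact lines may
differ between kernel versions):

	/* Round the requested size up to a power of two, minimum 128B. */
	sz = max_t(size_t, epf_bar->size, 128);
	sz = 1ULL << fls64(sz - 1);

	/* BAR size is 2^(aperture + 7): 128B -> 0, 256B -> 1, 512B -> 2, ... */
	aperture = ilog2(sz) - 7;

	/*
	 * 2G gives ilog2(SZ_2G) - 7 = 31 - 7 = 24 = 0b11000, the largest
	 * encoding accepted for a 32-bit BAR; anything larger must be 64-bit.
	 */
	is_64bits = sz > SZ_2G;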

> 
> > As I understand it, you have a use case where you want to set the BAR as
> > 64-bit but actually use a small size.
> > Could you describe this use case in a bit more detail (just curious)?
> 
> It's because our SoC uses the 0-64G AMBA address space for DRAM (system
> memory). So if I want to use a 32-bit BAR at, say, a 16M bus address, I must
> reserve that 16M area with the kernel's reserved-memory mechanism; otherwise,
> endpoints like NVMe will report an Unsupported Request when they do DMA and
> the DMA address also falls inside this 16M area. More details are in this
> thread:
> https://lore.kernel.org/lkml/CH2PR19MB40245BF88CF2F7210DCB1AA4A0669@xxxxxxxxxxxxxxxxxxxxxxxxxxx.outlook.com/T/#m0dd09b7e6f868b9692185ec57c1986b3c786e8d3
> 
> 
> So, if I don't want to reserve much memory for the BAR, I have to use a
> 64-bit BAR, and in my case it must be 64-bit prefetchable, not
> non-prefetchable, because my virtual P2P bridge has three windows: I/O,
> 32-bit mem, and 64-bit prefetchable mem (64-bit because
> CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS is set). If the controller
> running in EP mode uses a 64-bit non-prefetchable BAR, this region falls back
> to the bridge's 32-bit mem window, but I don't (and cannot) reserve that much
> 32-bit memory for it.
> 
Got your point. Would a change in the code as below be good enough?

- bool is_64bits = sz > SZ_2G;
+ bool is_64bits = (sz > SZ_2G) || !!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64);
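
For illustration only, a rough sketch of how an endpoint function driver might
then request a small 64-bit prefetchable BAR so that this check takes effect
(the BAR number and size are arbitrary, the buffer allocation via
pci_epf_alloc_space() is omitted, and the exact pci_epc_set_bar() arguments
depend on the kernel version):

	struct pci_epf_bar *epf_bar = &epf->bar[BAR_0];
	int ret;

	/* A 64-bit BAR must use an even BAR; BAR_1 holds the upper bits. */
	epf_bar->barno = BAR_0;
	epf_bar->size = SZ_16M;		/* small aperture, still 64-bit */
	epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64 |
			  PCI_BASE_ADDRESS_MEM_PREFETCH;

	ret = pci_epc_set_bar(epf->epc, epf->func_no, epf->vfunc_no, epf_bar);
	if (ret)
		dev_err(&epf->dev, "failed to set BAR0\n");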


Thanks,
Tom



