Re: [PATCH] PCI: tegra: Do not allocate MSI target memory

On Saturday, 2019-03-02 at 08:20 +0530, Vidya Sagar wrote:
> On 3/1/2019 8:26 PM, Lucas Stach wrote:
> > On Friday, 2019-03-01 at 08:45 +0530, Vidya Sagar wrote:
> > > On 3/1/2019 12:32 AM, Lucas Stach wrote:
> > > > On Thursday, 2019-02-28 at 20:30 +0530, Vidya Sagar wrote:
> > > > > The PCI host bridge found on Tegra SoCs doesn't require the MSI target
> > > > > address to be backed by physical system memory. Writes are intercepted
> > > > > within the controller and never make it to the memory pointed to.
> > > > > 
> > > > > Since no actual system memory is required, remove the allocation of a
> > > > > single page and hardcode the MSI target address with a special address
> > > > > on a per-SoC basis. Ideally this would be an address to an MMIO memory
> > > > > region (such as where the controller's registers are located). However,
> > > > > those addresses don't work reliably across all Tegra generations. The
> > > > > only set of addresses that work consistently are those that point to
> > > > > external memory.
> > > > > 
> > > > > This is not ideal, since those addresses could technically be used for
> > > > > DMA and hence be confusing. However, the first page of external memory
> > > > > is unlikely to be used and special enough to avoid confusion.
> > > > So you are trading a slight memory waste of a single page against a
> > > > sporadic (and probably hard to debug) DMA failure if any device happens
> > > > to initiate DMA to the first page of physical memory? That does not
> > > > sound like a good deal...
> > > > 
> > > > Also why would the first page of external memory be unlikely to be
> > > > used?
> > > > 
> > > > Regards,
> > > > Lucas
> > > We are not wasting a single page of memory here, and if any device's
> > > DMA tries to access it, the access will still go through. It's just
> > > that we are using that same address for MSI (note that MSI writes
> > > don't go beyond the PCIe IP, as they get decoded at the PCIe IP level
> > > itself and only an interrupt goes to the CPU), which might be a bit
> > > confusing, since the same address is used both as normal memory and
> > > as the MSI target address. Since there can never be any issue with
> > > this, would you suggest removing the last paragraph from the commit
> > > description?
> > How does the core distinguish between a normal DMA memory write and
> > an MSI? If I remember the PCIe spec correctly, there aren't any
> > differences between the two besides the target address.
> > 
> > So if you now set a non-reserved region of memory to decode as an
> > MSI at the PCIe host controller level, wouldn't this lead to normal
> > DMA transactions to this address being wrongfully turned into an MSI
> > and the write not reaching the targeted location?
> > 
> > Regards,
> > Lucas
> 
> You are correct that the core cannot distinguish between a normal DMA
> memory write and an MSI. In that case, the only way I see is to
> allocate memory using dma_alloc_coherent() and use the IOVA as the MSI
> target address. That way, a page gets reserved (in a way also wasted,
> as the MSI writes don't really make it to RAM) and there won't be any
> address overlaps with normal DMA writes. I'll push a patch for it.

At that point it's no longer different from the current code, which
simply reserves a single page of memory and uses its address as the MSI
target address.
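
For reference, that pattern boils down to something like the sketch
below. This is illustrative only, not the actual pcie-tegra.c code:
the msi->virt/msi->phys fields, the msi_base_shift handling and the
AFI register programming follow the driver's general style, but the
details here are assumptions.

static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)
{
	struct tegra_msi *msi = &pcie->msi;

	/*
	 * dma_alloc_coherent() returns both a CPU pointer and a
	 * device-visible bus address for the same page. The controller
	 * never actually writes to this page -- MSI writes are decoded
	 * inside the PCIe IP -- but keeping the page allocated
	 * guarantees that no other DMA buffer can overlap the MSI
	 * target address.
	 */
	msi->virt = dma_alloc_coherent(pcie->dev, PAGE_SIZE, &msi->phys,
				       GFP_KERNEL);
	if (!msi->virt)
		return -ENOMEM;

	/* Program the controller to decode writes to this address as MSIs. */
	afi_writel(pcie, msi->phys >> pcie->soc->msi_base_shift,
		   AFI_MSI_FPCI_BAR_ST);
	afi_writel(pcie, lower_32_bits(msi->phys), AFI_MSI_AXI_BAR_ST);
	afi_writel(pcie, 1, AFI_MSI_BAR_SZ);

	return 0;
}

Whichever allocation scheme is used, one page stays reserved for the
lifetime of the driver, which is the point of the comparison above.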

So, in conclusion, this change should just be dropped.

Regards,
Lucas



