On Tue, Mar 26, 2024 at 12:05:42PM +0100, Niklas Cassel wrote:
> On Tue, Mar 26, 2024 at 01:56:36PM +0530, Manivannan Sadhasivam wrote:
> > On Fri, Mar 22, 2024 at 05:10:06PM +0100, Niklas Cassel wrote:
> > > On Thu, Mar 14, 2024 at 08:53:44PM +0530, Manivannan Sadhasivam wrote:
> > > > To maintain uniformity across EPF drivers, let's move the DMA
> > > > initialization to EPC init callback. This will also allow us to deinit DMA
> > > > during PERST# assert in the further commits.
> > > >
> > > > For EPC drivers without PERST#, DMA deinit will only happen during driver
> > > > unbind.
> > > >
> > > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@xxxxxxxxxx>
> > > > ---
> > >
> > > Reviewed-by: Niklas Cassel <cassel@xxxxxxxxxx>
> > >
> > >
> > > For the record, I was debugging a problem related to EPF DMA recently
> > > and was dumping the DMA mask for the struct device of the epf driver.
> > > I was a bit confused to see it as 32-bits, even though the EPC driver
> > > has it set to 64-bits.
> > >
> > > The current code works, because e.g., pci_epf_test_write(), etc,
> > > does:
> > > struct device *dma_dev = epf->epc->dev.parent;
> > > dma_map_single(dma_dev, ...);
> > >
> > > but it also means that all EPF drivers will do this uglyness.
> >
> > This ugliness is required as long as the dmaengine is associated only with the
> > EPC.
> >
> > > However, if a EPF driver does e.g.
> > > dma_alloc_coherent(), and sends in the struct *device for the EPF,
> > > which is the most logical thing to do IMO, it will use the wrong DMA
> > > mask.
> > >
> > > Perhaps EPF or EPC code should make sure that the struct *device
> > > for the EPF will get the same DMA mask as epf->epc->dev.parent,
> > > so that EPF driver developer can use the struct *epf when calling
> > > e.g. dma_alloc_coherent().
> >
> > Makes sense. I think it can be done during bind() in the EPC core. Feel free to
> > submit a patch if you like, otherwise I'll keep it in my todo list.
> So we still want to test:
> -DMA API using the eDMA
> -DMA API using the "dummy" memcpy dma-channel.
>

IMO, the test driver should just test one form of data transfer: either
CPU memcpy (using iATU or something similar) or DMA. But I think the
motive behind using DMA memcpy is to support platforms that do not pass
DMA slave channels in devicetree. That is applicable to the test driver
but not to the MHI driver, since all MHI platforms that support DMA will
pass the DMA slave channels in devicetree.

> However, it seems like both pci-epf-mhi.c and pci-epf-test.c
> do either:
> -Use DMA API
> or
> -Use memcpy_fromio()/memcpy_toio() instead of DMA API
>
>
> To me, it seems like we should always be able to use
> DMA API (using either a eDMA or "dummy" memcpy).
>

No, there are platforms that don't support DMA at all, like Qcom SDX55,
so we still need to do CPU memcpy.

> I don't really see the need to have the path that does:
> memcpy_fromio()/memcpy_toio().
>
> I know that for DWC, when using memcpy (and this also
> memcpy via DMA API), we need to map the address using
> iATU first.
>
> But that could probably be done using another flag,
> perhaps rename that flag FLAG_USE_DMA to NEEDS_MAP or
> something.
> (Such that we can change these drivers to only have a
> code path that uses DMA API.)
> (...and making sure that inheriting the DMA mask does
> not affect the DMA mask for DMA_MEMCPY.)
>
> But perhaps I am missing something... and DMA_MEMCPY is
> not always available?
>

Yes, DMA_MEMCPY is not always available.

- Mani

-- 
மணிவண்ணன் சதாசிவம்
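
For reference, a rough sketch of what inheriting the DMA mask during
bind() in the EPC core could look like. This is only an illustration, not
an actual patch: the helper name pci_epf_inherit_dma_mask() and where it
would be called from are hypothetical, while the device pointers and DMA
API calls follow the epf->epc->dev.parent usage discussed above.

/* Assumes <linux/dma-mapping.h> and <linux/pci-epf.h>. */

/*
 * Hypothetical helper (illustrative only): copy the EPC parent's DMA
 * mask onto the EPF device, so that EPF drivers can pass &epf->dev
 * directly to dma_alloc_coherent(), dma_map_single(), etc.
 */
static void pci_epf_inherit_dma_mask(struct pci_epf *epf)
{
	struct device *epc_dev = epf->epc->dev.parent;

	/*
	 * The EPF device has no bus-assigned dma_mask pointer, so point
	 * it at its own coherent_dma_mask before setting the mask.
	 */
	epf->dev.dma_mask = &epf->dev.coherent_dma_mask;
	dma_set_mask_and_coherent(&epf->dev, dma_get_mask(epc_dev));
}

Something like this could be called by the EPC core around the time the
EPF driver's bind() is invoked, so every EPF driver sees a consistent
mask without open-coding epf->epc->dev.parent.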