Re: [PATCH 0/5] dmaengine: dw: Take Baikal-T1 SoC DW DMAC peculiarities into account

On 06-03-20, 15:30, Andy Shevchenko wrote:
> On Fri, Mar 06, 2020 at 03:29:12PM +0200, Andy Shevchenko wrote:
> > On Fri, Mar 06, 2020 at 04:10:29PM +0300, Sergey.Semin@xxxxxxxxxxxxxxxxxxxx wrote:
> > > From: Serge Semin <fancer.lancer@xxxxxxxxx>
> > > 
> > > The Baikal-T1 SoC has a DW DMAC on-board to provide Mem-to-Mem as well
> > > as low-speed peripherals Dev-to-Mem and Mem-to-Dev functionality. It is
> > > mostly compatible with the DW DMAC driver currently implemented in the
> > > kernel, but there are some peculiarities which must be taken into
> > > account in order to have the device fully supported.
> > > 
> > > First of all, we traditionally replaced the legacy plain-text dt-binding
> > > file with a yaml-based one. Secondly, the Baikal-T1 DW DMA Controller
> > > provides eight channels, which alas have different max burst length
> > > configurations. In particular, the first two channels may burst up to
> > > 128 bits (16 bytes) at a time, while the rest of them only up to 32 bits.
> > > We must make sure that the DMA subsystem doesn't set values exceeding
> > > these limitations, otherwise the controller will hang up. Thirdly, we
> > > discovered a problem in using the DW APB SPI driver together with the
> > > DW DMAC. The problem happens if there is no natively implemented
> > > multi-block LLP transfers support and the SPI-transfer length exceeds
> > > the max block size. In this case, due to the asynchronous handling of
> > > the Tx and Rx SPI transfer interrupts, we might end up with a DW APB SSI
> > > Rx FIFO overflow. So if the DW APB SSI (or any other DMAC service
> > > consumer) intends to use the DMAC to asynchronously execute the
> > > transfers, we'd have to at least warn the user of the possible errors.
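A minimal sketch of how such a per-channel burst cap could be enforced at
slave-config time, assuming a hypothetical per-channel max_burst capability
field (the names below are illustrative, not taken from the actual patches):

#include <linux/kernel.h>
#include <linux/dmaengine.h>

/* Hypothetical per-channel capability: the largest burst (in data items)
 * this particular channel can handle without hanging the controller.
 */
struct chan_burst_cap {
	u32 max_burst;
};

/* Clamp the consumer-requested burst lengths so they never exceed what
 * the channel can physically do.
 */
static void clamp_slave_burst(struct dma_slave_config *cfg,
			      const struct chan_burst_cap *cap)
{
	cfg->src_maxburst = min(cfg->src_maxburst, cap->max_burst);
	cfg->dst_maxburst = min(cfg->dst_maxburst, cap->max_burst);
}
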
> > > 
> > > Finally, there is a bug in the nollp flag detection algorithm. In
> > > particular, even if the DW DMAC parameters state multi-block transfers
> > > support, there is still the HC_LLP (hardcode LLP) flag, which if set
> > > makes the true multi-block LLP functionality expected by the driver
> > > unusable. This happens because if the HC_LLP flag is set, the LLP
> > > registers will be hardcoded to zero, so only contiguous multi-block
> > > transfers will be supported. We must take that flag into account when
> > > detecting the LLP support, otherwise the driver just won't work
> > > correctly.
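
The detection rule being described boils down to something like the
following sketch (plain booleans stand in for the actual parameter register
bits, which are not spelled out here):

#include <linux/types.h>

/* A channel only has usable linked-list (LLP) multi-block support when the
 * parameters report multi-block capability AND the LLP register is not
 * hardcoded to zero; with HC_LLP set, only contiguous multi-block transfers
 * work, so true LLP support must be reported as absent.
 */
static bool dwc_llp_supported(bool mblk_en, bool hc_llp)
{
	return mblk_en && !hc_llp;
}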
> > > 
> > > This patchset is rebased and tested on the mainline Linux kernel 5.6-rc4:
> > > commit 98d54f81e36b ("Linux 5.6-rc4").
> > 
> > Thank you for your series!
> > 
> > I'll definitely review it, but it will take time. So, due to the late
> > submission, I think this is material for v5.8 at the earliest.
> 
> One thing that I can tell immediately is that the email threading in this
> series is broken. Whenever you send a series, use
> `git format-patch --cover-letter --thread ...`, so it will link the mails
> properly.
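
For instance, a threaded five-patch v2 with a cover letter could be prepared
and sent along these lines (the output directory is arbitrary):

git format-patch -v2 --cover-letter --thread=shallow -o outgoing/ -5 HEAD
git send-email --to=dmaengine@vger.kernel.org outgoing/*.patch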

And all the dmaengine-specific patches should be sent to the dmaengine
list; I see only a few of them on the list, which confuses tools like
patchwork.

Pls fix these and resubmit

Thanks
-- 
~Vinod


