On 23-07-20, 03:58, Serge Semin wrote:
> In the previous patchset I wrote the following message:
>
> > Folks, note I've removed the following patches from the series:
> > [PATCH v7 04/11] dmaengine: Introduce max SG list entries capability
> > [PATCH v7 11/11] dmaengine: dw: Initialize max_sg_nents capability
> > It turns out the problem with the asynchronous handling of Tx and Rx
> > SPI transfer interrupts is more complex than I expected, so splitting
> > the SG list entries submission up based on the max_sg_nents capability
> > setting isn't enough to solve it (though the synchronous one-by-one SG
> > list entries handling does fix a part of the problem). If and when I
> > find a comprehensive solution I'll submit a new series with fixups.
> > Until then, please consider merging the patchset without those patches.
>
> Those patches have been returned to the series. I've found a solution
> which fixes the problem for our hardware. A new patchset with several
> fixes for the DW DMAC driver will be sent shortly after this one is
> merged in. Note that the same concerns the DW APB SPI driver. So please
> review and merge as soon as possible.
>
> Regarding the patchset: the Baikal-T1 SoC has a DW DMAC on board to
> provide Mem-to-Mem functionality as well as Dev-to-Mem and Mem-to-Dev
> transfers for low-speed peripherals. It is mostly compatible with the
> DW DMAC driver currently implemented in the kernel, but there are some
> peculiarities which must be taken into account for the device to be
> fully supported.
>
> First of all, as has become traditional, we replaced the legacy plain
> text-based dt-binding file with a YAML-based one. Secondly, the
> Baikal-T1 DW DMA controller provides eight channels which, alas, have
> different max burst length configurations: the first two channels may
> burst up to 128 bits (16 bytes) at a time, while the rest may only
> burst up to 32 bits. We must make sure the DMA subsystem doesn't set
> values exceeding these limits, otherwise the controller will hang.
> Thirdly, we discovered a problem when using the DW APB SPI driver
> together with the DW DMAC: if there is no native multi-block LLP
> transfer support and the SPI transfer length exceeds the max block
> size, then, due to the asynchronous handling of the Tx and Rx SPI
> transfer interrupts, we may end up with a DW APB SSI Rx FIFO overflow.
> So if the DW APB SSI (or any other DMAC service consumer) intends to
> use the DMAC to execute transfers asynchronously, we have to at least
> warn the user of the possible errors. Fourthly, it's worth setting the
> DMA device max segment size from the max block size config specific to
> the DW DMA controller. That helps DMA clients create size-optimized
> SG-list items for the controller, which in turn means fewer dw_desc
> allocations, fewer LLP reinitializations and better DMA device
> performance.
>
> Finally, there is a bug in the nollp flag detection algorithm. In
> particular, even if the DW DMAC parameters state that multi-block
> transfers are supported, there is still the HC_LLP (hardcode LLP)
> flag, which, if set, makes the true multi-block LLP functionality
> expected by the driver unusable. This happens because, with the HC_LLP
> flag set, the LLP registers are hardcoded to zero, so only contiguous
> multi-block transfers are supported. We must take this flag into
> account when detecting LLP support, otherwise the driver just won't
> work correctly.

Applied all, thanks

-- 
~Vinod
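
As an aside on the per-channel burst limits mentioned above, a minimal
sketch of how a client-supplied slave config could be clamped to a
channel-specific maximum might look as follows; the helper name and the
max_burst parameter are assumptions for illustration, not the driver's
actual code:

#include <linux/dmaengine.h>
#include <linux/minmax.h>
#include <linux/types.h>

/*
 * Clamp the client-requested burst lengths to a channel-specific limit
 * (expressed, like the config fields, in data items per burst) so the
 * controller is never programmed with a burst it cannot handle. The
 * "max_burst" value is assumed to come from the platform/channel data.
 */
static void dwc_clamp_maxburst(struct dma_slave_config *cfg, u32 max_burst)
{
	cfg->src_maxburst = min(cfg->src_maxburst, max_burst);
	cfg->dst_maxburst = min(cfg->dst_maxburst, max_burst);
}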
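
For the fourth point, the kernel already provides dma_set_max_seg_size()
for advertising a device's maximum segment size; here is a minimal
sketch, assuming the controller's max block size has already been
detected (the wrapper name is illustrative):

#include <linux/device.h>
#include <linux/dma-mapping.h>

/*
 * Advertise the controller's max block size as the DMA device's max
 * segment size so clients build SG entries that fit a single block,
 * avoiding extra dw_desc allocations and LLP reinitializations.
 */
static int dw_dma_advertise_seg_size(struct device *dev, unsigned int block_size)
{
	return dma_set_max_seg_size(dev, block_size);
}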
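
And for the nollp detection issue, a standalone sketch of the intended
check; the DWC_PARAMS bit positions below are assumptions made for the
example, not the real register layout:

#include <stdbool.h>
#include <stdint.h>

#define DWC_PARAMS_MBLK_EN	(1u << 11)	/* assumed: multi-block capable */
#define DWC_PARAMS_HC_LLP	(1u << 13)	/* assumed: LLP hardcoded to zero */

/*
 * True LLP-based multi-block transfers are only possible when the
 * multi-block feature is present AND the LLP register isn't hardcoded.
 * Testing MBLK_EN alone is the bug described above: with HC_LLP set,
 * only contiguous multi-block transfers remain usable.
 */
static bool dwc_has_llp(uint32_t dwc_params)
{
	return (dwc_params & DWC_PARAMS_MBLK_EN) &&
	       !(dwc_params & DWC_PARAMS_HC_LLP);
}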