Re: [PATCH] spi: dw: Fix wrong FIFO level setting for long xfers

On Fri, Jan 13, 2023 at 07:33:16PM +0200, Andy Shevchenko wrote:
> On Fri, Jan 13, 2023 at 6:57 PM Serge Semin
> <Sergey.Semin@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > Due to using the u16 type in the min_t() macros the SPI transfer length
> > is cast to a word before taking part in the comparison implied by the
> > macro. Thus if the transfer length is greater than 64KB the Tx/Rx FIFO
> > threshold level value will be determined by the remainder of the length
> > truncated by the type cast. In the worst case this will cause the
> > "Tx FIFO Empty" or "Rx FIFO Full" interrupts to be triggered for each
> > word sent to or received from the bus, which in turn causes a dramatic
> > performance drop.
> >
> > The problem can be easily fixed by using the min() macro instead of
> > min_t(), since min() doesn't imply any type casting and thus prevents
> > the possible data loss.
> 

> But this would be problematic if the types of the parameters are different.
> Currently they are u32 vs. unsigned int.

Yes, it would, but only if somebody changes their types. As you said,
they are currently u32 and unsigned int, which are the same on all the
currently supported platforms. So if somebody changes the type of
either of them, the compiler will warn about it anyway.

> I would rather assume that
> FIFO length is always less than or equal to 64K and just change the
> type in min_t to follow what dws->tx_len is.

There is no need for assumptions in this case. The FIFO depth doesn't
exceed 256 xfer words by the DW SSI IP-core design (judging by the
constraints applied to the SSI_RX_FIFO_DEPTH and SSI_TX_FIFO_DEPTH
synthesis parameters), so dws->fifo_len could easily be converted to
the u16 type. The problem is the tx_len field being cast to u16. It's
a rare case, but an SPI transfer length can be greater than 64K. The
spi_transfer.len field is of the unsigned int type and the SPI core
doesn't apply any constraints to it (except those defined by the
controller drivers).
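
For illustration, here is a quick user-space sketch (not the driver
code itself): the min_t() below is a simplified stand-in for the
kernel macro, and fifo_len/tx_len are made-up stand-ins for
dws->fifo_len and dws->tx_len. It just shows how a >64K length loses
its upper bits through the u16 cast:

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for the kernel's min_t() */
#define min_t(type, x, y) ((type)(x) < (type)(y) ? (type)(x) : (type)(y))

int main(void)
{
	uint32_t fifo_len = 256;	/* max DW SSI FIFO depth in words */
	uint32_t tx_len = 0x10001;	/* 64K + 1 words to transfer */

	/* (uint16_t)0x10001 == 1, so level becomes 1 instead of 128 */
	printf("level = %u\n",
	       (unsigned int)min_t(uint16_t, fifo_len / 2, tx_len));

	return 0;
}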

So, to make sure I correctly understand what you mean: do you suggest
doing something like this (it was my first version of the fix):
-	level = min_t(u16, dws->fifo_len / 2, dws->tx_len);
+	level = min_t(u32, dws->fifo_len / 2, dws->tx_len);
or even like this
-	level = min_t(u16, dws->fifo_len / 2, dws->tx_len);
+	level = min_t(typeof(dws->tx_len), dws->fifo_len / 2, dws->tx_len);
?

Personally I would prefer either my solution, which just uses the min()
macro (and which, if the types are ever changed, will give a
compile-time warning about the type mismatch), or the min_t(u32, ...)
version (using typeof() seems like overkill). I don't see much
difference (do you?). Both versions have their pros and cons.
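
For reference, a rough user-space approximation of the type-checked
min() idea; this is just a sketch relying on GCC/Clang extensions, not
the actual include/linux/minmax.h implementation:

#include <stdio.h>

/*
 * Sketch of a min() that rejects mismatched operand types at compile
 * time. It approximates why the plain min() variant would catch a
 * future type change instead of silently truncating one operand.
 */
#define min_checked(x, y) ({						\
	typeof(x) _x = (x);						\
	typeof(y) _y = (y);						\
	_Static_assert(__builtin_types_compatible_p(typeof(x),		\
						     typeof(y)),	\
		       "min_checked: mismatched types");		\
	_x < _y ? _x : _y;						\
})

int main(void)
{
	unsigned int fifo_len = 256;	/* stand-in for dws->fifo_len */
	unsigned int tx_len = 0x10001;	/* stand-in for dws->tx_len */

	/* Same type on both sides: compiles fine, nothing is truncated */
	printf("level = %u\n", min_checked(fifo_len / 2, tx_len));

	return 0;
}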

-Serge(y)

> 
> -- 
> With Best Regards,
> Andy Shevchenko


