Re: [PATCH v4 01/16] spi: dw: Add Tx/Rx finish wait methods to the MID DMA

On Fri, May 22, 2020 at 04:22:41PM +0100, Mark Brown wrote:
> On Fri, May 22, 2020 at 05:45:42PM +0300, Serge Semin wrote:
> > On Fri, May 22, 2020 at 05:36:39PM +0300, Andy Shevchenko wrote:
> 
> > > My point is: let's warn and see if anybody comes with a bug report. We will
> > > solve an issue when it appears.
> 
> > In my environment the stack trace happened (strictly speaking it has been a
> > BUG() invoked due to the sleep_range() called within the tasklet) when SPI bus
> > had been enabled to work with !8MHz! clock. It's quite normal bus speed.
> > So we'll get the bug report pretty soon.)
> 
> Right, that definitely needs to be fixed then - 8MHz is indeed a totally
> normal clock rate for SPI so people will hit it.  I guess if there's a
> noticeable performance hit to defer to thread then we could implement
> both and look at how long the delay is going to be to decide which to
> use, that's annoyingly complicated though so if the overhead is small
> enough we could just not bother.

As I suggested before, we can implement a solution without a performance drop:
just wait for the DMA completion locally in the dw_spi_dma_transfer() method and
return 0 instead of 1 from the transfer_one() callback. In that method we'll
wait until the DMA engine finishes its work; after that we can check the Tx/Rx
FIFO emptiness and wait for the data to be completely transferred, using delays,
sleeps or whatever is appropriate.

There are several drawbacks to this solution:
1) We need to alter the dw_spi_transfer_one() method so it returns 0 instead of
1 for DMA transfers, so the generic spi_transfer_one_message() method knows the
transfer has already finished and doesn't need to wait in spi_transfer_wait()
(see the sketch right after this list).
2) Locally in dw_spi_dma_transfer() I have to implement a method similar to
spi_transfer_wait(). It won't be identical though: we can predict the completion
timeout better here because we know a more exact SPI bus frequency. In all other
respects the functions will be nearly the same.
3) Not using spi_transfer_wait() means we also have to increment the SPI
timeout statistics locally.
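
Regarding point 1, a minimal sketch of what the change could look like, assuming
the rest of dw_spi_transfer_one() (chip, bpw and clock-divider setup, the PIO
branch) stays as it is today; the surrounding structure here is from memory, not
a verbatim copy of the driver:

+static int dw_spi_transfer_one(struct spi_controller *master,
+			       struct spi_device *spi,
+			       struct spi_transfer *transfer)
+{
+	struct dw_spi *dws = spi_controller_get_devdata(master);
+	int ret;
+
+	/* ... chip, bpw and clock-divider setup as it is now ... */
+
+	if (dws->dma_mapped) {
+		/*
+		 * The DMA path now waits for the completion itself, so we
+		 * return 0 (or a negative error) instead of 1 and the core
+		 * spi_transfer_one_message() skips spi_transfer_wait().
+		 */
+		ret = dws->dma_ops->dma_transfer(dws, transfer);
+		return ret;
+	}
+
+	/* The PIO path stays IRQ-driven and returns 1 so the core waits */
+	spi_enable_chip(dws, 1);
+	return 1;
+}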

So the local wait method will look something like this:

+static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	struct spi_statistics *statm = &dws->master->statistics;
+	struct spi_statistics *stats = &dws->master->cur_msg->spi->statistics;
+	unsigned long ms;
+
+	ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
+	ms /= xfer->effective_speed_hz;
+	ms += ms + 200;
+
+	ms = wait_for_completion_timeout(&dws->xfer_completion,
+					 msecs_to_jiffies(ms));
+
+	if (ms == 0) {
+		SPI_STATISTICS_INCREMENT_FIELD(statm, timedout);
+		SPI_STATISTICS_INCREMENT_FIELD(stats, timedout);
+		dev_err(&dws->master->cur_msg->spi->dev,
+			"SPI transfer timed out\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}

NOTE: Currently the DW APB SSI driver doesn't set xfer->effective_speed_hz,
though as far as I can see that field exists precisely to be initialized by the
SPI controller driver, right? If so, it's strange it isn't done in any SPI
drivers...
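
For completeness, initializing it could be a one-liner next to where the clock
divider is programmed. The snippet below is only a sketch: the clk_div
calculation mirrors what the driver already does today, but the exact spot and
the variable names are my assumptions:

+	/* In dw_spi_transfer_one(), where the transfer speed is applied */
+	if (transfer->speed_hz != dws->current_freq) {
+		chip->clk_div = (DIV_ROUND_UP(dws->max_freq, transfer->speed_hz) + 1) & 0xfffe;
+		spi_set_clk(dws, chip->clk_div);
+		dws->current_freq = transfer->speed_hz;
+	}
+	/* Report the actually programmed rate back to the SPI core */
+	transfer->effective_speed_hz = dws->max_freq / chip->clk_div;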

Then we can use that method to wait for the completion of the DMA transfers:

+static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	...
+	/* DMA channels/buffers preparation and the transfers execution */
+	...
+
+	ret = dw_spi_dma_wait(dws, xfer);
+	if (ret)
+		return ret;
+
+	ret = dw_spi_dma_wait_tx_done(dws);
+	if (ret)
+		return ret;
+
+	ret = dw_spi_dma_wait_rx_done(dws);
+	if (ret)
+		return ret;
+
+	return 0;
+}
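
The dw_spi_dma_wait_tx_done()/dw_spi_dma_wait_rx_done() helpers above are the
ones this patch series is about. Just to illustrate the idea (not necessarily
how the patch implements it; the retry limit and sleep interval below are
placeholders), the Tx side boils down to polling the controller status until the
Tx FIFO drains, which we are now allowed to do with sleeps since we run in the
transfer_one() thread context:

+static int dw_spi_dma_wait_tx_done(struct dw_spi *dws)
+{
+	int retry = 128;	/* placeholder retry limit */
+
+	/* SR_TF_EMPT is set once the controller has drained the Tx FIFO */
+	while (!(dw_readl(dws, DW_SPI_SR) & SR_TF_EMPT) && retry--)
+		usleep_range(10, 20);
+
+	if (retry < 0) {
+		dev_err(&dws->master->dev, "Tx FIFO drain wait timed out\n");
+		return -EIO;
+	}
+
+	return 0;
+}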

What do you think about this?

If you don't mind, I'll send this fixup separately from the patchset we're
discussing here, since it's going to be a series of patches. What would be
better for you: implementing it on top of the current DW APB SSI driver, or on
top of this patchset "spi: dw: Add generic DW DMA controller support" (the one
under review in this email thread)? Anyway, if the fixup turns out to be that
complicated, will it still have to be backported to the stable kernels?

-Sergey


