> On 09.05.2015, at 00:13, Mark Brown <broonie@xxxxxxxxxx> wrote:
>
> On Fri, May 08, 2015 at 10:02:44PM +0200, Martin Sperl wrote:
>
>> Based on the findings by Noralf, the following solves the issue:
>
> Can you please post this upstream as a proper submission so I can review
> and apply it (I'm not going to look at it properly right now, it's just
> gone 11pm, but at first glance it doesn't ring any alarm bells)?
>
>> Note that I am not sure about the locking, so I left unmap outside the
>> lock for now.
>
> It needs to be, you can't do anything big like unmapping in a
> non-sleeping context.

Well, right now it is not inside a locking context but outside of it, so
I wonder if that is really needed - but then it is not specified whether
spi_finalize_current_message() may also get run from interrupt context
or not.

I will do something quick now, but I guess we should revisit the
SPI_MASTER_MUST_RX/TX situation, as there are some side effects that may
not be intended or performance friendly, like:

* dummy_rx/tx get allocated/released even when can_dma returns false;
  this should only happen when the mapping is actually necessary. I
  guess it is mostly useful for DMA transfers - PIO drivers typically
  handle a NULL tx/rx_buf with a simple if (see the first sketch below).

* dummy_rx/tx get allocated/released for every spi_message, which
  wastes CPU time.

* we allocate spi_transfer->len bytes for the dummy buffer, so a
  spi_transfer of 16MB would try to allocate another 16MB - and possibly
  fail... Here a fixed, pre-allocated (and pre-mapped) buffer of
  master->max_dma_len bytes could be helpful; it could get reused
  multiple times in the scatter/gather list to minimize the memory
  footprint (see the second sketch below).

Ideally there would be support for this inside dmaengine in the form of
a flag passed to dmaengine_prep_slave_sg() to indicate that the data
comes from /dev/zero or goes to /dev/null, as some silicon DMA engines
(like the bcm2835) support such transfers in hardware.

Martin
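
As a rough illustration of the PIO point above - this is only a sketch;
the driver struct, register offsets and helper name are made up and not
taken from any real driver:

struct my_spi {
	void __iomem *regs;		/* made-up driver state */
};

/* made-up register offsets, for illustration only */
#define MY_FIFO_TX	0x00
#define MY_FIFO_RX	0x04

static void my_pio_xfer_byte(struct my_spi *bs, struct spi_transfer *xfer,
			     unsigned int i)
{
	/* a NULL tx_buf just means "clock out dummy zeros" ... */
	u8 out = xfer->tx_buf ? ((const u8 *)xfer->tx_buf)[i] : 0;
	u8 in;

	writel(out, bs->regs + MY_FIFO_TX);
	in = readl(bs->regs + MY_FIFO_RX);

	/* ... and a NULL rx_buf means "discard whatever was clocked in" */
	if (xfer->rx_buf)
		((u8 *)xfer->rx_buf)[i] = in;
}

So for PIO there is no real need for the core to allocate any dummy
buffer at all.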
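
And a rough sketch of the fixed-size dummy buffer idea - the helper name
and the dummy_buf argument are made up, but sg_alloc_table()/sg_set_buf()
and master->max_dma_len are existing interfaces; every scatterlist entry
simply points at the same small buffer:

static int my_map_dummy_sg(struct spi_master *master, struct sg_table *sgt,
			   void *dummy_buf, size_t len)
{
	size_t chunk = master->max_dma_len ?: PAGE_SIZE;
	unsigned int nents = DIV_ROUND_UP(len, chunk);
	struct scatterlist *sg;
	int i, ret;

	ret = sg_alloc_table(sgt, nents, GFP_KERNEL);
	if (ret)
		return ret;

	for_each_sg(sgt->sgl, sg, nents, i) {
		size_t this_len = min(chunk, len);

		/* every entry references the same small dummy buffer */
		sg_set_buf(sg, dummy_buf, this_len);
		len -= this_len;
	}

	return 0;
}

That way even a 16MB transfer with a NULL rx/tx buffer would only ever
need master->max_dma_len bytes of dummy memory (plus the sg_table).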