Hi Mark

On 4/30/21 5:56 PM, Mark Brown wrote:
> On Fri, Apr 30, 2021 at 04:22:34PM +0200, Patrice CHOTARD wrote:
>> On 4/26/21 6:51 PM, Mark Brown wrote:
>>> On Mon, Apr 26, 2021 at 09:56:12PM +0530, Pratyush Yadav wrote:
>
>>> Is it possible there's some situation where you're waiting for some bits
>>> to clear as well?
>
>> Yes, we are waiting for the STATUS_BUSY bit to be cleared, see patch 2,
>> which makes use of this API.
>
> Then the inverse question applies - is there no circumstance where we
> might be waiting for a bit to be set?
>
>>> We already have the core handling other timeouts. We don't pass around
>>> completions but rather have an API function that the driver has to call
>>> when the operation completes, a similar pattern might work here. Part
>
>> So, if I understood correctly, you are alluding to what is already done
>> in the SPI core framework with spi_finalize_current_transfer(), right?
>
> Yes, and _current_message().
>
>>> of the thing with those APIs which I'm missing here is that this will
>>> just return -EOPNOTSUPP if the driver can't do the delay in hardware, I
>>> think it would be cleaner if this API were similar and the core dealt
>>> with doing the delay/poll on the CPU. That way the users don't need to
>>> repeat the handling for the offload/non-offload cases.
>
>> Sorry, I didn't catch what you mean here. In PATCH 2 that is the case:
>> if spi_mem_poll_status() is not supported, the core deals with the
>> delay/poll on the CPU in spinand_wait().
>
> That's in the NAND core, not in spi-mem. Any other users of spi-mem
> will also need to open-code this.

Ok, got it. I will move what is done in spinand_wait() into
spi_mem_poll_status(), so that the full feature lives in spi-mem and
benefits all spi-mem users, as requested.

Thanks
Patrice
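
P.S. To make sure we are on the same page, below is a rough, untested
sketch of the fallback I plan to implement in spi-mem. The
spi_mem_read_status() helper is purely illustrative, and the
poll_status() callback parameters may not match what patch 1 ends up
with, so please treat the exact signatures as assumptions, not as the
final code:

#include <linux/iopoll.h>
#include <linux/spi/spi-mem.h>

/*
 * Illustrative helper only: run the status-read op once and extract
 * the status value from the op's data buffer.
 */
static int spi_mem_read_status(struct spi_mem *mem,
			       const struct spi_mem_op *op,
			       u16 *status)
{
	const u8 *bytes = op->data.buf.in;
	int ret;

	ret = spi_mem_exec_op(mem, op);
	if (ret)
		return ret;

	if (op->data.nbytes > 1)
		*status = ((u16)bytes[0] << 8) | bytes[1];
	else
		*status = bytes[0];

	return 0;
}

/*
 * mask/match covers both cases raised in the thread: waiting for bits
 * to clear (e.g. mask = STATUS_BUSY, match = 0) and waiting for bits
 * to be set (match != 0).
 */
int spi_mem_poll_status(struct spi_mem *mem, const struct spi_mem_op *op,
			u16 mask, u16 match, u16 timeout_ms)
{
	struct spi_controller *ctlr = mem->spi->controller;
	u16 status = 0;
	int read_ret, ret = -EOPNOTSUPP;

	/* Try the hardware offload first, if the controller provides it. */
	if (ctlr->mem_ops && ctlr->mem_ops->poll_status)
		ret = ctlr->mem_ops->poll_status(mem, op, mask, match,
						 timeout_ms);

	/* Otherwise poll on the CPU, as spinand_wait() does today. */
	if (ret == -EOPNOTSUPP) {
		if (!spi_mem_supports_op(mem, op))
			return ret;

		ret = read_poll_timeout(spi_mem_read_status, read_ret,
					read_ret ||
					(status & mask) == match,
					20, (unsigned long)timeout_ms * 1000,
					false, mem, op, &status);
		if (read_ret)
			return read_ret;
	}

	return ret;
}

With something like this, spinand_wait() and any other spi-mem user
only ever calls spi_mem_poll_status(); whether the wait is offloaded
to the controller or polled on the CPU is handled entirely inside
spi-mem.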