Re: [PATCH V9 13/15] mmc: block: Add CQE and blk-mq support

On 04/10/17 10:39, Linus Walleij wrote:
> On Fri, Sep 22, 2017 at 2:37 PM, Adrian Hunter <adrian.hunter@xxxxxxxxx> wrote:
> 
>> Add CQE support to the block driver, including:
>>     - optionally using DCMD for flush requests
>>     - "manually" issuing discard requests
>>     - issuing read / write requests to the CQE
>>     - supporting block-layer timeouts
>>     - handling recovery
>>     - supporting re-tuning
>>
>> CQE offers 25% - 50% better random multi-threaded I/O.  There is a slight
>> (e.g. 2%) drop in sequential read speed but no observable change to sequential
>> write.
>>
>> CQE automatically sends the commands to complete requests.  However it only
>> supports reads / writes and so-called "direct commands" (DCMD).  Furthermore,
>> DCMD is limited to one command at a time, but discards require 3 commands.
>> That makes issuing discards through CQE very awkward, but some CQEs don't
>> support DCMD anyway.  So for discards, the existing non-CQE approach is
>> taken, where the mmc core code issues the 3 commands one at a time, i.e.
>> mmc_erase().  DCMD is used for issuing flushes.
>>
>> For host controllers without CQE support, blk-mq support is extended to
>> synchronous reads/writes or, if the host supports CAP_WAIT_WHILE_BUSY,
>> asynchronous reads/writes.  The advantage of asynchronous reads/writes is
>> that it allows the preparation of the next request while the current
>> request is in progress.
>>
>> Signed-off-by: Adrian Hunter <adrian.hunter@xxxxxxxxx>
> 
> I am trying to wrap my head around this large patch. The size makes it hard
> but I am doing my best.
> 
> Some overarching questions:
> 
> - Is the CQE only available on the MQ path (i.e. if you enabled MQ) or
>   on both paths?

Only MQ.
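
To give an idea of the shape, the issue side of the MQ path dispatches on
the request type roughly as in the sketch below.  The helper names
(mmc_issue_dcmd_flush() and friends) are illustrative stand-ins, not the
exact functions in the patch:

#include <linux/blk-mq.h>

struct mmc_queue;	/* driver-local queue context */

/* Stand-ins for the real issue helpers */
int mmc_issue_dcmd_flush(struct mmc_queue *mq, struct request *req);
int mmc_issue_discard_sync(struct mmc_queue *mq, struct request *req);
int mmc_cqe_start_req(struct mmc_queue *mq, struct request *req);

static int mmc_cqe_issue_rq(struct mmc_queue *mq, struct request *req)
{
	switch (req_op(req)) {
	case REQ_OP_FLUSH:
		/* A flush is a single command, so DCMD can carry it */
		return mmc_issue_dcmd_flush(mq, req);
	case REQ_OP_DISCARD:
		/* Discards need 3 commands: fall back to the mmc_erase() path */
		return mmc_issue_discard_sync(mq, req);
	case REQ_OP_READ:
	case REQ_OP_WRITE:
		/* Reads / writes are queued to the CQE as tasks */
		return mmc_cqe_start_req(mq, req);
	default:
		return -EINVAL;
	}
}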

> 
> I think it is reasonable that if we introduce a new feature like this,
> it will only be available for the new block path. This reflects how the
> block maintainers, for example, only allow new scheduling policies to
> be merged on the MQ path. The old block layer is legacy and should not
> be extended with new cool features that can then be regarded as
> "regressions" if they don't work properly with MQ. Better to implement
> them only for MQ, then.

I don't agree that holding up features for years and years is an ideal outcome.

> 
> - Performance before/after path on MQ?

When the host controller supports asynchronous requests, it should be the
same, but as I wrote in the cover message, the block layer workqueue had
worse latency than a dedicated thread.  That seems to be related to the
workqueue being bound to a CPU.
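
Just to illustrate the difference in completion context, here is a minimal
sketch; mmc_setup_completion() and mmc_complete_fn() are made-up names,
while alloc_workqueue() and kthread_run() are the stock kernel APIs:

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/workqueue.h>
#include <linux/mmc/host.h>

int mmc_complete_fn(void *data);	/* stand-in completion loop */

static int mmc_setup_completion(struct mmc_host *host, void *data)
{
	/* An unbound workqueue lets the scheduler pick the CPU ... */
	struct workqueue_struct *wq =
		alloc_workqueue("mmc_complete", WQ_UNBOUND | WQ_HIGHPRI, 0);

	/* ... whereas a dedicated thread, like the old mmcqd, is always
	 * free to migrate, and showed better latency in my measurements */
	struct task_struct *t =
		kthread_run(mmc_complete_fn, data, "mmcqd/%d", host->index);

	if (!wq || IS_ERR(t))
		return -ENOMEM;
	return 0;
}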

> 
> I tested this very extensively when working with my (now dormant) MQ
> patch set. Better/equal/worse?
> (https://marc.info/?l=linux-mmc&m=148665788227015&w=2)
> 
> The reason my patch set contained refactorings of async post-processing,
> removed the waitqueues and the context info, up to the point where I
> can issue requests in parallel, i.e. complete requests from the ->done()
> callback on the host and immediately let the core issue the next one,
> was performance issues.

Asynchronous requests are completed in the ->done() callback.

It is not exactly accurate to say that requests can be issued in parallel:
eMMC/SD cards can only do one thing at a time.  What asynchronous requests
do is let the driver prepare the next request without first having to
wait for the previous request.
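
Concretely, the overlap comes from the existing pre_req() / post_req()
host hooks: while request N is on the bus, request N+1 can already be
DMA-mapped.  A rough sketch, where mmc_sketch_wait() and
mmc_sketch_start() stand in for the real wait/issue helpers:

#include <linux/mmc/host.h>

/* Stand-ins for this sketch */
void mmc_sketch_wait(struct mmc_request *prev);
void mmc_sketch_start(struct mmc_host *host, struct mmc_request *next);

static void mmc_pipeline_sketch(struct mmc_host *host,
				struct mmc_request *prev,
				struct mmc_request *next)
{
	/* DMA-map / set up 'next' while 'prev' is still on the bus */
	if (host->ops->pre_req)
		host->ops->pre_req(host, next);

	/* 'prev' completes via the host's ->done() callback */
	mmc_sketch_wait(prev);

	if (host->ops->post_req)
		host->ops->post_req(host, prev, 0);

	/* 'next' can start with no preparation gap */
	mmc_sketch_start(host, next);
}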

> 
> It's these patches from the old patch set:
> 
>   mmc: core: move some code in mmc_start_areq()
>   mmc: core: refactor asynchronous request finalization
>   mmc: core: refactor mmc_request_done()
>   mmc: core: move the asynchronous post-processing
>   mmc: core: add a kthread for completing requests
>   mmc: core: replace waitqueue with worker
>   mmc: core: do away with is_done_rcv
>   mmc: core: do away with is_new_req
>   mmc: core: kill off the context info
>   mmc: queue: simplify queue logic
>   mmc: block: shuffle retry and error handling
>   mmc: queue: stop flushing the pipeline with NULL
>   mmc: queue: issue struct mmc_queue_req items
>   mmc: queue: get/put struct mmc_queue_req
>   mmc: queue: issue requests in massive parallel
> 
> I.e. I made 15 patches just to make sure the new block layer did not
> regress performance. The MQ-switch patch was just the 16th and final
> step.
> 
> Most energy went into that and I think it will be necessary still to work
> with MQ in the long haul.
> 
> I am worried that this could add a second MQ execution path that
> performs worse than the legacy block path for the above reason, i.e.
> it doesn't really take advantage of the MQ speedups: it marshals the
> requests, doesn't do away with the waitqueues, and doesn't complete
> requests out-of-order. Then we get stuck with a lump of MQ code that
> doesn't perform, and therefore we cannot switch seamlessly to MQ.

The new patches have support for asynchronous requests and, consequently,
for preparing the next request before the previous one completes.  There
is no need for another execution path.


