Re: [PATCH 06/16] mmc: core: replace waitqueue with worker

On Fri, Mar 10, 2017 at 3:21 PM, Adrian Hunter <adrian.hunter@xxxxxxxxx> wrote:
> On 10/03/17 00:49, Linus Walleij wrote:
>> On Wed, Feb 22, 2017 at 2:29 PM, Adrian Hunter <adrian.hunter@xxxxxxxxx> wrote:
>>> On 09/02/17 17:33, Linus Walleij wrote:

>>>> This is a central change that let us do many other changes since
>>>> we have broken the submit and complete code paths in two, and we
>>>> can potentially remove the NULL flushing of the asynchronous
>>>> pipeline and report block requests as finished directly from
>>>> the worker.
>>>
>>> This needs more thought.  The completion should go straight to the mmc block
>>> driver from the ->done() callback.  And from there straight back to the
>>> block layer if recovery is not needed.  We want to stop using
>>> mmc_start_areq() altogether because we never want to wait - we always want
>>> to issue (if possible) and return.
>>
>> I don't quite follow this. Isn't what you request exactly what
>> patch 15/16 "mmc: queue: issue requests in massive parallel"
>> is doing?
>
> There is the latency for the worker that runs mmc_finalize_areq() and then
> another latency to wake up the worker that is running mmc_start_areq().
> That is 2 wake-ups instead of 1.

That is correct (though the measured effect is small).

However, when we switch to MQ it has to happen like this, because of the
asynchronous way MQ issues requests to us.

Then we have MQ's submission thread coming in from one end and our worker
managing retries and errors on the other. We obviously cannot do the
retries and resends in MQ's context, as that would block subsequent requests.

> As a side note, ideally we would be able to issue the next request from the
> interrupt or soft interrupt context of the completion (i.e. 0 wake-ups
> between requests), but we would probably have to look at the host API to
> support that.

I looked at that and couldn't find a good way to get to that point.

Mainly because of mmc_start_bkops(), which may fire arbitrarily after any
command and start new requests to the card, and that of course requires
process context. Then there is retuning, which I do not fully understand
how it is scheduled, but it might be fine, since I'm under the impression
that it is done at the start of the next request if need be. Maybe both can
be handled by quick checks in IRQ context, and then this could be done.
(I'm not smart enough to see that yet, sorry.)

However, since we activate the blocking context in MQ, I don't know whether
it can even deal with requests being completed in interrupt context, so that
the next thing to happen after completing the request and returning from the
interrupt is that the block layer thread gets scheduled (unless something
more important is going on). I guess it is possible?
It looks like it could be really efficient.

>>> For the blk-mq port, the queue thread should also be retained, partly
>>> because it solves some synchronization problems, but mostly because, at this
>>> stage, we anyway don't have solutions for all the different ways the driver
>>> can block.
>>> (as listed here https://marc.info/?l=linux-mmc&m=148336571720463&w=2 )
>>
>> Essentially I take out that thread and replace it with this one worker
>> introduced in this very patch. I agree the driver can block in many ways
>> and that is why I need to have it running in process context, and this
>> is what the worker introduced here provides.
>
> The last time I looked at the blk-mq I/O scheduler code, it pulled up to
> qdepth requests from the I/O scheduler and left them on a local list while
> running ->queue_rq().  That means blocking in ->queue_rq() leaves some
> number of requests in limbo (not issued but also not in the I/O scheduler)
> for that time.

I think Jens provided a patch for this bug (I don't see the patch
upstream though).

Yours,
Linus Walleij