Re: [PATCH 2/2] mmc: rtsx: add support for async request

On 06/18/2014 07:03 PM, Ulf Hansson wrote:
On 18 June 2014 12:08, micky <micky_ching@xxxxxxxxxxxxxx> wrote:
On 06/18/2014 03:39 PM, Ulf Hansson wrote:
On 18 June 2014 03:17, micky <micky_ching@xxxxxxxxxxxxxx> wrote:
On 06/17/2014 03:45 PM, Ulf Hansson wrote:
On 17 June 2014 03:04, micky <micky_ching@xxxxxxxxxxxxxx> wrote:
On 06/16/2014 08:40 PM, Ulf Hansson wrote:
On 16 June 2014 11:09, micky <micky_ching@xxxxxxxxxxxxxx> wrote:
On 06/16/2014 04:42 PM, Ulf Hansson wrote:
@@ -36,7 +37,10 @@ struct realtek_pci_sdmmc {
        struct rtsx_pcr         *pcr;
        struct mmc_host         *mmc;
        struct mmc_request      *mrq;
+       struct workqueue_struct *workq;
+#define SDMMC_WORKQ_NAME       "rtsx_pci_sdmmc_workq"

+       struct work_struct      work;
I am trying to understand why you need a work/workqueue to implement
this feature. Is that really the case?

Could you elaborate on the reasons?
Hi Uffe,

we need to return as fast as possible from the mmc_host_ops request
(ops->request) callback, so the mmc core can continue handling the next
request. Once the next request is fully prepared, the core waits for the
previous one to finish (if it has not finished yet) and then calls
ops->request() again.

we can't use atomic context, because we use mutex_lock() to protect the
shared resource.
ops->request() should never be executed in atomic context. Is that your
concern?
Yes.
Okay. Unless I missed your point, I don't think you need the
work/workqueue.
any other method?

Because ops->request() isn't ever executed in atomic context. That's
because the mmc core, which handles the async mechanism, waits for a
completion variable in process context before it invokes the
ops->request() callback.

That completion variable will be kicked from your host driver when you
invoke mmc_request_done().
Sorry, I don't understand here - how is it kicked?
mmc_request_done()
      ->mrq->done()
          ->mmc_wait_done()
              ->complete(&mrq->completion);

I think the flow is:
- no waiting for the first request
- init mrq->done
- ops->request()                      --- A. rtsx: queue the work
- continue to fetch the next request
- prepare the next request
- wait for the previous one to finish --- B. (mmc_request_done() may be
  called at any time between A and B)
- init mrq->done
- ops->request()                      --- C. rtsx: queue the next work
...
and it seems there is no problem.
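
A simplified sketch of that flow. This is not the actual mmc core code;
start_next_request() and its arguments are made up for illustration, but
mrq->done and mrq->completion are the struct mmc_request members referred
to above:

#include <linux/completion.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

/* Condensed illustration of the core's async flow, not the real code. */
static void mmc_wait_done(struct mmc_request *mrq)
{
        complete(&mrq->completion);             /* kicked by mmc_request_done() */
}

static void start_next_request(struct mmc_host *host,
                               struct mmc_request *next,
                               struct mmc_request *prev)
{
        if (prev)
                wait_for_completion(&prev->completion); /* B: wait for previous */

        init_completion(&next->completion);
        next->done = mmc_wait_done;             /* init mrq->done */
        host->ops->request(host, next);         /* A/C: host queues its work and returns */
}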
Right, I don't think there is any _problem_ with using the workqueue as
you have implemented it, but I am questioning whether it's correct. Simply
because I don't see any reason why you would need a workqueue; it doesn't
solve any problem for you - it just adds overhead.
Hi Uffe,

we have two drivers under mfd, rtsx-mmc and rtsx-ms, and we use a mutex
(pcr_mutex) to protect the shared resource. When we handle an mmc request,
we need to hold the mutex until we finish the request, so that it is not
interrupted by an rtsx-ms request.
Ahh, I see. Now, _that_ explains why you want the workqueue. :-) Thanks!

If we don't use the workq, then once a request takes the mutex we have to
wait until the request finishes before releasing it, so the mmc core would
be blocked here. To implement nonblocking requests, we have to use the
workq.
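
Roughly, the scheme looks like the sketch below. It is only an
illustration: sd_request_work(), sd_do_request() and rtsx_mutex are
made-up names (the real driver takes the pcr_mutex mentioned above), and
the struct fields are the ones added in the hunk quoted earlier:

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/mmc/host.h>

static DEFINE_MUTEX(rtsx_mutex);        /* stands in for the shared pcr_mutex */

/* Stand-in for the driver's real cmd/data handling of host->mrq. */
static void sd_do_request(struct realtek_pci_sdmmc *host);

static void sd_request_work(struct work_struct *work)
{
        struct realtek_pci_sdmmc *host =
                container_of(work, struct realtek_pci_sdmmc, work);

        mutex_lock(&rtsx_mutex);        /* serialize with the rtsx-ms driver */
        sd_do_request(host);            /* process host->mrq */
        mutex_unlock(&rtsx_mutex);

        mmc_request_done(host->mmc, host->mrq); /* kick the core's completion */
}

static void sdmmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
        struct realtek_pci_sdmmc *host = mmc_priv(mmc);

        host->mrq = mrq;
        queue_work(host->workq, &host->work);   /* return to the core right away */
}

In probe, the queue and handler would be hooked up with something like
host->workq = create_workqueue(SDMMC_WORKQ_NAME) and
INIT_WORK(&host->work, sd_request_work).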
One minor suggestion below; please consider it an optimization that goes
outside the context of this patch.

There are cases when I think you should be able to skip the overhead of
scheduling the work from ->request(): the cases when the mutex is
available, which can be tested with mutex_trylock().
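
A rough sketch of that fast-path idea, reusing the hypothetical rtsx_mutex
and sd_do_request() names from the earlier sketch:

static void sdmmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
        struct realtek_pci_sdmmc *host = mmc_priv(mmc);

        host->mrq = mrq;

        if (mutex_trylock(&rtsx_mutex)) {
                /* Mutex was free: handle the request inline, no scheduling cost. */
                sd_do_request(host);
                mutex_unlock(&rtsx_mutex);
                mmc_request_done(mmc, mrq);
        } else {
                /* Mutex is held (e.g. by rtsx-ms): fall back to the work item. */
                queue_work(host->workq, &host->work);
        }
}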
Thanks for your suggestion.

We need to schedule the work every time the mmc core calls ops->request():
to handle a request we have to take the mutex and do the work anyway, so
mutex_trylock() will not help decrease the overhead. If we don't schedule
the work, ops->request() would do nothing.

Best Regards.
micky
Kind regards
Uffe

