Re: [PATCH 3/7] cifs: don't call mid_q_entry->callback under the Global_MidLock

On Fri, 20 May 2011 13:32:10 -0500
Shirish Pargaonkar <shirishpargaonkar@xxxxxxxxx> wrote:

> On Thu, May 19, 2011 at 3:22 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > Holding the spinlock while we call this function means that it can't
> > sleep, which really limits what it can do. Taking it out from under
> > the spinlock also means less contention for this global lock.
> >
> > Change the semantics such that the Global_MidLock is not held when
> > the callback is called. To do this requires that we take extra care
> > not to have sync_mid_result remove the mid from the list when the
> > mid is in a state where that has already happened. This prevents
> > list corruption when the mid is sitting on a private list for
> > reconnect or when cifsd is coming down.
> >
> > Reviewed-by: Pavel Shilovsky <piastry@xxxxxxxxxxx>
> > Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> > ---
> >  fs/cifs/cifsglob.h  |    6 +++---
> >  fs/cifs/connect.c   |   30 ++++++++++++++++++++++++------
> >  fs/cifs/transport.c |   23 ++++++++---------------
> >  3 files changed, 35 insertions(+), 24 deletions(-)
> >

[...]

> >
> >         /* mark submitted MIDs for retry and issue callback */
> > -       cFYI(1, "%s: issuing mid callbacks", __func__);
> > +       INIT_LIST_HEAD(&retry_list);
> > +       cFYI(1, "%s: moving mids to private list", __func__);
> >         spin_lock(&GlobalMid_Lock);
> >         list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
> >                 mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> >                 if (mid_entry->midState == MID_REQUEST_SUBMITTED)
> >                         mid_entry->midState = MID_RETRY_NEEDED;
> > +               list_move(&mid_entry->qhead, &retry_list);
> > +       }
> > +       spin_unlock(&GlobalMid_Lock);
> > +
> > +       cFYI(1, "%s: moving mids to private list", __func__);
> 
> Does this comment repeat here?
> 

Oops, yes. If there are no other problems with the patch, it might be
easiest to commit this as-is and I'll send a follow-up patch to fix the
duplicated message. Otherwise, I can respin if Steve prefers.

> > +       list_for_each_safe(tmp, tmp2, &retry_list) {
> > +               mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> >                 list_del_init(&mid_entry->qhead);
> 
> And if we called list_move on this entry earlier, does this entry still
> exist on the qhead list, i.e. didn't list_move already take care of the
> list_del part?
> 

list_move moves it from the pending_mid_q to the retry_list.
list_del_init removes it from the list and then makes the list_head
point to itself (i.e. turns it into an empty list).

We have to do the initial list_move with the spinlock held since another
thread could be modifying the pending_mid_q. After they all get moved
to the retry list, we can walk that list without holding the spinlock
since it's private.
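
To make the pattern concrete, here's a minimal sketch (not the actual
cifs code; my_entry, my_pending_q and my_lock are made-up stand-ins for
mid_q_entry, pending_mid_q and GlobalMid_Lock):

#include <linux/list.h>
#include <linux/spinlock.h>

/* hypothetical entry type standing in for struct mid_q_entry */
struct my_entry {
	struct list_head qhead;
	int state;
};

static DEFINE_SPINLOCK(my_lock);	/* stands in for GlobalMid_Lock */
static LIST_HEAD(my_pending_q);		/* stands in for pending_mid_q */

static void retry_pending_entries(void)
{
	struct my_entry *entry, *tmp;
	LIST_HEAD(retry_list);		/* private list, local to this thread */

	/* Under the lock: list_move() unlinks each entry from the shared
	 * list and links it onto the private list in a single step. */
	spin_lock(&my_lock);
	list_for_each_entry_safe(entry, tmp, &my_pending_q, qhead)
		list_move(&entry->qhead, &retry_list);
	spin_unlock(&my_lock);

	/* No lock needed here: retry_list is only visible to this thread.
	 * list_del_init() leaves qhead pointing at itself, so the entry is
	 * recognizably off every list before its callback runs. */
	list_for_each_entry_safe(entry, tmp, &retry_list, qhead) {
		list_del_init(&entry->qhead);
		/* ... issue the callback, which is now free to sleep ... */
	}
}

The callback can sleep in that second loop precisely because no spinlock
is held and nothing else can reach retry_list.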

-- 
Jeff Layton <jlayton@xxxxxxxxxx>

