On Fri, 20 May 2011 13:32:10 -0500 Shirish Pargaonkar <shirishpargaonkar@xxxxxxxxx> wrote:

> On Thu, May 19, 2011 at 3:22 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > Holding the spinlock while we call this function means that it can't
> > sleep, which really limits what it can do. Taking it out from under
> > the spinlock also means less contention for this global lock.
> >
> > Change the semantics such that the GlobalMid_Lock is not held when
> > the callback is called. To do this requires that we take extra care
> > not to have sync_mid_result remove the mid from the list when the
> > mid is in a state where that has already happened. This prevents
> > list corruption when the mid is sitting on a private list for
> > reconnect or when cifsd is coming down.
> >
> > Reviewed-by: Pavel Shilovsky <piastry@xxxxxxxxxxx>
> > Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> > ---
> >  fs/cifs/cifsglob.h  |    6 +++---
> >  fs/cifs/connect.c   |   30 ++++++++++++++++++++++++------
> >  fs/cifs/transport.c |   23 ++++++++---------------
> >  3 files changed, 35 insertions(+), 24 deletions(-)
> >
> > [...]
> >
> >     /* mark submitted MIDs for retry and issue callback */
> > -   cFYI(1, "%s: issuing mid callbacks", __func__);
> > +   INIT_LIST_HEAD(&retry_list);
> > +   cFYI(1, "%s: moving mids to private list", __func__);
> >     spin_lock(&GlobalMid_Lock);
> >     list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
> >         mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> >         if (mid_entry->midState == MID_REQUEST_SUBMITTED)
> >             mid_entry->midState = MID_RETRY_NEEDED;
> > +       list_move(&mid_entry->qhead, &retry_list);
> > +   }
> > +   spin_unlock(&GlobalMid_Lock);
> > +
> > +   cFYI(1, "%s: moving mids to private list", __func__);
>
> Does this comment repeat here?
>

Oops, yes. I'll plan to fix it. If there are no other problems with the
patch, it might be easiest to commit this as-is and I'll do another
patch to fix it.
Otherwise, I can respin if Steve prefers.

> > +   list_for_each_safe(tmp, tmp2, &retry_list) {
> > +       mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> >         list_del_init(&mid_entry->qhead);
>
> And if we called list_move on this entry earlier, does this entry
> exist on the qhead list anymore, i.e. did list_move not take care of
> the list_del part?
>

list_move moves it from the pending_mid_q to the retry_list.
list_del_init removes it from the list and then makes the list_head
point to itself (i.e. turns it into an empty list).

We have to do the initial list_move with the spinlock held since
another thread could be modifying the pending_mid_q. After they all
get moved to the retry list, we can walk that list without holding the
spinlock since it's private.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>