Re: Why do we use a global lock when manipulating MIDs


On Tue, May 29, 2012 at 6:03 AM, Jeff Layton <jlayton@xxxxxxxxx> wrote:
> On Mon, 28 May 2012 22:03:44 -0500
> Steve French <smfrench@xxxxxxxxx> wrote:
>
>> On Mon, May 28, 2012 at 7:30 PM, Richard Sharpe
>> <realrichardsharpe@xxxxxxxxx> wrote:
>> > Hi folks,
>> >
>> > I am reading the code because I am interested in adding SMB2.2/3.0
>> > support around multi-connect.
>> >
>> > I noticed the following code, which confuses me.
>> >
>> >        /* put it on the pending_mid_q */
>> >        spin_lock(&GlobalMid_Lock);
>> >        list_add_tail(&mid->qhead, &server->pending_mid_q);
>> >        spin_unlock(&GlobalMid_Lock);
>> >
>> >        rc = cifs_sign_smb2(iov, nvec, server, &mid->sequence_number);
>> >        if (rc)
>> >                cifs_delete_mid(mid);
>> >        *ret_mid = mid;
>> >        return rc;
>> >
>> > Since MIDs are allocated on a per-server basis and the list that the
>> > new struct mid_q_entry is placed on is in the struct TCP_Server_Info
>> > why are we using GlobalMid_Lock. It seems that we could move that
>> > spinlock into struct TCP_Server_Info.
>> >
>> > What am I missing here?
>>
>> Yes - probably could be moved to the srv_mutex in struct
>> TCP_Server_Info although it is unlikely to make a measurable
>> difference.
>>
>
> Breaking up that global lock is long overdue, but let's not overload
> the srv_mutex with that -- mutexes have much higher overhead than a
> spinlock. A new per-TCP_Server_Info spinlock would be preferable.

Well, I wanted to talk about that. Take, for example, this function:

int
cifs_call_async(struct TCP_Server_Info *server, struct kvec *iov,
                unsigned int nvec, mid_receive_t *receive,
                mid_callback_t *callback, void *cbdata, const int optype)
{
        int rc;
        struct mid_q_entry *mid;

        rc = wait_for_free_request(server, optype);
        if (rc)
                return rc;

        mutex_lock(&server->srv_mutex);
        rc = setup_async_request(server, iov, nvec, &mid);
        if (rc) {
                mutex_unlock(&server->srv_mutex);
                add_credits(server, 1, optype);
                wake_up(&server->request_q);
                return rc;
        }

        mid->receive = receive;
        mid->callback = callback;
        mid->callback_data = cbdata;
        mid->mid_state = MID_REQUEST_SUBMITTED;

        cifs_in_send_inc(server);
        rc = smb_sendv(server, iov, nvec);
        cifs_in_send_dec(server);
        cifs_save_when_sent(mid);
        mutex_unlock(&server->srv_mutex);

        if (rc)
                goto out_err;

        return rc;
out_err:
        cifs_delete_mid(mid);
        add_credits(server, 1, optype);
        wake_up(&server->request_q);
        return rc;
}

We already hold the server mutex and by shifting the MID-number lock
into the server structure we could eliminate that lock altogether. I
am sure that we could make the case that MIDs can only be allocated if
we are holding the server mutex. (Although there is another issue
here: the spec says that the PID/MID pair must be unique, not that
the MID alone must be. However, I don't think that is a big problem
here; it would only matter on a big Linux server with a large number
of users accessing a CIFS share.)

The coding in the error paths here also seems quite contrived to
me, for a number of reasons.

I would have expected to see the mutex dropped in the error path,
just before the return of rc.

(There is the additional problem that cifs_delete_mid is called on
error paths inside the functions being called as well, which further
complicates the error handling here, but I would prefer to see all
errors handled by branching to a single point at the end of the
function.) Using a consistent idiom helps prevent errors from
creeping in.

-- 
Regards,
Richard Sharpe
(何以解憂?唯有杜康。--曹操)
--
To unsubscribe from this list: send the line "unsubscribe linux-cifs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

