RE: [PATCH v2 40/47] nfsd41: cb_sequence callback

> -----Original Message-----
> From: J. Bruce Fields [mailto:bfields@xxxxxxxxxxxx]
> Sent: Thursday, April 02, 2009 1:54 PM
> To: Benny Halevy
> Cc: linux-nfs@xxxxxxxxxxxxxxx; pnfs@xxxxxxxxxxxxx; Labiaga, Ricardo
> Subject: Re: [PATCH v2 40/47] nfsd41: cb_sequence callback
> 
> On Thu, Apr 02, 2009 at 10:34:55PM +0300, Benny Halevy wrote:
> > > On Apr. 02, 2009, 22:27 +0300, "J. Bruce Fields"
> > > <bfields@xxxxxxxxxxxx> wrote:
> > > On Thu, Apr 02, 2009 at 10:20:08PM +0300, Benny Halevy wrote:
> > >> On Apr. 02, 2009, 21:51 +0300, "J. Bruce Fields"
> > >> <bfields@xxxxxxxxxxxx> wrote:
> > >>> On Thu, Apr 02, 2009 at 11:47:24AM +0300, Benny Halevy wrote:
> > >>>> On Apr. 01, 2009, 7:39 +0300, "J. Bruce Fields"
> > >>>> <bfields@xxxxxxxxxxxx> wrote:
> > >>>>>> @@ -503,6 +589,38 @@ nfsd4_probe_callback(struct nfs4_client *clp)
> > >>>>>>  	return;
> > >>>>>>  }
> > >>>>>>
> > >>>>>> +#if defined(CONFIG_NFSD_V4_1)
> > >>>>>> +/*
> > >>>>>> + * FIXME: cb_sequence should support referring call lists, cachethis, and
> > >>>>>> + * multiple slots
> > >>>>>> + */
> > >>>>>> +static int
> > >>>>>> +nfs41_cb_sequence_setup(struct nfs4_client *clp, struct nfsd4_cb_sequence *args)
> > >>>>>> +{
> > >>>>>> +	u32 *ptr = (u32 *)clp->cl_sessionid.data;
> > >>>>>> +
> > >>>>>> +	dprintk("%s: %u:%u:%u:%u\n", __func__,
> > >>>>>> +		ptr[0], ptr[1], ptr[2], ptr[3]);
> > >>>>>> +
> > >>>>>> +	mutex_lock(&clp->cl_cb_mutex);
> > >>>>> We shouldn't be holding a mutex across a callback.  Why is
> > >>>>> this needed?
> > >>>> Just a simple way to limit concurrency to 1 and not deal with
> > >>>> multiple slots on the callback path.
> > >>> OK.  We should work out some sort of asynchronous equivalent.
> > >> OK.
> > >>
> > >>> How is the client side currently waiting on slots?
> > >> Currently it has a single slot and a single
> > >> (NFS41_BC_MIN_CALLBACKS) preallocated rpc_rqst.
> > >> If it gets more calls than that, concurrently, it will
> > >> drop its end of the line using
> > >> set_bit(XPRT_CLOSE_WAIT, &xprt->state)
> > >>
> > >> (Ricardo, please keep me honest :)
> > >
> > > I was wondering about the rpc client as used on the nfs client,
> > > not the
> >
> > you mean rpc client as used on the nfs server? :)
> 
> I meant the rpc client on the nfs client: when the nfs client sends an
> rpc on the forechannel it has to wait for a session slot.  How does it
> do it (is it putting the rpc task on some queue?), and can the
> callback client (on the nfs server) do the same thing?
> 

Yes, the NFS client uses a slot table for the forechannel ('struct
nfs4_slot_table') and another slot table for the backchannel.  Tasks
sleep on an rpc_wait_queue if no slot is available.  Used/unused slots
are tracked with a bitmap.  When a reply is received on a slot, the
next waiting task is awakened.
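
For reference, here's a rough sketch of that scheme.  The names
(my_slot_table, my_slot_get, my_slot_put, MY_MAX_SLOTS) are made up
for illustration and don't match the actual nfs4_slot_table code; the
point is the rpc_sleep_on()-under-the-table-lock pattern:

#include <linux/bitops.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/sunrpc/sched.h>

#define MY_MAX_SLOTS 16

struct my_slot_table {
	spinlock_t		lock;
	unsigned long		used[BITS_TO_LONGS(MY_MAX_SLOTS)];
	u32			max_slots;
	struct rpc_wait_queue	waitq;	/* tasks waiting for a slot */
};

static void my_slot_table_init(struct my_slot_table *tbl, u32 max_slots)
{
	spin_lock_init(&tbl->lock);
	memset(tbl->used, 0, sizeof(tbl->used));
	tbl->max_slots = max_slots;
	rpc_init_wait_queue(&tbl->waitq, "slot_table");
}

/*
 * Claim a free slot.  If none is free, park the rpc_task on the wait
 * queue instead of blocking a thread and return -EAGAIN; the task is
 * rerun when my_slot_put() wakes it.
 */
static int my_slot_get(struct my_slot_table *tbl, struct rpc_task *task,
		       u32 *slotid)
{
	spin_lock(&tbl->lock);
	*slotid = find_first_zero_bit(tbl->used, tbl->max_slots);
	if (*slotid >= tbl->max_slots) {
		rpc_sleep_on(&tbl->waitq, task, NULL);
		spin_unlock(&tbl->lock);
		return -EAGAIN;
	}
	__set_bit(*slotid, tbl->used);
	spin_unlock(&tbl->lock);
	return 0;
}

/* On reply: release the slot and wake the next waiting task, if any. */
static void my_slot_put(struct my_slot_table *tbl, u32 slotid)
{
	spin_lock(&tbl->lock);
	__clear_bit(slotid, tbl->used);
	rpc_wake_up_next(&tbl->waitq);
	spin_unlock(&tbl->lock);
}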

Yes, the callback client can certainly do the same thing.  Today, the
Linux client backchannel only advertises a single slot (need to check
what Solaris does).  So against Linux, having more than one slot doesn't
buy the server much right now.
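
If the server went that way, the mutex in nfs41_cb_sequence_setup
could be replaced by queueing the callback rpc_task on such a table.
Very roughly (cl_cb_slot_tbl and cbs_slotid are hypothetical fields,
just to show the shape):

static int nfs41_cb_sequence_setup_async(struct nfs4_client *clp,
					 struct rpc_task *task,
					 struct nfsd4_cb_sequence *args)
{
	u32 slotid;

	/* Queue on the backchannel slot table instead of taking a
	 * mutex; rpc_sleep_on() parks the task, and my_slot_put(),
	 * called when the callback reply comes in, reruns it. */
	if (my_slot_get(&clp->cl_cb_slot_tbl, task, &slotid) == -EAGAIN)
		return -EAGAIN;
	args->cbs_slotid = slotid;
	return 0;
}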

Is this something that can be addressed as an enhancement later on, or
do you need this implemented right away?

- ricardo
