On Wed, 2013-04-24 at 11:00 -0400, J. Bruce Fields wrote:
> On Thu, Apr 18, 2013 at 10:25:49AM -0700, Chuck Lever wrote:
> > 
> > On Apr 18, 2013, at 10:14 AM, "J. Bruce Fields" <bfields@xxxxxxxxxx> wrote:
> > 
> > > On Thu, Apr 18, 2013 at 05:07:03PM +0000, Myklebust, Trond wrote:
> > >> On Thu, 2013-04-18 at 13:00 -0400, J. Bruce Fields wrote:
> > >>> On Mon, Apr 15, 2013 at 03:35:04PM -0400, J. Bruce Fields wrote:
> > >>>> From: "J. Bruce Fields" <bfields@xxxxxxxxxx>
> > >>>> 
> > >>>> In the gss-proxy case we don't want to have to reconnect at random--we
> > >>>> want to connect only on gss-proxy startup, when we can steal gss-proxy's
> > >>>> context to do the connect in the right namespace.
> > >>>> 
> > >>>> So, provide a flag that allows the rpc_create caller to turn off the
> > >>>> idle timeout.
> > >>> 
> > >>> Chuck, the basic idea was your suggestion; does the execution look OK
> > >>> here? I had to copy the rpc_create flags down to the xprt_create, I
> > >>> don't know if that's reasonable.
> > >> 
> > >> This patch will conflict with commit
> > >> b7993cebb841b0da7a33e9d5ce301a9fd3209165 ("SUNRPC: Allow rpc_create() to
> > >> request that TCP slots be unlimited") that was posted on this list
> > >> earlier this week.
> > > 
> > > Oh, sorry, I missed that.
> > > 
> > > Presumably then I should just work on top of that and do the same
> > > thing--define a pair of flags
> > > {RPC_CLNT_CREATE|XPRT_CREATE}_NO_IDLE_TIMEOUT and translate between the
> > > two in rpc_create.
> > 
> > Agree.
> 
> The result (untested) looks like this.
> 
> If this is OK--Trond, do you mind if I merge this commit (or
> nfs-for-next) into my tree, and then the rest of the gss-proxy patches
> on top?
> 
> Or is the nfs-for-next branch still potentially subject to rewriting?

nfs-for-next is stable, so it should be safe to pull into your nfsd tree.
-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@xxxxxxxxxx
www.netapp.com