On 11/11/13 13:53, Myklebust, Trond wrote:
>
> On Nov 11, 2013, at 13:43, Steve Dickson <SteveD@xxxxxxxxxx> wrote:
>
>>
>>
>> On 11/11/13 13:25, Myklebust, Trond wrote:
>>>
>>> On Nov 11, 2013, at 13:06, Steve Dickson <SteveD@xxxxxxxxxx> wrote:
>>>
>>>>
>>>>
>>>> On 09/11/13 18:12, Myklebust, Trond wrote:
>>>>> One alternative to the above scheme, which I believe that I’ve
>>>>> suggested before, is to have a permanent entry in rpc_pipefs
>>>>> that rpc.gssd can open and that the kernel can use to detect
>>>>> that it is running. If we make it /var/lib/nfs/rpc_pipefs/gssd/clnt00/gssd,
>>>>> then AFAICS we don’t need to change nfs-utils at all, since all newer
>>>>> versions of rpc.gssd will try to open for read anything of the form
>>>>> /var/lib/nfs/rpc_pipefs/*/clntXX/gssd...
>>>>
>>>> After further review I am going to have to disagree with you on this.
>>>> Since all the context is cached on the initial mount the kernel
>>>
>>> What context?
>> The krb5 blob that the kernel is calling up to rpc.gssd to get... Maybe
>> I'm using the wrong terminology?
>
> That’s only the machine cred. User credentials get allocated and freed all the time.
>
> When the server reboots, then all GSS contexts need to be re-established,
> which can be a lot of call_usermodehelper() upcalls; that’s one of the
> reasons why we decided in favour of a gssd daemon in the first place.

Just curious... Why are call_usermodehelper() upcalls more expensive than
the rpc_pipefs upcalls?

steved.
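
For context, the "open for read anything of the form /var/lib/nfs/rpc_pipefs/*/clntXX/gssd"
behaviour Trond mentions amounts to roughly the following user-space sketch. This is
not the nfs-utils code; the open flags, error handling and the placement of the pipes
in a poll loop are assumptions made purely for illustration:

/*
 * Illustrative sketch only (not rpc.gssd itself): walk rpc_pipefs,
 * look for clntXX directories, and open their "gssd" pipe for read.
 * The kernel can treat the open pipe as evidence that a daemon is
 * listening for upcalls.
 */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>
#include <limits.h>

#define PIPEFS_ROOT "/var/lib/nfs/rpc_pipefs"

static void scan_topdir(const char *topdir)
{
	char path[PATH_MAX];
	DIR *dir;
	struct dirent *d;

	snprintf(path, sizeof(path), "%s/%s", PIPEFS_ROOT, topdir);
	dir = opendir(path);
	if (!dir)
		return;

	while ((d = readdir(dir)) != NULL) {
		char pipe_path[PATH_MAX];
		int fd;

		/* Only clntXX directories are of interest. */
		if (strncmp(d->d_name, "clnt", 4) != 0)
			continue;

		snprintf(pipe_path, sizeof(pipe_path), "%s/%s/gssd",
			 path, d->d_name);

		/* Open the upcall pipe for read. */
		fd = open(pipe_path, O_RDONLY | O_NONBLOCK);
		if (fd < 0)
			continue;

		printf("listening on %s (fd %d)\n", pipe_path, fd);
		/* A real daemon would add fd to its poll/epoll loop
		 * and service upcalls from it; this sketch just
		 * closes it again. */
		close(fd);
	}
	closedir(dir);
}

int main(void)
{
	DIR *root = opendir(PIPEFS_ROOT);
	struct dirent *d;

	if (!root) {
		perror(PIPEFS_ROOT);
		return 1;
	}
	while ((d = readdir(root)) != NULL) {
		if (d->d_name[0] == '.')
			continue;
		scan_topdir(d->d_name);
	}
	closedir(root);
	return 0;
}

The relevant design point, as far as the cost question goes: a daemon holding these
pipes open services each upcall with a read/write on an already-open descriptor,
whereas call_usermodehelper() forks and execs a fresh user-space helper for every
request, and each helper presumably has to re-initialize (credentials, libraries)
from scratch before it can answer.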