On 12/11/13 11:46, Chuck Lever wrote:
> 
> On Nov 12, 2013, at 11:24 AM, Steve Dickson <SteveD@xxxxxxxxxx> wrote:
> 
>> 
>> 
>> On 12/11/13 11:09, Chuck Lever wrote:
>>>> In the past, if admins wanted rpc.gssd in the mount path, they had to configure it.
>>>>> Now we are silently adding, yet another, daemon to the mount path, and if
>>>>> rpc.gssd starts falling on its face, I think it will be difficult to debug,
>>>>> since the daemon is not expected to be there...
>>> Our only real choice here is to fix gssd. Anything else is punting the problem down the road.
>>> 
>> No. The last time a daemon was involved in all NFS client mounts
>> (at least that I can remember) was when lockd was a user-level daemon.
>> The main reason it was ported to the kernel was to get rid of the
>> bottleneck it caused... Now we are adding a similar bottleneck back??
>> 
>> Architecturally, putting a daemon in the direct NFS mount path just does
>> not make sense... IMHO...
> 
> Don't be ridiculous. rpc.gssd is ALREADY in the direct mount path for all Kerberos mounts, and has been for years.

The key words being "Kerberos mounts"....

> 
> Forget lease management security for a moment, and consider this: There is no possibility of moving forward with a secure NFS solution on Linux if we can't depend on rpc.gssd. Therefore, our only real choice, if we want Kerberos to be a first-class NFS feature on Linux, is to make sure rpc.gssd works reliably.
> 
> Last I checked, we are making a robust effort to harden Kerberos support for NFS. So I don't see any contradiction here.
> 
> Now, specifically regarding when rpc.gssd is invoked for lease management security: it is invoked the first time each new server is contacted. If you mount the same server many times, there should be just one upcall.
> 
> And, if auth_rpcgss.ko is not loaded, there will be no upcall. Ever.

Perfect!

steved.
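
For context on that last point, a rough way to confirm on a given client whether the auth_rpcgss module is loaded (and therefore whether a gssd upcall is even possible) is to scan /proc/modules for its name. This is only a minimal sketch added for illustration, not something from the thread; it assumes the module appears under the name "auth_rpcgss" in /proc/modules.

/*
 * Minimal sketch (not from the original thread): report whether
 * auth_rpcgss is currently loaded by scanning /proc/modules.
 * Per the message above, if auth_rpcgss.ko is not loaded there
 * will be no rpc.gssd upcall at all.
 *
 * Assumption: the module registers itself as "auth_rpcgss".
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *fp = fopen("/proc/modules", "r");
	char line[256];

	if (!fp) {
		perror("fopen /proc/modules");
		return 2;
	}

	while (fgets(line, sizeof(line), fp)) {
		/* Each /proc/modules line starts with the module name followed by a space. */
		if (strncmp(line, "auth_rpcgss ", strlen("auth_rpcgss ")) == 0) {
			printf("auth_rpcgss is loaded; gssd upcalls are possible\n");
			fclose(fp);
			return 0;
		}
	}

	printf("auth_rpcgss is not loaded; no gssd upcall will occur\n");
	fclose(fp);
	return 1;
}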