On Thu, 2019-04-25 at 12:45 -0400, bfields@xxxxxxxxxxxx wrote:
> On Thu, Apr 25, 2019 at 04:40:18PM +0000, Trond Myklebust wrote:
> > On Thu, 2019-04-25 at 11:33 -0400, bfields@xxxxxxxxxxxx wrote:
> > > On Thu, Apr 25, 2019 at 03:00:22PM +0000, Trond Myklebust wrote:
> > > > The assumption is that if you have enough privileges to mount a
> > > > filesystem using the NFS client, then you would also have enough
> > > > privileges to run a userspace client, so there is little point in
> > > > restricting the NFS client.
> > > >
> > > > So the guiding principle is that an NFS client mount that is
> > > > started in a container should behave as if it were started by a
> > > > process in a "real VM". That means that the root uid/gid in the
> > > > container maps to a root uid/gid on the wire.
> > > > Ditto, if there is a need to run the idmapper in the container,
> > > > then the expectation is that processes running as 'user' with
> > > > uid 'u' will see their creds mapped correctly by the idmapper.
> > > > Again, that's what you would see if the process were running in
> > > > a VM instead of a container.
> > > >
> > > > Does that all make sense?
> > >
> > > Yes, thanks!
> > >
> > > I thought there was a problem that the idmapper depended on
> > > keyring usermodehelper calls that it was hard to pass namespace
> > > information to. Did that get fixed and I missed it or forgot?
> >
> > No, you are correct, and so we assume the container is using
> > rpc.idmapd (and rpc.gssd) if it is running NFSv4 with RPCSEC_GSS.
> >
> > If the keyring upcall gets fixed, then we may allow it to be used
> > in future kernels.
>
> OK, got it. Is there anything we lose by using idmapd? (IDMAP_NAMESZ
> might be a limitation?)

IDMAP_NAMESZ should be set to 128 by default, so it should work for
most cases. I don't think there are any further limitations.

-- 
Trond Myklebust
Linux NFS client maintainer, Hammerspace
trond.myklebust@xxxxxxxxxxxxxxx
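
For context on where the 128-byte limit enters the picture: the idmapd
upcall carries names in a fixed-size buffer inside the idmap message.
Below is a minimal sketch, assuming the layout in
include/uapi/linux/nfs_idmap.h; the field names and sizes should be
verified against the actual kernel headers.

/* Sketch of the NFSv4 idmapper upcall message.  Layout assumed to
 * match include/uapi/linux/nfs_idmap.h -- check your kernel tree. */
#include <linux/types.h>

#define IDMAP_NAMESZ 128                /* fixed name buffer, including NUL */

struct idmap_msg {
	__u8	im_type;                /* user or group mapping */
	__u8	im_conv;                /* direction: id-to-name or name-to-id */
	char	im_name[IDMAP_NAMESZ];  /* user/group name, at most 127 chars + NUL */
	__u32	im_id;                  /* uid or gid */
	__u8	im_status;              /* result reported back to the kernel */
};

In other words, the practical restriction is that user and group names
longer than IDMAP_NAMESZ - 1 characters cannot be passed through the
rpc.idmapd pipe.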