On Thu, Jul 04, 2024 at 02:31:46PM -0400, Mike Snitzer wrote:
> Some new layout misses the entire point of having localio work for
> NFSv3 and NFSv4.  NFSv3 is very ubiquitous.

I'm getting tired of this "oh, NFSv3" argument being brought up again
and again without any explanation of why it matters for communication
inside the same Linux kernel instance, on a kernel that obviously
requires patching anyway.  Why is running an obsolete protocol inside
the same OS instance required?  Maybe it is, but if so it needs a very
good explanation.

> And in this localio series, flexfiles is trained to use localio.
> (Which you apparently don't recognize or care about because nfsd
> doesn't have flexfiles server support).

And you fail to explain why it matters.  You are trying to sell this
code, so you had better have an explanation for why it is complicated
and convoluted as hell.  So far we are running in circles, and there
has been no clear explanation of the use cases.

> > > Can the client use its localio access to bypass that since it's not
> > > going across the network anymore? Maybe by using open_by_handle_at on
> > > the NFS share on a guessed filehandle? I think we need to ensure that
> > > that isn't possible.
> >
> > If a file system is shared by containers and users in containers have
> > the capability to use open_by_handle_at the security model is already
> > broken without NFS or localio involved.
>
> Containers deployed by things like podman.io and kubernetes are
> perfectly happy to allow containers permission to drive knfsd threads
> in the host kernel.  That this is foreign to you is odd.
>
> An NFS client that happens to be on the host should work perfectly
> fine too (if it has adequate permissions).

Can you please stop the personal attacks?  I am just stating the fact
that IF the containers using the NFS mount have access to the exported
file systems and the privileges to use open by handle, there is nothing
nfsd can do about security, as the container has full access to the
file system anyway (see the sketch at the end of this mail).  That is a
fact, and how you deploy the various containers is completely
irrelevant.  And in case you didn't notice it last time: this is about
the _client_ containers, as stated by me and by the original poster I
replied to.

> > > I wonder if it's also worthwhile to gate localio access on an export
> > > option, just out of an abundance of caution.
> >
> > export and mount option.  We're speaking a non-standard side band
> > protocol here, there is no way that should be done without explicit
> > opt-in from both sides.
>
> That is already provided by existing controls.  With both Kconfig
> options that default to N, and the ability to disable the use of
> localio entirely even if enabled in the Kconfig:
>
>   echo N > /sys/module/nfs/parameters/localio_enabled

And all of that is global and not per-mount or per-nfsd-instance, which
doesn't exactly scale to a multi-tenant container hosting setup.
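
To make the open by handle point above concrete, here is a minimal,
illustrative userspace sketch (not part of the series).  For the demo
it uses name_to_handle_at to obtain a handle; a hostile container would
simply construct or guess the opaque handle bytes instead.  With
CAP_DAC_READ_SEARCH and an fd on the exported file system, the open
succeeds with no NFS or localio in the path at all:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct file_handle *fh;
	int mount_fd, mount_id, fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <mountpoint> <path>\n", argv[0]);
		return 1;
	}

	fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
	if (!fh)
		return 1;
	fh->handle_bytes = MAX_HANDLE_SZ;

	/*
	 * Demo only: derive the handle from a visible path.  An attacker
	 * would construct or guess the opaque handle bytes instead.
	 */
	if (name_to_handle_at(AT_FDCWD, argv[2], fh, &mount_id, 0) < 0) {
		perror("name_to_handle_at");
		return 1;
	}

	mount_fd = open(argv[1], O_RDONLY | O_DIRECTORY);
	if (mount_fd < 0) {
		perror("open mountpoint");
		return 1;
	}

	/*
	 * Needs CAP_DAC_READ_SEARCH.  If this succeeds, the container has
	 * raw access to the file system and nothing nfsd or the NFS client
	 * enforces matters anymore.
	 */
	fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
	if (fd < 0) {
		perror("open_by_handle_at");
		return 1;
	}
	printf("opened %s by handle\n", argv[2]);
	close(fd);
	return 0;
}

Build it with gcc and run it against the backing file system from
inside one of those client containers: if it succeeds, the isolation
was already gone before NFS or localio entered the picture.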