Re: [PATCH] nfs.man: document requirements for NFS mounts in a container

On Thu, 10 Mar 2022, Chuck Lever III wrote:
> 
> > On Mar 7, 2022, at 7:44 PM, NeilBrown <neilb@xxxxxxx> wrote:
> > 
> > On Fri, 04 Mar 2022, Chuck Lever III wrote:
> >> 
> >> 2. NAT
> >> 
> >> NAT hasn't been mentioned before, but it is a common
> >> deployment scenario where multiple clients can have the
> >> same hostname and local IP address (a private address such
> >> as 192.168.0.55) but the clients all access the same NFS
> >> server.
> > 
> > I can't see how NAT is relevant.  Whether or not clients have the same
> > host name seems to be independent of whether or not they access the
> > server through NAT.
> > What am I missing?
> 
> The usual construction of Linux's nfs_client_id4 includes
> the hostname and client IP address. If two clients behind
> two independent NAT boxes happen to use the same private
> IP address and the same hostname (for example
> "localhost.localdomain" is a common misconfiguration) then
> both of these clients present the same nfs_client_id4
> string to the NFS server.
> 
> Hilarity ensues.

This would only apply to NFSv4.0 (and without migration enabled).
NFSv4.1 and later don't include the IP address in the client identity.

So I think the scenario you describe is primarily a problem of the
hostname being misconfigured.  In NFSv4.0 the normal variation in
client IP addresses can hide that problem.  If NAT is used in such a
way that two clients are configured with the same IP address, that
defeats the hiding.

I don't think the extra complexity of NAT really makes this more
interesting.  The problem is uniform hostnames, and the fix is the same
as for any other case of uniform hostnames.

> 
> 
> >> The client's identifier needs to be persistent so that:
> >> 
> >> 1. If the server reboots, it can recognize when clients
> >>   are re-establishing their lock and open state versus
> >>   an unfamiliar client creating lock and open state that might
> >>   involve files that an existing client has open.
> > 
> > The protocol requires clients which are re-establishing state to
> > explicitly say "I am re-establishing state" (e.g. CLAIM_PREVIOUS).
> > Clients which are creating new state don't make that claim.
> > 
> > If the server maintains persistent state, then the rebooted server
> > needs to use the client identifier to find the persistent state, but
> > that is not importantly different from the more common situation of a
> > server which hasn't rebooted and needs to find the appropriate state.
> > 
> > Again - what am I missing?
> 
> The server records each client's nfs_client_id4 and its
> boot verifier.
> 
> It's my understanding that the server is required to reject
> CLAIM_PREVIOUS opens if it does not recognize either the
> nfs_client_id4 string or its boot verifier, since that
> means that the client had no previous state during the most
> recent server epoch.

I think we are saying the same thing with different words.
When you wrote

    If the server reboots, it can recognize when clients
    are re-establishing their lock and open state 

I think that "validate" is more relevant than "recognize".  The server
knows from the request that an attempt is being made to reestablish
state.  The client identity, credential, and boot verifier are used
to validate that request.

But essentially we are on the same page here.

Thanks,
NeilBrown


