Re: [PATCH] nfs.man: document requirements for NFS mounts in a container

On Wed, 02 Mar 2022, Chuck Lever III wrote:
> 
> > On Feb 28, 2022, at 10:43 PM, NeilBrown <neilb@xxxxxxx> wrote:
> > 
> > 
> > When mounting NFS filesystems in a network namespace using v4, some care
> > must be taken to ensure a unique and stable client identity.
> > Add documentation explaining the requirements for container managers.
> > 
> > Signed-off-by: NeilBrown <neilb@xxxxxxx>
> > ---
> > 
> > NOTE I originally suggested using uuidgen to generate a uuid from a
> > container name.  I've changed it to use the name as-is because I cannot
> > see a justification for using a uuid - though I think that was suggested
> > somewhere in the discussion.
> > If someone would like to provide that justification, I'm happy to
> > include it in the document.
> > 
> > Thanks,
> > NeilBrown
> > 
> > 
> > utils/mount/nfs.man | 63 +++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 63 insertions(+)
> > 
> > diff --git a/utils/mount/nfs.man b/utils/mount/nfs.man
> > index d9f34df36b42..4ab76fb2df91 100644
> > --- a/utils/mount/nfs.man
> > +++ b/utils/mount/nfs.man
> > @@ -1844,6 +1844,69 @@ export pathname, but not both, during a remount.  For example,
> > merges the mount option
> > .B ro
> > with the mount options already saved on disk for the NFS server mounted at /mnt.
> > +.SH "NFS IN A CONTAINER"
> 
> To be clear, this explanation is about the operation of the
> Linux NFS client in a container environment. The server has
> different needs that do not appear to be addressed here.
> The section title should be clear that this information
> pertains to the client.

The whole man page is only about the client, but I agree that clarity is
best.  I've changed the section heading to

    NFS MOUNTS IN A CONTAINER

> 
> 
> > +When NFS is used to mount filesystems in a container, and specifically
> > +in a separate network name-space, these mounts are treated as quite
> > +separate from any mounts in a different container or not in a
> > +container (i.e. in a different network name-space).
> 
> It might be helpful to provide an introductory explanation of
> how mount works in general in a namespaced environment. There
> might already be one somewhere. The above text needs to be
> clear that we are not discussing the mount namespace.

Mount namespaces are completely irrelevant for this discussion.
This is "specifically" about "network name-spaces" as I wrote.
Do I need to say more than that?
Maybe a sentence "Mount namespaces are not relevant" ??

> 
> 
> > +.P
> > +In the NFSv4 protocol, each client must have a unique identifier.
> 
> ... each client must have a persistent and globally unique
> identifier.

I dispute "globally".  The id only needs to be unique among clients of
a given NFS server.
I also dispute "persistent" in the context of "must".
Unless I'm missing something, a lack of persistence only matters when a
client stops while still holding state, and then restarts within the
lease period.  It will then be prevented from establishing conflicting
state until the lease period ends.  So persistence is good, but is not a
hard requirement.  Uniqueness IS a hard requirement among concurrent
clients of the one server.

> 
> 
> > +This is used by the server to determine when a client has restarted,
> > +allowing any state from a previous instance to be discarded.
> 
> Lots of passive voice here :-)
> 
> The server associates a lease with the client's identifier
> and a boot instance verifier. The server attaches all of
> the client's file open and lock state to that lease, which
> it preserves until the client's boot verifier changes.

I guess I'm a passivist.  If we are going for that level of detail we
need to mention lease expiry too.

 .... it preserves until the lease time passes without any renewal from
      the client, or the client's boot verifier changes.

In another email you add:

> Oh and also, this might be a good opportunity to explain
> how the server requires that the client use not only the
> same identifier string, but also the same principal to
> reattach itself to its open and lock state after a server
> reboot.
> 
> This is why the Linux NFS client attempts to use Kerberos
> whenever it can for this purpose. Using AUTH_SYS invites
> another client that happens to have the same identifier
> to trigger the server to purge that client's open and lock
> state.

How relevant is this to the context of a container?
How much extra context would we need to add to make the mention of
credentials coherent?
Maybe we should add another section about credentials, and add it just
before this one??

> 
> 
> > So any two
> > +concurrent clients that might access the same server MUST have
> > +different identifiers, and any two consecutive instances of the same
> > +client SHOULD have the same identifier.
> 
> Capitalized MUST and SHOULD have specific meanings in IETF
> standards that are probably not obvious to average readers
> of man pages. To average readers, this looks like shouting.
> Can you use something a little friendlier?
> 

How about:

   Any two concurrent clients that might access the same server must
   have different identifiers for correct operation, and any two
   consecutive instances of the same client should have the same
   identifier for optimal handling of an unclean restart.

> 
> > +.P
> > +Linux constructs the identifier (referred to as 
> > +.B co_ownerid
> > +in the NFS specifications) from various pieces of information, three of
> > +which can be controlled by the sysadmin:
> > +.TP
> > +Hostname
> > +The hostname can be different in different containers if they
> > +have different "UTS" name-spaces.  If the container system ensures
> > +each container sees a unique host name,
> 
> Actually, it turns out that is a pretty big "if". We've
> found that our cloud customers are not careful about
> setting unique hostnames. That's exactly why the whole
> uniquifier thing is so critical!

:-)  I guess we keep it as "if" though, not "IF" ....

> 
> 
> > then this is
> > +sufficient for a correctly functioning NFS identifier.
> > +The host name is copied when the first NFS filesystem is mounted in
> > +a given network name-space.  Any subsequent change in the apparent
> > +hostname will not change the NFSv4 identifier.
> 
> The purpose of using a uuid here is that, given its
> definition in RFC 4122, it has very strong global
> uniqueness guarantees.

A uuid generated from a given string (uuidgen -N $name ...) has the same
uniqueness as the $name.  Turning it into a uuid doesn't improve the
uniqueness.  It just provides a standard format and obfuscates the
original.  Neither of those seem necessary here.
I think Ben is considering using /etc/machine-id.  Creating a uuid from
that does not make it any better.

> 
> Using a UUID makes hostname uniqueness irrelevant.

Only if the UUID is created appropriately.  If, for example, it is
created with -N from some name that is unique on the host, then it needs
to be combined with the hostname to get sufficient uniqueness.

> 
> Again, I think our goal should be hiding all of this
> detail from administrators, because once we get this
> mechanism working correctly, there is absolutely no
> need for administrators to bother with it.

Except when things break.  Then admins will appreciate having the
details so they can track down the breakage.  My desktop didn't boot
this morning.  Systemd didn't tell me why it was hanging though I
eventually discovered that it was "wicked.service" that wasn't reporting
success.  So I'm currently very focused on the need to provide clarity
to sysadmins, even of "irrelevant" details.

But this documentation isn't just for sysadmins, it is for container
developers too, so they can find out how to make their container work
with NFS.

> 
> 
> The remaining part of this text probably should be
> part of the man page for Ben's tool, or whatever is
> coming next.

My position is that there is no need for any tool.  The total amount of
code needed is a couple of lines as presented in the text below.  Why
provide a wrapper just for that?
We *cannot* automatically decide how to find a name or where to store a
generated uuid, so there is no added value that a tool could provide.

We cannot unilaterally fix container systems.  We can only tell people
who build these systems of the requirements for NFS.

Thanks,
NeilBrown

> 
> 
> > +.TP
> > +.B nfs.nfs4_unique_id
> > +This module parameter is the same for all containers on a given host
> > +so it is not useful to differentiate between containers.
> > +.TP
> > +.B /sys/fs/nfs/client/net/identifier
> > +This virtual file (available since Linux 5.3) is local to the network
> > +name-space in which it is accessed and so can provide uniqueness between
> > +containers when the hostname is uniform among containers.
> > +.RS
> > +.PP
> > +This value is empty on name-space creation.
> > +If the value is to be set, that should be done before the first
> > +mount (much as the hostname is copied before the first mount).
> > +If the container system has access to some sort of per-container
> > +identity, then a command like
> > +.RS 4
> > +echo "$CONTAINER_IDENTITY" \\
> > +.br
> > +   > /sys/fs/nfs/client/net/identifier 
> > +.RE
> > +might be suitable.  If the container system provides no stable name,
> > +but does have stable storage, then something like
> > +.RS 4
> > +[ -s /etc/nfsv4-uuid ] || uuidgen > /etc/nfsv4-uuid && 
> > +.br
> > +cat /etc/nfsv4-uuid > /sys/fs/nfs/client/net/identifier 
> > +.RE
> > +would suffice.
> > +.PP
> > +If a container has neither a stable name nor stable (local) storage,
> > +then it is not possible to provide a stable identifier, so providing
> > +a random one to ensure uniqueness would be best:
> > +.RS 4
> > +uuidgen > /sys/fs/nfs/client/net/identifier
> > +.RE
> > +.RE
> > .SH FILES
> > .TP 1.5i
> > .I /etc/fstab
> > -- 
> > 2.35.1
> > 
> 
> --
> Chuck Lever
> 
> 
> 
> 



