Re: Grace period


10.04.2012 22:28, Jeff Layton wrote:
On Tue, 10 Apr 2012 19:36:26 +0400
Stanislav Kinsbursky <skinsbursky@xxxxxxxxxxxxx> wrote:

10.04.2012 17:39, bfields@xxxxxxxxxxxx wrote:
On Tue, Apr 10, 2012 at 02:56:12PM +0400, Stanislav Kinsbursky wrote:
09.04.2012 22:11, bfields@xxxxxxxxxxxx wrote:
Since NFSv4 doesn't have a separate MOUNT protocol, clients need to be
able to do readdir's and lookups to get to exported filesystems.  We
support this in the Linux server by exporting all the filesystems from
"/" on down that must be traversed to reach a given filesystem.  These
exports are very restricted (e.g. only parents of exports are visible).
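
For illustration (the paths here are hypothetical), a single real export
in /etc/exports such as:

    /export/vol1    192.168.1.0/24(rw,sync)

also causes "/" and "/export" to be published as restricted, read-only
pseudo-filesystem entries, so that a v4 client mounting "server:/" can
walk down to the real export.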


Ok, thanks for the explanation.
So this pseudoroot looks like part of the NFS server's internal
implementation, not part of the standard. That's good.

Why does it prevent implementing a check for the "superblock-network
namespace" pair on NFS server start, and forbidding (?) the start if
this pair is already shared in another namespace? I.e., maybe this
pseudoroot can be an exception to the rule?

That might work.  It's read-only and consists only of directories, so
the grace period doesn't affect it.


I've just realized that this per-sb grace period won't work.
I.e., it's a valid situation when two or more containers are located
on the same filesystem but share different parts of it. And there is
no conflict here at all.

Well, there may be some conflict in that a file could be hardlinked into
both subtrees, and that file could be locked by users of either
export.


Is this case handled if both links are visible in the same export?
But anyway, this is not that bad. I.e., it doesn't make things unpredictable.
Probably there are some more issues like this one (bind mounts, for example).
But I think that it's root's responsibility to handle such problems.


Well, it's a problem and one that you'll probably have to address to
some degree. In truth, the fact that you're exporting different
subtrees in different containers is immaterial since they're both on
the same fs and filehandles don't carry any info about the path in and
of themselves...
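
To make that concrete, here is a minimal user-space sketch (file and
directory names are hypothetical) showing that POSIX locks attach to
the inode rather than to a path, so a lock taken through one hardlink
conflicts with a lock attempt through the other:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
	int fd1;

	/* two directories standing in for the two exported subtrees */
	mkdir("/tmp/subtree1", 0755);
	mkdir("/tmp/subtree2", 0755);

	fd1 = open("/tmp/subtree1/file", O_CREAT | O_RDWR, 0644);
	if (fd1 < 0) { perror("open"); return 1; }

	/* hardlink the same inode into the second subtree */
	unlink("/tmp/subtree2/file");
	if (link("/tmp/subtree1/file", "/tmp/subtree2/file") < 0) {
		perror("link");
		return 1;
	}

	/* take a write lock via the first path */
	if (fcntl(fd1, F_SETLK, &fl) < 0) { perror("F_SETLK"); return 1; }

	/* POSIX locks never conflict within one process, so test from a child */
	if (fork() == 0) {
		int fd2 = open("/tmp/subtree2/file", O_RDWR);

		if (fd2 >= 0 && fcntl(fd2, F_SETLK, &fl) < 0)
			printf("lock via second link denied, as expected\n");
		return 0;
	}
	wait(NULL);
	return 0;
}

The same inode-level conflict is what the two containers' clients would
run into, even though each container only ever sees its own path.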

Suppose for instance that we have a hardlinked file that's available
from two different exports in two different containers. The grace
period ends in container #1, so that nfsd starts servicing normal lock
requests. An application takes a lock on that hardlinked file. In the
meantime, a client of container #2 attempts to reclaim the lock that he
previously held on that same inode and gets denied.


That's just one example. The scarier case is that the client of
container #1 takes the lock, alters the file and then drops it again
with the client of container #2 none the wiser. Now the file got
altered while client #2 thought he held a lock on it. That won't be fun
to track down...

This sort of thing is one of the reasons I've been saying that the
grace period is really a property of the underlying filesystem and not
of nfsd itself. Of course, we do have to come up with a way to handle
the grace period that doesn't involve altering every exportable fs.
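
For the sake of discussion, here's a hypothetical sketch of what keying
grace state off the superblock might look like -- none of these names
exist in the current code, and no per-fs changes are assumed:

#include <linux/fs.h>
#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* one entry per superblock that some container is still reclaiming on */
struct sb_grace {
	struct list_head	list;
	struct super_block	*sb;		/* fs this window covers */
	unsigned long		grace_end;	/* jiffies; end of reclaim window */
};

static LIST_HEAD(sb_grace_list);
static DEFINE_SPINLOCK(sb_grace_lock);

/*
 * Ordinary (non-reclaim) lock requests would be refused while any
 * container exporting this superblock is still in its grace period.
 */
static bool sb_in_grace(struct super_block *sb)
{
	struct sb_grace *g;
	bool ret = false;

	spin_lock(&sb_grace_lock);
	list_for_each_entry(g, &sb_grace_list, list) {
		if (g->sb == sb && time_before(jiffies, g->grace_end)) {
			ret = true;
			break;
		}
	}
	spin_unlock(&sb_grace_lock);
	return ret;
}

With something like that, nfsd in container #1 would keep refusing
ordinary locks on the shared fs until container #2's clients had
finished reclaiming as well.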


I see.
But frankly speaking, it looks like the problem you are talking about
is a separate task (compared to containerization). I.e., making nfsd
work per network namespace is somewhat different from these "shared
filesystem" issues (which are actually part of the mount namespace).



--
Best regards,
Stanislav Kinsbursky

