Re: [PATCH] VFS/BTRFS/NFSD: provide more unique inode number for btrfs export

On Mon, Aug 16, 2021 at 1:21 AM NeilBrown <neilb@xxxxxxx> wrote:
>
> On Mon, 16 Aug 2021, Roman Mamedov wrote:
> >
> > I wondered a bit myself, what are the downsides of just doing the
> > uniquefication inside Btrfs, not leaving that to NFSD?
> >
> > I mean not even adding the extra stat field, just return the inode itself with
> > that already applied. Surely cannot be any worse collision-wise, than
> > different subvolumes straight up having the same inode numbers as right now?
> >
> > Or is it a performance concern, always doing more work, for something which
> > only NFSD has needed so far.
>
> Any change in behaviour will have unexpected consequences.  I think the
> btrfs maintainers' perspective is that they don't want to change
> behaviour if they don't have to (which is reasonable) and that currently
> they don't have to (which probably means that users aren't complaining
> loudly enough).
>
> NFS export of BTRFS is already demonstrably broken and users are
> complaining loudly enough that I can hear them ....  Though it has been
> broken like this for 10 years, so I do wonder why I didn't hear
> them before.
>
> If something is perceived as broken, then a behaviour change that
> appears to fix it is more easily accepted.
>
> However, having said that I now see that my latest patch is not ideal.
> It changes the inode numbers associated with filehandles of objects in
> the non-root subvolume.  This will cause the Linux NFS client to treat
> the objects as 'stale'.  For most objects this is a transient annoyance:
> reopen the file or restart the process and all should be well again.
> However, if the inode number of the mount point changes, you will need
> to unmount and remount.  That is somewhat more of an annoyance.
>
> There are a few ways to handle this more gracefully.
>
> 1/ We could get btrfs to hand out new filehandles as well as new inode
> numbers, but still accept the old filehandles.  Then we could make the
> inode number reported be based on the filehandle.  This would be nearly
> seamless but rather clumsy to code.  I'm not *very* keen on this idea,
> but it is worth keeping in mind.
>

So objects would change their inode number after the nfs inode cache is
evicted, while the nfs filesystem is still mounted. That does not sound ideal.

But I am a bit confused about the problem.
If the export is of the btrfs root, then the nfs client cannot access any
subvolumes (right?) - that was the bug report - so the value of inode
numbers in non-root subvolumes is not an issue.
If the export is of a non-root subvolume, then why bother changing anything
at all? Is there a need to traverse into sub-sub-volumes?

> 2/ We could add a btrfs mount option to control whether the uniquifier
> was set or not.  This would allow the sysadmin to choose when to manage
> any breakage.  I think this is my preference, but Josef has declared an
> aversion to mount options.
>
> 3/ We could add a module parameter to nfsd to control whether the
> uniquifier is merged in.  This again gives the sysadmin control, and it
> can be done despite any aversion from btrfs maintainers.  But I'd need
> to overcome any aversion from the nfsd maintainers, and I don't know how
> strong that would be yet. (A new export option isn't really appropriate.
> It is much more work to add an export option than the add a mount option).
>

That is too bad, because IMO from the user's POV, an "fsid=btrfsroot" or
"cross-subvol" export option would have been a nice way to describe and
opt in to this new functionality.
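
For illustration only - neither option name exists in nfs-utils today;
this is roughly what such an opt-in might look like in /etc/exports,
assuming the proposed "cross-subvol" spelling:

```
# Hypothetical /etc/exports entry; "cross-subvol" is a proposed,
# not an existing, export option.
/srv/btrfs  *(rw,no_subtree_check,cross-subvol)
```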

But let's consider for a moment the consequences of enabling this functionality
automatically whenever exporting a btrfs root volume without "crossmnt":

1. Objects inside a subvol that are inaccessible(?) with current nfs/nfsd
    without "crossmnt" will become accessible after enabling the feature -
    this will match the user experience of accessing btrfs on the host.
2. The inode numbers of the newly accessible objects would not match the
    inode numbers on the host fs (no big deal?).
3. The inode numbers of objects in a snapshot would not match the inode
    numbers of the original (pre-snapshot) objects (an acceptable tradeoff
    for being able to access the snapshot objects without bloating
    /proc/mounts?).
4. The inode numbers of objects in a subvol observed via this "cross-subvol"
    export would not match the inode numbers of the same objects observed
    via an individual subvol export.
5. st_ino conflicts are possible when multiplexing the subvol id and the
    inode number. overlayfs resolved such conflicts by allocating an inode
    number from a reserved non-persistent inode range, which may cause
    objects to change their inode number during the lifetime of the
    filesystem (a sensible tradeoff?).

I think that #4 is a bit hard to swallow and #3 is borderline acceptable...
Both are quite hard to document, and hard to set expectations for, as a
non-opt-in change of behavior when exporting the btrfs root.

IMO, an nfsd module parameter would give some control and is therefore
a must, but it won't make it any easier to document and set user
expectations when the semantics are not clearly stated in the exports table.

You claim that "A new export option isn't really appropriate,"
but your only argument is that "It is much more work to add
an export option than to add a mount option."

With all due respect, for this particular challenge, with all the
constraints involved, that sounds like a pretty weak argument.

Surely adding an export option is easier than slowly changing all the
userspace tools to understand subvolumes - a solution that you had
previously brought up?

Can you elaborate some more on your aversion to a new
export option?

Thanks,
Amir.


