Re: [PATCH v2] BTRFS/NFSD: provide more unique inode number for btrfs export

> > I do have one general question about the expected behavior -
> > In his comment to the LWN article [2], Josef writes:
> >
> > "The st_dev thing is unfortunate, but again is the result of a lack of
> > interfaces.
> >  Very early on we had problems with rsync wandering into snapshots and
> >  copying loads of stuff. Find as well would get tripped up.
> >  The way these tools figure out if they've wandered into another file system
> >  is if the st_dev is different..."
> >
> > If your plan goes through to export the main btrfs filesystem and
> > subvolumes as a uniform st_dev namespace to the NFS client,
> > what's to stop those old issues from re-emerging on NFS exported btrfs?
>
> That comment from Josef was interesting.... It doesn't align with
> Commit 3394e1607eaf ("Btrfs: Give each subvol and snapshot their own anonymous devid")
> when Chris Mason introduced the per-subvol device number with the
> justification that:
>     Each subvolume has its own private inode number space, and so we need
>     to fill in different device numbers for each subvolume to avoid confusing
>     applications.
>
> But I understand that history can be messy and maybe there were several
> justifications of which Josef remembers one and Chris reported
> another.
>

I don't see a contradiction between the two reasons.
Reporting two different objects with the same st_ino;st_dev is one problem,
and rsync wandering into subvolumes is another.
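To make the first problem concrete: the (st_dev, st_ino) pair is the
conventional file-identity key, e.g. for hardlink detection in du and
rsync -H, so two distinct objects that share it look like one hardlinked
file. A minimal Python sketch (the function names are mine, just for
illustration):

```python
import os

def file_key(path):
    """The identity key tools like du and rsync -H rely on: two paths
    refer to the same file iff they share (st_dev, st_ino).  If an
    exporter presents distinct files under one st_dev with clashing
    st_ino values, this check produces false positives."""
    st = os.lstat(path)
    return (st.st_dev, st.st_ino)

def dedupe(paths):
    """Return one representative path per underlying file,
    as a hardlink-aware tool would."""
    seen = {}
    for p in paths:
        seen.setdefault(file_key(p), p)
    return list(seen.values())
```

With a unified st_dev and duplicate st_ino across subvolumes, dedupe()
would silently merge unrelated files.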

Separate st_dev solves the first problem and leaves the behavior
of rsync in the hands of the user (i.e. rsync --one-file-system).
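The second problem is what --one-file-system (or find -xdev) addresses,
by pruning anything whose st_dev differs from the walk root's.  A
minimal sketch of that heuristic in Python (the function name is mine):

```python
import os

def walk_one_fs(root):
    """Yield file paths under root, skipping entries whose st_dev
    differs from root's -- the same heuristic rsync --one-file-system
    and find -xdev use to avoid crossing into another filesystem (or,
    on btrfs, into a subvolume with its own anonymous devid)."""
    root_dev = os.lstat(root).st_dev
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune subdirectories that live on a different device.
        dirnames[:] = [d for d in dirnames
                       if os.lstat(os.path.join(dirpath, d)).st_dev == root_dev]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.lstat(path).st_dev == root_dev:
                yield path
```

Per-subvol device numbers are exactly what makes this pruning work
for local btrfs access.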

> If rsync did, in fact, wander into subvols and didn't get put off by the
> duplicate inode numbers (like 'find' does), then it would still do that
> when accessing btrfs over NFS.  This has always been the case.  Chris'
> "fix" only affected local access, it didn't change NFS access at all.
>

Right, so the right fix IMO would be to provide similar semantics
to the NFS client, like your first patch set tried to do.

> >
> > IOW, the user experience you are trying to solve is inability of 'find'
> > to traverse the unified btrfs namespace, but Josef's comment indicates
> > that some users were explicitly unhappy from 'find' trying to traverse
> > into subvolumes to begin with.
>
> I believe that even 12 years ago, find would have complained if it saw a
> directory with the same inode as an ancestor.  Chris's fix wouldn't
> prevent find from entering in that case, because it wouldn't enter
> anyway.
>
> >
> > So is there really a globally expected user experience?
>
> No.  Everybody wants what they want.  There is some overlap, but no
> guarantees.  That is the unavoidable consequence of ignoring standards
> when implementing functionality.
>
> > If not, then I really don't see how an nfs export option can be avoided.
>
> And I really don't see how an nfs export option would help...  Different
> people within an organisation and using the same export might have
> different expectations.

That's true.
But if the admin decides to export a specific btrfs mount as a non-unified
filesystem, then NFS clients can decide whether or not to auto-mount the
exported subvolumes, and different users on the client machine can decide
if they want to rsync or rsync --one-file-system, just as they would with
local btrfs.

And maybe I am wrong, but I don't see how the decision on whether to
export a non-unified btrfs can be made as a btrfs option or an nfsd
global option; that's why I ended up with an export option.

Thanks,
Amir.


