Re: [PATCH 0/3] fanotify support for btrfs sub-volumes

On Fri, Oct 27, 2023 at 4:17 PM Josef Bacik <josef@xxxxxxxxxxxxxx> wrote:
>
> On Thu, Oct 26, 2023 at 10:46:01PM -0700, Christoph Hellwig wrote:
> > I think you're missing the point.  A bunch of statx fields might be
> > useful, but they are not solving the problem.  What you need is
> > a separate vfsmount per subvolume so that userspace sees when it
> > is crossing into it.  We probably can't force this onto existing
> > users, so it needs a mount, or even better on-disk option but without
> > that we're not getting anywhere.
> >
>
> We have this same discussion every time, and every time you stop responding
> after I point out the problems with it.
>
> A per-subvolume vfsmount means that /proc/mounts and /proc/$PID/mountinfo become
> insanely dumb.  I've got millions of machines in this fleet with thousands of
> subvolumes.  One of our workloads fires up several containers per task and runs
> multiple tasks per machine, so on the order of 10-20k subvolumes.
>

I think it is probably just as common to see similar workloads using overlayfs
for containers, especially since the more you scale the number of containers,
the more you need the inode page cache sharing between them.

Overlayfs has an sb/vfsmount per instance, so any users having problems with
a huge number of mounts would already have complained about it, and maybe
they have, because...

> So now I've got thousands of entries in /proc/mounts, and literally every
> system-related tool parses /proc/mounts every 4 nanoseconds, so now I'm significantly
> contributing to global warming from the massive amount of CPU usage that is
> burned parsing this stupid file.
>

...after Miklos sorts out the new listmount()/statmount() syscalls and mount
tree change notifications, maybe a vfsmount per btrfs subvol could be
reconsidered? ;)
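
For context, here is a minimal C sketch of what practically every tool does
today: scan all of /proc/self/mountinfo line by line even to answer a question
about a single mount, so the per-query cost is linear in the total number of
mounts (the btrfs-counting logic is hypothetical, just to show the shape):

/* Count btrfs entries by scanning /proc/self/mountinfo.
 * Every line must be read even when only one mount is of
 * interest - this is the O(mounts) cost per query.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/mountinfo", "r");
    char line[4096];
    int total = 0, btrfs = 0;

    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        total++;
        /* The fs type follows the " - " separator. */
        char *sep = strstr(line, " - ");
        if (sep && !strncmp(sep + 3, "btrfs", 5))
            btrfs++;
    }
    fclose(f);
    printf("%d mounts, %d btrfs\n", total, btrfs);
    return 0;
}

A statmount()/listmount() style API would let such a tool query one mount id
directly instead of rescanning the whole table on every change.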

> Additionally, now you're ending up with potentially sensitive information being
> leaked through /proc/mounts that you didn't expect to be leaked before.  I've
> got users complaining to me because "/home/john/twilight_fanfic" showed up in
> their /proc/mounts.
>

This makes me wonder.
I understand why a distinct st_dev is needed for btrfs snapshots,
where the same st_ino can refer to different revisions of a file.
I am not sure I understand why a distinct st_dev is needed for subvols
that are created for containerisation reasons.
Don't files in sub-vols have unique st_ino anyway?
Is the st_dev mitigation a must for sub-vols, or just an implementation
convenience?
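
To make the question concrete, this is the kind of check userspace relies on
today to notice that it crossed into another filesystem or subvolume (a
minimal sketch; the paths are hypothetical):

/* Detect a mount/subvolume boundary the way du and find -xdev do:
 * st_dev changes between a directory and its child.
 */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat parent, child;

    if (stat("/mnt/btrfs", &parent) ||
        stat("/mnt/btrfs/subvol1", &child)) {
        perror("stat");
        return 1;
    }
    if (parent.st_dev != child.st_dev)
        printf("crossed a boundary: st_dev %lx -> %lx\n",
               (unsigned long)parent.st_dev,
               (unsigned long)child.st_dev);
    else
        printf("same st_dev %lx\n", (unsigned long)parent.st_dev);
    return 0;
}

Since btrfs assigns each subvolume an anonymous st_dev, this check fires on
every subvolume boundary, snapshot or not.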

> And then there's the expiry thing.  Now they're just directories, reclaim works
> like it works for anything else.  With auto mounts they have to expire at some
> point, which makes them so much heavier weight than we want to sign up for.
> Who knows what sort of scalability issues we'll run into.
>

I agree that this aspect of auto mounts is unfortunate, but improving the
reclaim of auto mounts would also benefit other filesystems that support them.

In the end, I think we all understand that the legacy btrfs behavior is not
going away without an opt-in, but it would be a good outcome if users could
choose the tradeoff between the efficiency of a single mount and working well
with features like nfs export and a fanotify subvol watch.
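
To illustrate what a subvol watch would build on, here is a minimal sketch of
a filesystem-wide fanotify watch with FAN_REPORT_FID (the mount point is
hypothetical and this needs CAP_SYS_ADMIN):

/* Watch a whole filesystem and print the fsid of each event.
 * With FAN_REPORT_FID, every event carries an fsid that listeners
 * use to tell which filesystem instance the event came from.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/fanotify.h>

int main(void)
{
    char buf[8192];
    ssize_t len;
    int fd;

    fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FID, 0);
    if (fd < 0) {
        perror("fanotify_init");
        return 1;
    }
    if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_FILESYSTEM,
                      FAN_CREATE | FAN_DELETE | FAN_ONDIR,
                      AT_FDCWD, "/mnt/btrfs")) {
        perror("fanotify_mark");
        return 1;
    }
    while ((len = read(fd, buf, sizeof(buf))) > 0) {
        struct fanotify_event_metadata *ev = (void *)buf;

        while (FAN_EVENT_OK(ev, len)) {
            struct fanotify_event_info_fid *fid = (void *)(ev + 1);

            printf("mask 0x%llx fsid %x:%x\n",
                   (unsigned long long)ev->mask,
                   fid->fsid.val[0], fid->fsid.val[1]);
            ev = FAN_EVENT_NEXT(ev, len);
        }
    }
    return 0;
}

Whether that per-event identity should be per-sb or per-subvolume on btrfs is
the crux of this series.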

Having an incentive to migrate to a "multi-sb" btrfs mode would create
pressure from end users on distros, and from there on project leaders, to fix
the issues you mentioned related to a huge number of mounts and auto mount
reclaim.

Thanks,
Amir.
