Re: [PATCH v6 2/2] fuse: add new function to invalidate cache for all inodes

On Tue, Feb 18 2025, Miklos Szeredi wrote:

> On Tue, 18 Feb 2025 at 01:55, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>>
>> On Mon, Feb 17, 2025 at 01:32:28PM +0000, Luis Henriques wrote:
>> > Currently userspace is able to notify the kernel to invalidate the cache
>> > for an inode.  This means that, if all the inodes in a filesystem need to
>> > be invalidated, then userspace needs to iterate through all of them and do
>> > this kernel notification separately.
>> >
>> > This patch adds a new option that allows userspace to invalidate all the
>> > inodes with a single notification operation.  In addition to invalidating
>> > all the inodes, it also shrinks the sb dcache.
>>
>> You still haven't justified why we should be exposing this
>> functionality in a low level filesystem ioctl out of sight of the
>> VFS.
>>
>> User driven VFS cache invalidation has long been considered a
>> DOS-in-waiting, hence we don't allow user APIs to invalidate caches
>> like this. This is one of the reasons that /proc/sys/vm/drop_caches
>> requires root access - it's system debug and problem triage
>> functionality, not a production system interface....
>>
>> Every other situation where filesystems invalidate vfs caches is
>> during mount, remount or unmount operations.  Without actually
>> explaining how this functionality is controlled and how user abuse
>> is not possible (e.g. explain the permission model and/or how only
>> root can run this operation), it is not really possible to determine
>> whether we should unconditionally allow VFS cache invalidation outside
>> of the existing operation scope....
>
> I think you are grabbing the wrong end of the stick here.
>
> This is not about an arbitrary user being able to control caching
> behavior of a fuse filesystem.  It's about the filesystem itself being
> able to control caching behavior.
>
> I'm not arguing for the validity of this particular patch, just saying
> that something like this could be valid.  And as explained in my other
> reply there's actually a real problem out there waiting for a
> solution.

The problem I'm trying to solve is that, if a filesystem wants to ask the
kernel to get rid of all inodes, it has to ask the kernel to forget each
one individually.  The specific filesystem I'm looking at is CVMFS,
which is a read-only filesystem that needs to be able to update the full
set of filesystem objects when a new generation snapshot becomes
available.

The obvious problem with the current solution (i.e. walking through all
the inodes) is that it is slow.  And if new snapshot generations succeed
one another fast enough, memory usage becomes a major issue -- enough to
need a helper daemon that monitors memory and does a full remount when
usage passes some predefined threshold.

Obviously, I'm open to other solutions, including the one suggested by
Miklos in his other reply -- getting rid of the N least-recently-used
inodes.  I'm not sure how that could be implemented yet, but I can have a
look into it if you think that's the right interface.

Cheers,
-- 
Luís

> Thanks,
> Miklos
>
>
>>
>> Finally, given that the VFS can only do best-effort invalidation
>> and won't provide FUSE (or any other filesystem) with any cache
>> invalidation guarantees outside of specific mount and unmount
>> contexts, I'm not convinced that this is actually worth anything...
>>
>> -Dave.
>> --
>> Dave Chinner
>> david@xxxxxxxxxxxxx




