Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Parallelizing filesystem writeback

On Tue, Feb 11, 2025 at 12:13:18PM +1100, Dave Chinner wrote:
> Should we be looking towards using a subset of the existing list_lru
> functionality for writeback contexts here? i.e. create a list_lru
> object with N-way scalability, allow the fs to provide an
> inode-number-to-list mapping function, and use the list_lru
> interfaces to abstract away everything physical and cgroup related
> for tracking dirty inodes?
> 
> Then selecting inodes for writeback becomes a list_lru_walk()
> variant depending on what needs to be written back (e.g. physical
> node, memcg, both, everything that is dirty everywhere, etc).

I *suspect* you're referring to abstracting or sharing the per-NUMA-node
sharding functionality of list_lru, so we can divide objects across
NUMA nodes in similar ways for different use cases?

I ask because list_lru itself is about reclaim, not writeback; from my
reading, the per-NUMA-node sharding in list_lru is the golden nugget to
focus on here.
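To make sure we're talking about the same shape of thing, here's a rough
userspace sketch of what I understand you to mean: an N-way sharded
dirty-inode structure where the fs supplies the ino-to-shard mapping and
each shard can be walked independently. This is not the list_lru API and
all the names below are made up purely for illustration:

/*
 * Hypothetical userspace sketch, not kernel code and not the
 * list_lru API: N-way sharded dirty-inode tracking with an
 * fs-provided inode-number-to-shard mapping, plus a per-shard
 * walk loosely analogous to a list_lru_walk() variant.
 */
#include <stdio.h>
#include <stdlib.h>

struct dirty_inode {
	unsigned long ino;
	struct dirty_inode *next;	/* per-shard singly linked list */
};

struct dirty_shards {
	unsigned int nr_shards;		/* "N-way scalability" */
	unsigned int (*ino_to_shard)(unsigned long ino,
				     unsigned int nr_shards);
	struct dirty_inode **lists;	/* one list head per shard */
};

/* Example fs-provided mapping: simple modulo on inode number. */
static unsigned int fs_ino_to_shard(unsigned long ino, unsigned int nr)
{
	return ino % nr;
}

static void mark_inode_dirty(struct dirty_shards *ds,
			     struct dirty_inode *di)
{
	unsigned int s = ds->ino_to_shard(di->ino, ds->nr_shards);

	di->next = ds->lists[s];
	ds->lists[s] = di;
}

/* Walk one shard; each shard could back an independent wb worker. */
static void writeback_shard(struct dirty_shards *ds, unsigned int s)
{
	for (struct dirty_inode *di = ds->lists[s]; di; di = di->next)
		printf("shard %u: writing back inode %lu\n", s, di->ino);
	ds->lists[s] = NULL;	/* all clean after writeback */
}

int main(void)
{
	struct dirty_shards ds = {
		.nr_shards = 4,
		.ino_to_shard = fs_ino_to_shard,
		.lists = calloc(4, sizeof(struct dirty_inode *)),
	};
	struct dirty_inode inodes[] = {
		{ .ino = 10 }, { .ino = 11 }, { .ino = 14 },
	};

	for (unsigned int i = 0; i < 3; i++)
		mark_inode_dirty(&ds, &inodes[i]);

	for (unsigned int s = 0; s < ds.nr_shards; s++)
		writeback_shard(&ds, s);

	free(ds.lists);
	return 0;
}

If that matches what you had in mind, then the mapping callback is
where an fs could encode AG/zone locality rather than plain modulo.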

  Luis



