Re: [PATCH 0/2] mm: skip memcg for certain address space

On 2024/7/18 16:47, Vlastimil Babka (SUSE) wrote:
> On 7/18/24 12:38 AM, Qu Wenruo wrote:
> [...]
>> Another question is, I only see this hang with larger folios (order 2 vs
>> the old order 0) when adding to the same address space.
>>
>> Does the folio order have anything to do with the problem, or does a
>> higher order just make it more likely?

> I didn't spot anything in the memcg charge path that would depend on the
> order directly, hm. Also, what kernel version was showing these soft lockups?

The previous rc kernel, IIRC it's v6.10-rc6.

But that needs extra btrfs patches; otherwise btrfs is still only doing order-0 allocations, then adding the order-0 folios into the filemap.

The extra patch just directs btrfs to allocate an order-2 folio (matching the default 16K nodesize), then attaches the folio to the metadata filemap.

With extra code handling corner cases like different folio sizes etc.
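For reference, the core of the change is conceptually like this (a minimal sketch, not the actual patch; error handling and the -EEXIST race are simplified, and alloc_metadata_folio() is just an illustrative name):

/*
 * Sketch only: allocate one order-2 (16K with 4K pages) folio and attach
 * it to the metadata inode's mapping, instead of four order-0 folios.
 */
static struct folio *alloc_metadata_folio(struct address_space *mapping,
					  pgoff_t index, gfp_t gfp)
{
	struct folio *folio;
	int ret;

	folio = filemap_alloc_folio(gfp, 2);
	if (!folio)
		return ERR_PTR(-ENOMEM);

	/* This is where the memcg charge for the whole folio happens. */
	ret = filemap_add_folio(mapping, folio, index, gfp);
	if (ret) {
		folio_put(folio);
		return ERR_PTR(ret);
	}
	return folio;
}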


>> And finally, even without the hang problem, does it make any sense to
>> skip all the possible memcg charges completely, either to reduce latency
>> or just to reduce GFP_NOFAIL usage, for those user-inaccessible inodes?
>
> Is it common to even use the filemap code for such metadata that can't
> really be mapped to userspace?

At least XFS/EXT4 don't use the filemap code to handle their metadata. One of the reasons is that btrfs has pretty large metadata structures.
Not only for the regular filesystem things, but also data checksums.

Even with the default CRC32C algo, it's 4 bytes per 4K of data.
Thus things can grow huge pretty easily, and that's the reason why btrfs is still sticking to the filemap solution.
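To put a number on it: for 1 TiB of data, that's 1 TiB / 4 KiB * 4 bytes = 1 GiB of raw checksum bytes alone, before any btree item overhead.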

> How does it even interact with reclaim? Do they
> become part of the page cache and get scanned by reclaim together with data
> that is mapped?

Yes, it's handled just like any other filemap: it also uses the page cache, and all the LRU/scanning machinery.

The major difference is that we only implement a small subset of the address space operations:

- write
- release
- invalidate
- migrate
- dirty (debug only, otherwise falls back to filemap_dirty_folio())

Note there are no read operations, as it's btrfs itself triggering the metadata reads, so there is no read/readahead path. Thus we're in full control of the page cache, e.g. we determine the folio size to be added into the filemap.
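In current kernels that ops table looks roughly like the following (quoted from memory of fs/btrfs/disk-io.c's btree_aops, so treat the exact handler names and guards as approximate):

/* Only writeback/release/invalidate/migrate/dirty are wired up;
 * there is intentionally no .read_folio / .readahead.
 */
static const struct address_space_operations btree_aops = {
	.writepages	= btree_writepages,
	.release_folio	= btree_release_folio,
	.invalidate_folio = btree_invalidate_folio,
#ifdef CONFIG_MIGRATION
	.migrate_folio	= btree_migrate_folio,
#endif
#ifdef CONFIG_BTRFS_DEBUG
	.dirty_folio	= btree_dirty_folio,	/* extra sanity checks */
#else
	.dirty_folio	= filemap_dirty_folio,
#endif
};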

The filemap infrastructure provides two nice pieces of functionality:

- (Page) Cache
  So that we can easily determine if we really need to read from the
  disk, which can save us a lot of random IO (see the lookup sketch
  after this list).

- Reclaiming
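A rough illustration of the cache side (a sketch; locking and error handling omitted, read_from_disk() is a stand-in for the fs-specific IO path, and recent kernels return an ERR_PTR on a cache miss):

/* Sketch: check the page cache first, only hit the disk on a miss. */
static struct folio *read_metadata_block(struct address_space *mapping,
					 pgoff_t index)
{
	struct folio *folio;

	folio = filemap_get_folio(mapping, index);
	if (!IS_ERR(folio))
		return folio;		/* cache hit, no IO needed */

	/* Cache miss: allocate, insert into the mapping, read from disk. */
	folio = alloc_metadata_folio(mapping, index, GFP_NOFS);
	if (IS_ERR(folio))
		return folio;
	read_from_disk(folio);
	return folio;
}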

And of course the page cache of the metadata inode won't be cloned/shared with any user-accessible inode.

> How are the LRU decisions handled if there are no references
> from PTE access bits? Or can they even be reclaimed, or is reclaim
> impossible because there may be e.g. other open inodes pinning this metadata?

If I understand it correctly, we have implemented the release_folio() callback, which does the btrfs metadata checks to determine whether we can release the current folio, and avoids releasing folios that are still under IO etc.
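Conceptually that callback looks something like this (a sketch only; the dirty/writeback checks are just the obvious ones, and try_release_metadata() is a stand-in for the real check that nobody still references the extent buffers in the folio):

/* Sketch of a metadata release_folio() callback. */
static bool metadata_release_folio(struct folio *folio, gfp_t gfp)
{
	/* Still dirty or under IO: refuse to release. */
	if (folio_test_dirty(folio) || folio_test_writeback(folio))
		return false;

	/* Fs-specific check that no one still holds the metadata. */
	return try_release_metadata(folio);
}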


> (Sorry if the questions seem noob, I'm not that familiar with the page
> cache side of mm.)

No worries at all, I'm also a newbie on the whole mm side.

Thanks,
Qu

