Re: [PATCH] btrfs: root memcgroup for metadata filemap_add_folio()

On 2024/10/1 02:53, Shakeel Butt wrote:
Hi Qu,

On Sat, Sep 28, 2024 at 02:15:56PM GMT, Qu Wenruo wrote:
[BACKGROUND]
The function filemap_add_folio() charges the folio to a memory cgroup,
as we assume all page caches are accessible by user space processes and
thus need cgroup accounting.

However, btrfs is a special case: it has very large metadata, thanks in
part to its support of data checksums (by default 4 bytes per 4K of data,
and up to 32 bytes per 4K).
This means btrfs has to use the page cache for its metadata pages, to take
advantage of both the caching and the reclaim abilities of the filemap.

This has a small problem: all btrfs metadata pages have to go through
the memory cgroup charge, even though those metadata pages are not
accessible from user space, and the charging can introduce some
latency if a memory limit is set.

Btrfs currently uses the __GFP_NOFAIL flag as a workaround for this
cgroup charge situation, so that metadata pages won't really be limited
by the memory cgroup.

[ENHANCEMENT]
Instead of relying on __GFP_NOFAIL to avoid charge failure, attach
metadata pages to the root memory cgroup.

This requires exporting the symbol mem_root_cgroup for CONFIG_MEMCG,
or defining mem_root_cgroup as NULL for !CONFIG_MEMCG.

With the root memory cgroup, we skip the charging part entirely, and only
rely on __GFP_NOFAIL for the real memory allocation part.
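The intent can be sketched in kernel-style C (illustrative only, not the
actual patch: the helper name is made up, mem_root_cgroup is the symbol
the cover letter proposes to export, and whether the plain filemap charge
path honors the existing set_active_memcg() override depends on the
kernel version):

```c
/*
 * Sketch only: add a btrfs metadata folio to the page cache while
 * charging it to the root memory cgroup, so a caller's memcg limit
 * never applies to shared metadata.
 */
static int btrfs_attach_metadata_folio(struct address_space *mapping,
				       struct folio *folio, pgoff_t index)
{
	struct mem_cgroup *old_memcg;
	int ret;

	/* Redirect the charge to the root cgroup (a no-op charge). */
	old_memcg = set_active_memcg(mem_root_cgroup);
	ret = filemap_add_folio(mapping, folio, index,
				GFP_NOFS | __GFP_NOFAIL);
	set_active_memcg(old_memcg);
	return ret;
}
```

With the charge effectively skipped, __GFP_NOFAIL is left to cover only
the raw page allocation, matching the [ENHANCEMENT] description above.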


I have a couple of questions:

1. Were you using __GFP_NOFAIL just to avoid ENOMEMs? Are you ok with
oom-kills?

The NOFAIL flag is inherited from the memory allocation for metadata tree blocks.

Although btrfs already has error handling for all the possible ENOMEMs, hitting ENOMEM for metadata may still be a big problem, thus all my previous attempts to remove the NOFAIL flag got rejected.


2. What is the normal overhead of this metadata in a real world production
environment? I see 4 to 32 bytes per 4K, but what's the most common value,
and does it depend on the data of the 4K block or something else?

What did you mean by the "overhead" part? Did you mean the checksum?

If so, there is none, because btrfs stores the metadata checksum inside the tree block (thus inside the page cache). The first 32 bytes of a tree block are always reserved for the metadata checksum.

The tree block size depends on the mkfs-time option nodesize; it is 16K by default, and that's the most common value.


3. Most probably multiple metadata values are colocated on a single 4k
page of the btrfs page cache even though the corresponding page cache
might be charged to different cgroups. Is that correct?

It is not always a single 4K page; it depends on the nodesize, which is 16K by default.

Otherwise yes, the metadata page cache can be charged to different cgroups, depending on the caller's context. And we do not want to charge the metadata page cache to the caller's cgroup, since it's really a shared resource and the caller has no way of directly accessing the page cache.

Not charging the metadata page cache will align btrfs more with ext4/xfs, which both use regular page allocations without attaching the pages to a filemap.


4. What is stopping us to use reclaimable slab cache for this metadata?

Josef tried this before; the attempt failed on the shrinker part, and partly due to the size.

Btrfs has very large metadata compared to all other filesystems, not only due to its COW nature and a larger tree block size (16K by default), but also due to the extra data checksums (4 bytes per 4K by default, 32 bytes per 4K maximum).

On a real world system, the metadata itself can easily reach hundreds of GiBs, thus a shrinker is definitely needed.

Thus, so far, btrfs uses the page cache for its metadata cache.

Thanks,
Qu


thanks,
Shakeel




