On Tue, Oct 01, 2024 at 07:30:38AM GMT, Qu Wenruo wrote:
>
>
> On 2024/10/1 02:53, Shakeel Butt wrote:
> > Hi Qu,
> >
> > On Sat, Sep 28, 2024 at 02:15:56PM GMT, Qu Wenruo wrote:
> > > [BACKGROUND]
> > > The function filemap_add_folio() charges the memory cgroup,
> > > as we assume all page caches are accessible by user space processes
> > > and thus need the cgroup accounting.
> > >
> > > However btrfs is a special case: it has very large metadata thanks to
> > > its support of data csums (by default 4 bytes per 4K of data, and as
> > > large as 32 bytes per 4K of data).
> > > This means btrfs has to use the page cache for its metadata pages, to
> > > take advantage of both the caching and the reclaim ability of the
> > > filemap.
> > >
> > > This has a tiny problem: all btrfs metadata pages have to go through
> > > the memcg charge, even though those metadata pages are not accessible
> > > by user space, and doing the charging can introduce some latency if
> > > there is a memory limit set.
> > >
> > > Btrfs currently uses the __GFP_NOFAIL flag as a workaround for this
> > > cgroup charge situation, so that metadata pages won't really be
> > > limited by the memcg.
> > >
> > > [ENHANCEMENT]
> > > Instead of relying on __GFP_NOFAIL to avoid charge failure, use the
> > > root memory cgroup to attach metadata pages.
> > >
> > > This does need to export the symbol mem_root_cgroup for CONFIG_MEMCG
> > > (or define mem_root_cgroup as NULL for !CONFIG_MEMCG), though.
> > >
> > > With the root memory cgroup, we directly skip the charging part, and
> > > only rely on __GFP_NOFAIL for the real memory allocation part.
> > >
> >
> > I have a couple of questions:
> >
> > 1. Were you using __GFP_NOFAIL just to avoid ENOMEMs? Are you ok with
> > oom-kills?
>
> The NOFAIL flag is inherited from the memory allocation for metadata
> tree blocks.
>
> Although btrfs already has error handling for all the possible ENOMEMs,
> hitting ENOMEMs for metadata may still be a big problem, thus all my
> previous attempts to remove the NOFAIL flag got rejected.

__GFP_NOFAIL for memcg charging is reasonable in many scenarios. The memcg
oom-killer is enabled for __GFP_NOFAIL, and going over the limit and
getting oom-killed is totally reasonable. Orthogonal to the discussion,
though.

> >
> > 2. What is the normal overhead of this metadata in a real world
> > production environment? I see 4 to 32 bytes per 4k, but what's the
> > most used one, and does it depend on the data of the 4k or something
> > else?
>
> What did you mean by the "overhead" part? Did you mean the checksum?

To me this metadata is the overhead, so yes, the checksum: something that
is not the actual data stored on the storage.

> If so, there is none, because btrfs stores the metadata checksum inside
> the tree block (thus inside the page cache).
> The first 32 bytes of a tree block are always reserved for the metadata
> checksum.
>
> The tree block size depends on the mkfs-time option nodesize; it is 16K
> by default, and that's the most common value.

Sorry, I am not very familiar with btrfs. What is a tree block?

> >
> > 3. Most probably multiple metadata values are colocated on a single 4k
> > page of the btrfs page cache even though the corresponding page cache
> > might be charged to different cgroups. Is that correct?
>
> Not always a single 4K page; it depends on the nodesize, which is 16K by
> default.
>
> Otherwise yes, the metadata page cache can be charged to different
> cgroups, depending on the caller's context.
> And we do not want to charge the metadata page cache to the caller's
> cgroup, since it's really a shared resource and the caller has no way to
> directly access the page cache.
>
> Not charging the metadata page cache will align btrfs more with ext4/xfs,
> which both use regular page allocation without attaching to a filemap.

Can you point me to the ext4/xfs code where they are allocating uncharged
memory for their metadata?

> >
> > 4. What is stopping us from using a reclaimable slab cache for this
> > metadata?
>
> Josef has tried this before; the attempt failed on the shrinker part,
> partly due to the size.
>
> Btrfs has very large metadata compared to all other fses, not only due
> to the COW nature and a larger tree block size (16K by default), but
> also due to the extra data checksums (4 bytes per 4K by default, 32
> bytes per 4K maximum).
>
> On a real world system, the metadata itself can easily reach hundreds of
> GiBs, thus a shrinker is definitely needed.

This amount of uncharged memory is concerning; it becomes part of the
system overhead and may impact the schedulable memory in a datacenter
environment.

Overall the code seems fine, and there is no pushback from me if the btrfs
maintainers are ok with this. I think btrfs should move to a slab+shrinker
based solution for this metadata unless there is a deep technical reason
not to.

thanks,
Shakeel
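
For reference, here is a minimal sketch of the "attach to the root memory
cgroup" idea being discussed, reusing the kernel's existing
set_active_memcg() remote-charging helper. The wrapper name below is made
up for illustration, and root_mem_cgroup stands in for whichever symbol
the series actually exposes (mem_root_cgroup in the patch description);
this is not the actual patch.

	/*
	 * Illustrative sketch only: redirect the memcg charge done by
	 * filemap_add_folio() to the root cgroup, where the charge is
	 * effectively skipped, while keeping __GFP_NOFAIL for the real
	 * allocation.
	 */
	#include <linux/memcontrol.h>
	#include <linux/pagemap.h>
	#include <linux/sched/mm.h>

	static int btrfs_attach_metadata_folio(struct address_space *mapping,
					       struct folio *folio,
					       pgoff_t index, gfp_t gfp)
	{
		struct mem_cgroup *old_memcg;
		int ret;

		/* Charge against the root cgroup, not the caller's cgroup. */
		old_memcg = set_active_memcg(root_mem_cgroup);
		ret = filemap_add_folio(mapping, folio, index,
					gfp | __GFP_NOFAIL);
		set_active_memcg(old_memcg);

		return ret;
	}

With the active memcg pointed at the root cgroup, the charge path in
filemap_add_folio() is effectively a no-op, which matches the "directly
skip the charging part" behaviour described in the [ENHANCEMENT] section
above.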