Hi Qu,

On Sat, Sep 28, 2024 at 02:15:56PM GMT, Qu Wenruo wrote:
> [BACKGROUND]
> The function filemap_add_folio() charges the memory cgroup,
> as we assume all page caches are accessible by user space processes
> and thus need the cgroup accounting.
>
> However btrfs is a special case: it has very large metadata thanks to
> its support of data csum (by default it's 4 bytes per 4K data, and can
> be as large as 32 bytes per 4K data).
> This means btrfs has to use the page cache for its metadata pages, to
> take advantage of both the caching and reclaim abilities of the filemap.
>
> This has a small problem: all btrfs metadata pages have to go through
> the memcg charge, even though those metadata pages are not accessible
> by user space, and the charging can introduce some latency if there is
> a memory limit set.
>
> Btrfs currently uses the __GFP_NOFAIL flag as a workaround for this
> cgroup charge situation so that metadata pages won't really be limited
> by the memcg.
>
> [ENHANCEMENT]
> Instead of relying on __GFP_NOFAIL to avoid charge failure, use the
> root memory cgroup to attach metadata pages.
>
> Although this needs to export the symbol mem_root_cgroup for
> CONFIG_MEMCG, or define mem_root_cgroup as NULL for !CONFIG_MEMCG.
>
> With the root memory cgroup, we directly skip the charging part, and
> only rely on __GFP_NOFAIL for the real memory allocation part.

I have a couple of questions:

1. Were you using __GFP_NOFAIL just to avoid ENOMEMs? Are you ok with
   oom-kills?

2. What is the normal overhead of this metadata in a real-world
   production environment? I see 4 to 32 bytes per 4K, but which size is
   most commonly used, and does it depend on the data of the 4K block or
   on something else?

3. Most probably multiple metadata values are colocated on a single 4k
   page of the btrfs page cache, even though the corresponding data
   pages might be charged to different cgroups. Is that correct?

4. What is stopping us from using a reclaimable slab cache for this
   metadata?

thanks,
Shakeel
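P.S. To make the overhead figures quoted above concrete, here is a quick back-of-the-envelope calculation. The 4-byte and 32-byte per-4K csum sizes are the figures from the quoted mail (4 bytes corresponds to the default crc32c; 32 bytes to algorithms like sha256 or blake2b); the helper below is just an illustration, not code from the patch:

```python
# Rough csum metadata overhead for btrfs, using the figures quoted
# above: 4 to 32 bytes of checksum per 4 KiB data block.
BLOCK_SIZE = 4096  # bytes per data block

def csum_overhead(data_bytes: int, csum_size: int) -> int:
    """Return the total checksum bytes needed to cover data_bytes."""
    blocks = -(-data_bytes // BLOCK_SIZE)  # ceiling division
    return blocks * csum_size

TIB = 1 << 40
# 1 TiB of data, 4 bytes/block (crc32c default): 1 GiB of csums
print(csum_overhead(TIB, 4))   # -> 1073741824
# 1 TiB of data, 32 bytes/block (e.g. sha256): 8 GiB of csums
print(csum_overhead(TIB, 32))  # -> 8589934592
```

So the metadata overhead is roughly 0.1% of the data size at the default csum size, and up to about 0.8% at the largest one.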