Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection

On 2023/12/5 01:48, Nhat Pham wrote:
> On Mon, Dec 4, 2023 at 12:30 AM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
>>
>> On 2023/12/1 04:35, Johannes Weiner wrote:
>>> On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
>>>> On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>>>>>
>>>>> On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
>>>>>> This patch changes the list_lru interface so that the caller must
>>>>>> explicitly specify the NUMA node and memcg when adding and removing
>>>>>> objects. The old list_lru_add() and list_lru_del() are renamed to
>>>>>> list_lru_add_obj() and list_lru_del_obj(), respectively.
>>>>>
>>>>> Wouldn't it be better to add list_lru_add_memcg() and
>>>>> list_lru_del_memcg() and have:
>>>>>
>>>>> +bool list_lru_del(struct list_lru *lru, struct list_head *item)
>>>>> +{
>>>>> +       int nid = page_to_nid(virt_to_page(item));
>>>>> +       struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
>>>>> +               mem_cgroup_from_slab_obj(item) : NULL;
>>>>> +
>>>>> +       return list_lru_del_memcg(lru, item, nid, memcg);
>>>>> +}
>>>>>
>>>>> Seems like _most_ callers will want the original versions and only
>>>>> a few will want the explicit memcg/nid versions.  No?
>>>>>
>>>>
>>>> I actually did something along that line in earlier iterations of this
>>>> patch series (albeit with poorer naming - __list_lru_add() instead of
>>>> list_lru_add_memcg()). The consensus after some back and forth was
>>>> that the original list_lru_add() was not a very good design (the
>>>> better one was this new version that allows for explicit numa/memcg
>>>> selection). So I agreed to fix it everywhere as a prep patch.
>>>>
>>>> I don't have strong opinions here to be completely honest, but I do
>>>> think this new API makes more sense (at the cost of quite a bit of
>>>> elbow grease to fix every callsite, plus the extra reviewing).
>>>
>>> Maybe I can shed some light since I was pushing for doing it this way.
>>>
>>> The quiet assumption that 'struct list_head *item' is (embedded in) a
>>> slab object that is also charged to a cgroup is a bit much, given that
>>> nothing in the name or documentation of the function points to that.
>>>
>>> It bit us in the THP shrinker where that list head is embedded in a
>>> tail page (virt_to_page(page) is fun to debug). And it caused some
>>> confusion in this case as well, where the zswap entry is a slab object
>>> but not charged (the entry descriptor itself is not attractive for
>>> cgroup accounting; only the backing memory it points to is).
>>
>> Hi,
>>
>> I have a question, maybe I missed something since I haven't read all
>> the earlier versions.
>>
>> IIUC, the problem here is that the "zswap_entry" has a different memcg
>> and node than the "page", so I wonder if we can just charge the
>> "zswap_entry" to the same memcg as the "page".
>>
>> Like we can do these when allocating the "zswap_entry":
>>
>>         old_memcg = set_active_memcg(memcg);
>>         entry = kmem_cache_alloc_lru(zswap_entry_cache, lru, gfp);
>>         set_active_memcg(old_memcg);
>>
>> The good points are:
>>
>> 1. "zswap_entry" is charged to the memcg of "page", which is more sensible?
>>
>> 2. We can reuse the kmem_cache_alloc_lru() interface, which makes code simpler
>>    since we don't need to manage list_lru_memcg by ourselves.
>>
>> 3. Maybe the new list_lru_add() and list_lru_del() are not needed anymore,
>>    since the "zswap_entry" would have the same memcg and node as the "page"?
>>    But I don't know if the THP shrinker still needs them.
>>
>> Thanks!
> 
> That idea was considered in earlier iterations/discussions of the
> patch series as well. Charging things is not free - there is an
> overhead associated with it, which is why we are usually selective
> about what to charge. We were not super keen to charge zswap_entry
> just to work around the list_lru restriction. Might as well pay the
> price of extending the list_lru interface now.
> 
> If in the future, not charging the zswap entry causes a separate
> isolation issue, we could revisit this decision and charge it.
> Otherwise, IMHO we should just stick with this for now.
> 

Ok, I get it. Thanks much for your clear explanation!




