Re: [PATCH] mm/vmscan: respect cpuset policy during page demotion

Feng Tang <feng.tang@xxxxxxxxx> writes:

> On Thu, Oct 27, 2022 at 01:57:52AM +0800, Yang Shi wrote:
>> On Wed, Oct 26, 2022 at 8:59 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> [...]
>> > > > This all can get quite expensive, so the primary question is: does the
>> > > > existing behavior generate any real issues, or is this more of a
>> > > > correctness exercise? I mean, it certainly is not great to demote to an
>> > > > incompatible NUMA node, but are there any reasonable configurations where
>> > > > the demotion target node is explicitly excluded from the memory
>> > > > policy/cpuset?
>> > >
>> > > We haven't got a customer report on this, but quite a few customers
>> > > use cpuset to bind specific memory nodes to a Docker container (you've
>> > > helped us solve an OOM issue in such cases), so I think it's practical
>> > > to respect the cpuset semantics as much as we can.
>> >
>> > Yes, it is definitely better to respect cpusets and all local memory
>> > policies. There is no dispute there. The thing is whether this is really
>> > worth it. How often would cpusets (or policies in general) go actively
>> > against demotion nodes (i.e. exclude those nodes from their allowed node
>> > mask)?
>> >
>> > I can imagine workloads which wouldn't like to get their memory demoted
>> > for some reason, but wouldn't it be more practical to state that
>> > explicitly (e.g. via prctl) rather than configuring cpusets/memory
>> > policies explicitly?
>> >
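(For illustration, an explicit opt-out along these lines might look like
the sketch below; PR_SET_NO_DEMOTION is purely hypothetical (no such
prctl option exists) and the numeric value is made up.)

	#include <sys/prctl.h>
	#include <stdio.h>

	/* Hypothetical option: name and value are made up for
	 * illustration; an unknown option just fails with EINVAL. */
	#define PR_SET_NO_DEMOTION	65

	int main(void)
	{
		/* Ask the kernel never to demote this task's pages. */
		if (prctl(PR_SET_NO_DEMOTION, 1, 0, 0, 0) < 0)
			perror("prctl(PR_SET_NO_DEMOTION)");
		return 0;
	}
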
>> > > Your concern about the cost makes sense! Some raw ideas are:
>> > > * if shrink_folio_list() is called by kswapd, the folios come from
>> > >   the same per-memcg lruvec, so only one check is enough
>> > > * if not from kswapd, e.g. when called from madvise or DAMON code, we
>> > >   can keep a cached memcg, and if the next folio's memcg is the same
>> > >   as the cached one, we reuse its result. And due to locality, the
>> > >   real check is rarely performed (a sketch of this idea follows below).
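
A minimal sketch of that caching idea, with a made-up helper
(do_real_policy_check() is hypothetical), locking ignored for brevity,
and folio_memcg() only being meaningful with CONFIG_MEMCG:

	/*
	 * Sketch only: remember the last memcg seen and its verdict so
	 * consecutive folios from the same memcg skip the expensive
	 * policy walk.  Not safe against concurrent callers as written.
	 */
	static bool memcg_allows_demotion(struct folio *folio)
	{
		static struct mem_cgroup *cached_memcg;
		static bool cached_allowed;
		struct mem_cgroup *memcg = folio_memcg(folio);

		if (memcg && memcg == cached_memcg)
			return cached_allowed;	/* cache hit: reuse verdict */

		cached_memcg = memcg;
		cached_allowed = do_real_policy_check(folio);	/* hypothetical */
		return cached_allowed;
	}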
>> >
>> > memcg is not the expensive part of the thing. You need to get from page
>> > -> all vmas::vm_policy -> mm -> task::mempolicy
>> 
>> Yeah, on the same page with Michal. Figuring out the mempolicy from a
>> page seems quite expensive, and correctness can't be guaranteed since
>> the mempolicy could be set per-thread, and the mm->owner lookup depends
>> on CONFIG_MEMCG, so it doesn't work for !CONFIG_MEMCG.
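
Concretely, mm_struct::owner only exists under CONFIG_MEMCG, so any
fallback through it has to be guarded, roughly:

	struct task_struct *task = NULL;

#ifdef CONFIG_MEMCG
	/* mm->owner is only defined when CONFIG_MEMCG is enabled */
	if (vma->vm_mm)
		task = vma->vm_mm->owner;
#endif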
>
> Yes, you are right. Our "working" pseudocode for the memory policy check
> looks like what Michal mentioned; it can't cover all cases, but it tries
> to enforce the policy whenever possible:
>
> static bool __check_mpol_demotion(struct folio *folio, struct vm_area_struct *vma,
> 		unsigned long addr, void *arg)
> {
> 	bool *skip_demotion = arg;
> 	struct mempolicy *mpol;
> 	int nid, dnid;
> 	bool ret = true;
>
> 	mpol = __get_vma_policy(vma, addr);
> 	if (!mpol) {
> 		struct task_struct *task;

                task = NULL;	/* otherwise 'task' is used uninitialized below */

> 		if (vma->vm_mm)
> 			task = vma->vm_mm->owner;
>
> 		if (task) {
> 			mpol = get_task_policy(task);
> 			if (mpol)
> 				mpol_get(mpol);
> 		}
> 	}
>
> 	if (!mpol)
> 		return ret;
>
> 	if (mpol->mode != MPOL_BIND)
> 		goto put_exit;
>
> 	nid = folio_nid(folio);
> 	dnid = next_demotion_node(nid);
> 	if (!node_isset(dnid, mpol->nodes)) {
> 		*skip_demotion = true;
> 		ret = false;
> 	}

I think that you need to check against a node mask instead.  Even if
!node_isset(dnid, mpol->nodes), you may still be able to demote to
another node in the node mask.
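
Roughly something like the sketch below; it assumes the
node_get_allowed_targets() helper from the memory tiering series, which
fills in all allowed demotion targets for a node, and it omits locking:

	nodemask_t targets = NODE_MASK_NONE;

	nid = folio_nid(folio);
	/* all allowed demotion targets, not just the next one */
	node_get_allowed_targets(NODE_DATA(nid), &targets);
	nodes_and(targets, targets, mpol->nodes);
	if (nodes_empty(targets)) {
		/* no allowed target intersects the policy's nodemask */
		*skip_demotion = true;
		ret = false;
	}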

Best Regards,
Huang, Ying

>
> put_exit:
> 	mpol_put(mpol);
> 	return ret;
> }
> 	
> static unsigned int shrink_page_list(struct list_head *page_list, ...)
> {
> 	...
>
> 	bool skip_demotion = false;
> 	struct rmap_walk_control rwc = {
> 		.arg = &skip_demotion,
> 		.rmap_one = __check_mpol_demotion,
> 	};
>
> 	/* memory policy check */
> 	rmap_walk(folio, &rwc);
> 	if (skip_demotion)
> 		goto keep_locked;
> }
>
> And there seems to be no simple solution for getting the memory
> policy from a page.
>
> Thanks,
> Feng
>
>> >
>> > --
>> > Michal Hocko
>> > SUSE Labs
>> >
>> 



