Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'

On Thu, May 19, 2022 at 1:51 AM Vaibhav Jain <vaibhav@xxxxxxxxxxxxx> wrote:
>
> Hi,
>
> Thanks for looking into this patch,
>
> Yosry Ahmed <yosryahmed@xxxxxxxxxx> writes:
>
> > On Wed, May 18, 2022 at 3:38 PM Vaibhav Jain <vaibhav@xxxxxxxxxxxxx> wrote:
> >>
> >> [1] provides a way for user-space to trigger proactive reclaim by introducing
> >> a write-only memcg file 'memory.reclaim'. However, reclaim stats like the
> >> number of pages scanned and reclaimed are still not directly available to
> >> user-space.
> >>
> >> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> >> readable, returning the number of pages scanned / reclaimed during the
> >> reclaim process from the 'struct vmpressure' associated with each memcg.
> >> This should let user-space assess how successful proactive reclaim
> >> triggered via 'memory.reclaim' was.
> >
> > Isn't this a racy read? struct vmpressure can be changed between the
> > write and read by other reclaim operations, right?
> Reads/writes of vmpr stats are always done under vmpr->sr_lock,
> which is also the case for this patch. So I am not sure how the read
> is racy?

I didn't mean that you can read the value while it is being changed. I
meant that between writing to memory.reclaim and reading from it,
another reclaim operation could modify the memcg's vmpressure. Consider a
sequence like this:
1) Write to memory.reclaim
2) Kernel coincidentally runs reclaim on that memcg
3) Read from memory.reclaim

The result would be that you are reading the stats of another reclaim
operation, not the one invoked by writing to memory.reclaim.
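
To make the window concrete, here is a minimal user-space illustration
(the cgroup path is hypothetical; adjust for your hierarchy). Anything
the kernel reclaims from this memcg between the write() and the read()
updates the same vmpressure counters that the read() then reports:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical cgroup path; adjust for your hierarchy. */
		int fd = open("/sys/fs/cgroup/test/memory.reclaim", O_RDWR);
		char buf[64];
		ssize_t n;

		if (fd < 0)
			return 1;

		/* 1) Trigger proactive reclaim. */
		if (write(fd, "10M", strlen("10M")) < 0)
			perror("write");

		/*
		 * 2) The race window: kernel-initiated reclaim on this
		 *    memcg can run here and update the same vmpressure
		 *    counters.
		 */

		/* 3) Read back the stats: they may describe the other
		 *    reclaim, not the one we just invoked. */
		lseek(fd, 0, SEEK_SET);
		n = read(fd, buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = '\0';
			printf("%s", buf);
		}

		close(fd);
		return 0;
	}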

>
> >
> > I was actually planning to send a patch that does not update
> > vmpressure for user-controlled reclaim, similar to how PSI is handled.
> >
> Ok, I am not sure I am correctly inferring how that would be
> useful. Can you please provide some more context?

IIUC vmpressure is used as an indicator for memory pressure. In my
opinion it makes sense if vmpressure is not changed on reclaim
operations directly invoked by the user, as they are not directly
related to whether the system is under memory pressure or not. PSI is
handled in a similar way. See commit e22c6ed90aa9 ("mm: memcontrol:
don't count limit-setting reclaim as memory pressure").

>
> The primary motivation for this patch was to expose to user-space the
> vmpressure stats that are available with cgroup-v1 but not with
> cgroup-v2, AFAIK.

If the main goal is exposing vmpressure itself, regardless of proactive
reclaim, that is a separate discussion. AFAIK vmpressure is not popular
anymore, and PSI is the more recent and better indicator.

>
> > The interface currently returns -EBUSY if the entire amount was not
> > reclaimed, so isn't this enough to figure out if it was successful or
> > not?
> Userspace may very well want to know the amount of memory that was
> partially reclaimed even though the write to "memory.reclaim" returned
> '-EBUSY'. This feedback can be useful for implementing a retry
> loop.
>
> > If not, we can store the scanned / reclaimed counts of the last
> > memory.reclaim invocation for the sole purpose of memory.reclaim
> > reads.
> Sure sounds reasonable to me.
>
> > Maybe it is actually more intuitive to users to just read back the
> > amount of memory reclaimed, in a format similar to the one written?
> >
> > i.e.
> > echo "10M" > memory.reclaim
> > cat memory.reclaim
> > 9M
> >
> Agree, I will address that in v2.
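
Sounds good. For reference, a retry loop on top of that v2 interface
could look roughly like the sketch below (the cgroup path is
hypothetical, and it assumes a read returns the amount reclaimed by the
last invocation):

	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	/* Hypothetical cgroup path; adjust for your hierarchy. */
	#define RECLAIM_FILE "/sys/fs/cgroup/test/memory.reclaim"

	int main(void)
	{
		const char *want = "10M";
		int tries;

		for (tries = 0; tries < 5; tries++) {
			int fd = open(RECLAIM_FILE, O_RDWR);
			char buf[64];
			ssize_t n;

			if (fd < 0)
				return 1;

			if (write(fd, want, strlen(want)) >= 0) {
				close(fd);
				return 0;	/* full amount reclaimed */
			}
			if (errno != EBUSY) {
				close(fd);
				return 1;
			}

			/* -EBUSY: only part of the request was reclaimed;
			 * read back how much before deciding to retry. */
			n = read(fd, buf, sizeof(buf) - 1);
			if (n > 0) {
				buf[n] = '\0';
				fprintf(stderr, "partial reclaim: %s", buf);
				/* A real loop would subtract the reclaimed
				 * amount from 'want' before retrying. */
			}
			close(fd);
		}
		return 1;
	}
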
>
> <snip>
>
> --
> Cheers
> ~ Vaibhav



