Re: [RFC PATCH] cgroup: introduce dynamic protection for memcg

On Fri, Apr 1, 2022 at 3:26 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> On Thu, Mar 31, 2022 at 4:35 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Thu 31-03-22 19:18:58, Zhaoyang Huang wrote:
> > > On Thu, Mar 31, 2022 at 5:01 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> > > >
> > > > On Thu 31-03-22 16:00:56, zhaoyang.huang wrote:
> > > > > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > > > >
> > > > > For some kinds of memcg, usage varies greatly across scenarios. For
> > > > > example, a multimedia app could range from 50MB to 500MB of usage,
> > > > > the growth generated by loading a special algorithm into its virtual
> > > > > address space, which makes it hard to protect the expanded usage
> > > > > without userspace interaction.
> > > >
> > > > Do I get it correctly that the concern you have is that you do not know
> > > > how much memory your workload will need because that depends on some
> > > > parameters?
> > > Right. For example, a camera app will expand its usage from 50MB to
> > > 500MB when launching a special function (face beautification, etc.,
> > > which needs a special algorithm).
> > > >
> > > > > Furthermore, a fixed
> > > > > memory.low works somewhat against its role of soft protection, as
> > > > > it responds to any system memory pressure in the same way.
> > > >
> > > > Could you be more specific about this as well?
> > > As in the camera case above, if we set memory.low to 200MB to keep the
> > > app running smoothly, the system will experience high memory pressure
> > > when another high-load app is launched at the same time. I would like
> > > the camera app to be reclaimed under this scenario.
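> > > For reference, that fixed protection today is just a one-shot, static
> > > write to the cgroup file, roughly like this (the cgroup path here is
> > > made up for illustration):
> > >
> > >     #include <fcntl.h>
> > >     #include <stdio.h>
> > >     #include <unistd.h>
> > >
> > >     int main(void)
> > >     {
> > >         /* hypothetical cgroup path for the camera app */
> > >         int fd = open("/sys/fs/cgroup/camera/memory.low", O_WRONLY);
> > >
> > >         if (fd < 0)
> > >             return 1;
> > >         dprintf(fd, "%llu", 200ULL << 20); /* 200MB, fixed until rewritten */
> > >         close(fd);
> > >         return 0;
> > >     }
> > >
> > > The value stays at 200MB no matter what the rest of the system is
> > > doing, which is exactly the problem.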
> >
> > OK, so you effectively want to keep the memory protection when there is
> > "normal" memory pressure but want to relax the protection in other
> > high-memory-utilization situations?
> >
> > How exactly do you tell the difference between steady memory pressure
> > (say, streaming IO on the page cache) and a "high-load app launched"?
> > Should you reduce the protection in the streaming IO situation as well?
>
> IIUC what you are implementing here is a "memory allowance boost"
> feature and it seems you are implementing it entirely inside the
> kernel, while only userspace knows when to apply this boost (say at
> app launch time). This does not make sense to me.
I am wondering if it could be more helpful to apply this patch to
background services (system_server, etc.) than to apps, since those
services are persistent in the system.
>
> >
> > [...]
> > > > One very important thing that I am missing here is the overall
> > > > objective of this tuning. From the above it seems that you want to
> > > > (ab)use memory.low to protect some portion of the charged memory, and
> > > > that the protection shrinks over time depending on the global PSI
> > > > metric. But why is this a good thing?
> > > 'Good' means it meets my original goal of keeping the usage protected
> > > for a period of time while still responding to the system's memory
> > > pressure. For an Android-like system, memory is almost always in a
> > > tight state no matter how much RAM it has. What we need from memcg is
> > > more than control and grouping; we need it to be more responsive to
> > > the system's load and able to sacrifice its usage under certain
> > > criteria.
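> > > A rough sketch of the shape I have in mind (the formula and numbers
> > > below are purely illustrative, not the exact math in the patch):
> > >
> > >     /*
> > >      * Illustrative only: effective low drops as global memory PSI
> > >      * rises, and decays linearly to zero over decay_secs since the
> > >      * protection was last granted.
> > >      */
> > >     static unsigned long long dynamic_low(unsigned long long low,
> > >                                           unsigned int psi_pct, /* 0..100 */
> > >                                           unsigned long long age_secs,
> > >                                           unsigned long long decay_secs)
> > >     {
> > >         unsigned long long scaled = low - low * psi_pct / 100;
> > >
> > >         if (age_secs >= decay_secs)
> > >             return 0;
> > >         return scaled - scaled * age_secs / decay_secs;
> > >     }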
> >
> > Why are the existing tools/APIs insufficient for that? You can watch
> > for both global and memcg memory pressure, including PSI metrics, and
> > update the limits dynamically. Why is it necessary to put such logic
> > into the kernel?
>
> I had exactly the same thought while reading through this.
> In Android you would probably need to implement a userspace service
> which would temporarily relax the memcg limits when required, monitor
> PSI levels and adjust the limits accordingly.
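> A minimal sketch of such a monitor, using the documented PSI trigger
> interface (the threshold, window, and cgroup path below are made-up
> values for illustration):
>
>     #include <fcntl.h>
>     #include <poll.h>
>     #include <stdio.h>
>     #include <string.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         /* wake up when memory stalls exceed 150ms within any 1s window */
>         const char trig[] = "some 150000 1000000";
>         struct pollfd fds;
>
>         fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
>         if (fds.fd < 0)
>             return 1;
>         if (write(fds.fd, trig, strlen(trig) + 1) < 0)
>             return 1;
>         fds.events = POLLPRI;
>
>         while (1) {
>             if (poll(&fds, 1, -1) < 0)
>                 return 1;
>             if (fds.revents & POLLERR)
>                 return 1; /* trigger was torn down */
>             if (fds.revents & POLLPRI) {
>                 /* pressure crossed the threshold: relax the app's
>                  * protection by rewriting its memory.low */
>                 int lowfd = open("/sys/fs/cgroup/camera/memory.low",
>                                  O_WRONLY);
>                 if (lowfd >= 0) {
>                     dprintf(lowfd, "%llu", 50ULL << 20); /* drop to 50MB */
>                     close(lowfd);
>                 }
>             }
>         }
>     }
>
> A real service would of course also restore the limit once pressure
> subsides.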
As in my response to Michal's comment, userspace monitors introduce
latency. Take LMKD as an example: after the first wakeup it is actually
driven by PSI_POLL_PERIOD_XXX_MS polling, which means PSI_WINDOW_SIZE_MS
could be too large to rely on. IMHO, with regard to response time, LMKD
is less efficient than the lmk driver, but stronger on strategy. I would
like to test this patch under a real Android workload and report back in
the next version.
>
> >
> > --
> > Michal Hocko
> > SUSE Labs



