Re: [PATCH V2] cgroup/rstat: Avoid thundering herd problem by kswapd across NUMA nodes

On 25/06/2024 11.28, Yosry Ahmed wrote:
On Mon, Jun 24, 2024 at 5:24 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:

On Mon, Jun 24, 2024 at 03:21:22PM GMT, Yosry Ahmed wrote:
On Mon, Jun 24, 2024 at 3:17 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:

On Mon, Jun 24, 2024 at 02:43:02PM GMT, Yosry Ahmed wrote:
[...]

There is also
a heuristic in zswap that may write back more (or fewer) pages than it
should to the swap device if the stats are significantly stale.


Is this the ratio of MEMCG_ZSWAP_B and MEMCG_ZSWAPPED in
zswap_shrinker_count()? There is already a target memcg flush in that
function, and I don't expect a root memcg flush from there.

I was thinking of the generic approach I suggested, where we can avoid
contending on the lock if the cgroup is a descendant of the cgroup
being flushed, regardless of whether or not it's the root memcg. I
think this would be more beneficial than just focusing on root
flushes.

Yes, I agree with this, but what about skipping the flush in this case?
Are you ok with that?

Sorry if I am confused, but IIUC this patch affects all root flushes,
even for userspace reads, right? In this case I think it's not okay to
skip the flush without waiting for the ongoing flush.

So, we differentiate between userspace and in-kernel users. For
userspace, we should not skip the flush, and for in-kernel users, we can
skip it if the flushing memcg is an ancestor of the given memcg. Is that
what you are saying?

Basically, I prefer that we don't skip flushing at all and keep
userspace and in-kernel users the same. We can use completions to make
other overlapping flushers sleep instead of spinning on the lock.


I think there are good reasons for skipping flushes for userspace when reading these stats. More below.

I'm looking at kernel code to spot cases where the flush MUST be
completed before returning.  There are clearly cases where we don't need
100% accurate stats, as evidenced by mem_cgroup_flush_stats_ratelimited()
and mem_cgroup_flush_stats(), which use memcg_vmstats_needs_flush().
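
For reference, this is roughly what those checks look like (paraphrased
from mm/memcontrol.c from memory, so names and constants may be slightly
off):

static bool memcg_vmstats_needs_flush(struct memcg_vmstats *vmstats)
{
        /* Only flush once enough updates have accumulated to matter */
        return atomic64_read(&vmstats->stats_updates) >
                MEMCG_CHARGE_BATCH * num_online_cpus();
}

void mem_cgroup_flush_stats(struct mem_cgroup *memcg)
{
        if (mem_cgroup_disabled())
                return;

        if (!memcg)
                memcg = root_mem_cgroup;

        /* Skip the rstat flush entirely when the error is small enough */
        if (memcg_vmstats_needs_flush(memcg->vmstats))
                do_flush_stats(memcg);
}

void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
{
        /* Only flush if the periodic flusher has fallen clearly behind */
        if (time_after64(jiffies_64, READ_ONCE(flush_last_time) + 2UL * FLUSH_TIME))
                mem_cgroup_flush_stats(memcg);
}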

The cgroup_rstat_exit() call seems to depend on cgroup_rstat_flush() being strict/accurate, because it needs to free the percpu resources afterwards.
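
From kernel/cgroup/rstat.c (again paraphrased from memory, details may
differ), the flush has to be complete before the per-CPU data is freed:

void cgroup_rstat_exit(struct cgroup *cgrp)
{
        int cpu;

        /* Drain all pending per-CPU updates into the cgroup counters... */
        cgroup_rstat_flush(cgrp);

        /* ...sanity check that nothing is still queued... */
        for_each_possible_cpu(cpu) {
                struct cgroup_rstat_cpu *rstatc = cgroup_rstat_cpu(cgrp, cpu);

                if (WARN_ON_ONCE(rstatc->updated_children != cgrp) ||
                    WARN_ON_ONCE(rstatc->updated_next))
                        return;
        }

        /* ...because the per-CPU rstat structures are freed right after */
        free_percpu(cgrp->rstat_cpu);
        cgrp->rstat_cpu = NULL;
}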

The obj_cgroup_may_zswap() function has a comment that says it needs to get accurate stats for charging.
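
The relevant part of obj_cgroup_may_zswap() looks roughly like this
(paraphrased from mm/memcontrol.c from memory; exact helper names may
differ between kernel versions):

        /* Walk the ancestors and check each zswap limit against fresh stats */
        for (memcg = original_memcg; !mem_cgroup_is_root(memcg);
             memcg = parent_mem_cgroup(memcg)) {
                unsigned long max = READ_ONCE(memcg->zswap_max);
                unsigned long pages;

                if (max == PAGE_COUNTER_MAX)
                        continue;
                if (max == 0) {
                        ret = false;
                        break;
                }

                /*
                 * mem_cgroup_flush_stats() ignores small changes. Flush
                 * directly to get accurate stats for charging.
                 */
                do_flush_stats(memcg);
                pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
                if (pages < max)
                        continue;
                ret = false;
                break;
        }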

These were the two cases I found; do you know of others?


A proof of concept is basically something like:

void cgroup_rstat_flush(struct cgroup *cgrp)
{
     if (cgroup_is_descendant(cgrp, READ_ONCE(cgroup_under_flush))) {
         wait_for_completion_interruptible(&cgroup_under_flush->completion);
         return;
     }

This feels like what we would achieve by changing this to a mutex.


     __cgroup_rstat_lock(cgrp, -1);
     reinit_completion(&cgrp->completion);
     /* Any overlapping flush requests after this write will not spin
      * on the lock */
     WRITE_ONCE(cgroup_under_flush, cgrp);

     cgroup_rstat_flush_locked(cgrp);
     complete_all(&cgrp->completion);
     __cgroup_rstat_unlock(cgrp, -1);
}

There may be missing barriers or chances to reduce the window between
__cgroup_rstat_lock and WRITE_ONCE(), but that's what I have in mind.
I think it's not too complicated, but we need to check if it fixes the
problem.

If this is not preferable, then yeah, let's at least keep the
userspace behavior intact. This makes sure we don't affect userspace
negatively, and we can change it later as we please.

I don't think userspace reading these stats needs to be 100% accurate.
We are only reading io.stat, memory.stat and cpu.stat every 53 seconds.
Reading cpu.stat prints stats divided by NSEC_PER_USEC (1000), so the
finest precision is already dropped on output.

If userspace were reading these very often, it would be killing the
system anyway, as the flush disables IRQs.

On my prod system a flush of the root cgroup can take 35 ms, which is not
good, but this level of inaccuracy should not matter for userspace.

Please educate me on why we need accurate userspace stats?


--Jesper



