Re: MGLRU premature memcg OOM on slow writes

On Fri, Mar 15, 2024 at 10:38:31AM +0800, Yafang Shao wrote:
> On Fri, Mar 15, 2024 at 6:23 AM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
> > I'm surprised to see there was 0 pages under writeback:
> >   [Wed Mar 13 11:16:48 2024] total_writeback 0
> > What's your dirty limit?
> 
> The background dirty threshold is 2G, and the dirty threshold is 4G.
> 
>     sysctl -w vm.dirty_background_bytes=$((1024 * 1024 * 1024 * 2))
>     sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024 * 4))
> 
> >
> > It's unfortunate that the mainline has no per-memcg dirty limit. (We
> > do at Google.)
> 
> Per-memcg dirty limit is a useful feature. We also support it in our
> local kernel, but we didn't enable it for this test case.
> It is unclear why the memcg maintainers insist on rejecting the
> per-memcg dirty limit :(

I don't think that assessment is fair. It's just that nobody has
seriously proposed it (at least not that I remember) since
cgroup-aware writeback was merged in 2015.

We run millions of machines with different workloads, memory sizes,
and IO devices, and don't feel the need to tune the settings for the
global dirty limits away from the defaults.

Cgroup-aware writeback then allots those allowances to each container
in proportion to its observed writeback speed and available memory. We
set IO rate and memory limits per container, and it adapts as
necessary.

If you have an actual usecase, I'm more than willing to hear you
out. I'm sure that the other maintainers feel the same.

If you're proposing it as a workaround for cgroup1 being
architecturally unable to implement proper writeback cache management,
then it's a more difficult argument. That's one of the big reasons why
cgroup2 exists after all.

> > > As of now, it appears that the most effective solution to address this
> > > issue is to revert the commit 14aa8b2d5c2e. Regarding this commit
> > > 14aa8b2d5c2e,  its original intention was to eliminate potential SSD
> > > wearout, although there's no concrete data available on how it might
> > > impact SSD longevity. If the concern about SSD wearout is purely
> > > theoretical, it might be reasonable to consider reverting this commit.
> >
> > The SSD wearout problem was real -- it wasn't really due to
> > wakeup_flusher_threads() itself; rather, the original MGLRU code called
> > the function improperly. It needs to be called under more restricted
> > conditions so that it doesn't cause the SSD wearout problem again.
> > However, IMO, wakeup_flusher_threads() is just another bandaid trying
> > to work around a more fundamental problem. There is no guarantee that
> > the flusher will target the dirty pages in the memcg under reclaim,
> > right?
> 
> Right, it is a system-wide flusher.

Is it possible it was woken up just too frequently?

Conventional reclaim wakes it based on dirty pages it actually
observes coming off the LRU. I'm not super familiar with MGLRU, but it
looks like it woke it on every generational bump? That might indeed be
too frequent, and it isn't tied to the state of the writeback cache.
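
For reference, the non-MGLRU path only kicks the flushers when an
entire batch of folios it took off the LRU turns out to be dirty but
not yet queued for IO. Roughly this, paraphrased rather than the
literal mm/vmscan.c code:

    /* Paraphrase of the conventional shrink_inactive_list() check;
     * simplified, not the exact upstream code. */
    if (stat.nr_unqueued_dirty == nr_taken) {
        /*
         * Everything in this batch was dirty and not queued for
         * IO, i.e. the flushers are clearly behind, so wake them
         * system-wide.
         */
        wakeup_flusher_threads(WB_REASON_VMSCAN);
    }

So it only fires when reclaim has direct evidence that writeback is
behind, not on a clock or a generation counter.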

We're monitoring write rates quite closely due to wearout concerns as
well, especially because we use disk swap too. This is the first time
I'm hearing about reclaim-driven wakeups being a concern. (The direct
writepage calls were a huge problem, but not waking the flushers.)

Frankly, I don't think the issue is fixable without bringing the
wakeup back in some form. Even if you had per-cgroup dirty limits. As
soon as you have non-zero dirty pages, you can produce allocation
patterns that drive reclaim into them before background writeback
kicks in.

If reclaim doesn't wake the flushers and only waits for writeback, the
premature OOM margin is the size of the background limit minus one:
with your 2G background threshold, a memcg smaller than that can fill
up completely with dirty pages while the flushers stay idle, leaving
reclaim with nothing under writeback to wait for.
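
Something like the following, very roughly; the sizes, the filename
and the cgroup setup are made up for illustration, and it assumes the
group's memory limit (say 1G) sits below the 2G background threshold
from this thread:

    /*
     * Rough sketch: the sizes, the filename and the cgroup setup are
     * made up. Assumes it runs inside a memcg whose limit (say 1G) is
     * below the 2G vm.dirty_background_bytes used in this thread.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define DIRTY_BYTES (900UL << 20)  /* buffered writes, left dirty */
    #define ANON_BYTES  (300UL << 20)  /* pushes the group over its limit */
    #define CHUNK       (1UL << 20)

    int main(void)
    {
        char *buf = malloc(CHUNK);
        int fd = open("dirty-file", O_CREAT | O_WRONLY | O_TRUNC, 0644);
        size_t i;

        if (fd < 0 || !buf) {
            perror("setup");
            return 1;
        }
        memset(buf, 0xaa, CHUNK);

        /*
         * Fill the group with dirty page cache. Global dirty memory
         * stays below the background threshold, so the flusher
         * threads never start.
         */
        for (i = 0; i < DIRTY_BYTES; i += CHUNK) {
            if (write(fd, buf, CHUNK) != CHUNK) {
                perror("write");
                return 1;
            }
        }

        /*
         * Now allocate anon memory. Reclaim in the group finds only
         * dirty pages that nobody has queued for IO; if it doesn't
         * wake the flushers, there is nothing under writeback to
         * wait for, and the group OOMs well before either dirty
         * limit is reached.
         */
        char *anon = malloc(ANON_BYTES);
        if (!anon)
            return 1;
        memset(anon, 0xbb, ANON_BYTES);

        pause();
        return 0;
    }

Run inside such a group, the buffered writes never trip the global
background limit, so by the time the anon allocation forces reclaim,
nothing has been queued for IO and nothing ever will be.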

Yes, cgroup1 and cgroup2 react differently to seeing pages under
writeback: cgroup1 does wait_on_page_writeback(); cgroup2 samples
batches of pages and throttles at a higher level. But both of them
need the flushers woken, or there is nothing to wait for.
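
Very loosely, ignoring the extra conditions both sides have (a
paraphrase, not the actual reclaim code):

    /* Loose paraphrase of the reclaim-side writeback handling. */
    if (PageWriteback(page)) {
        if (!writeback_throttling_sane(sc))
            /* cgroup1: block on this one page right here */
            wait_on_page_writeback(page);
        else
            /* cgroup2: just count it; reclaim throttles on the
             * batch counts at a higher level afterwards */
            stat->nr_writeback += nr_pages;
    }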

Unless you want to wait for dirty expiration :)



