On 2 May 2023 14:20:54 -0700 Douglas Anderson <dianders@xxxxxxxxxxxx>
> On Sun, Apr 30, 2023 at 1:53 AM Hillf Danton <hdanton@xxxxxxxx> wrote:
> > On 28 Apr 2023 13:54:38 -0700 Douglas Anderson <dianders@xxxxxxxxxxxx>
> > > The MIGRATE_SYNC_LIGHT mode is intended to block for things that will
> > > finish quickly but not for things that will take a long time. Exactly
> > > how long is too long is not well defined, but waits of tens of
> > > milliseconds are likely non-ideal.
> > >
> > > When putting a Chromebook under memory pressure (opening over 90 tabs
> > > on a 4GB machine) it was fairly easy to see delays of > 100 ms while
> > > waiting for some locks in the kcompactd code path. While the laptop
> > > wasn't amazingly usable in this state, it was still limping along and
> > > this state isn't something artificial. Sometimes we simply end up with
> > > a lot of memory pressure.
> >
> > Given a stall longer than 100 ms, this cannot be a correct fix if the
> > hardware fails to do more than ten IOs a second.
> >
> > OTOH, given that some pages are reclaimed for compaction to make forward
> > progress before kswapd wakes kcompactd up, this cannot be a fix without
> > spotting the cause of the stall.
>
> Right, the system is in pretty bad shape when this happens, and it's
> not very effective at doing IO or much of anything else because it's
> under bad memory pressure.

Based on the info in another reply [1]:

| I put some more traces in and reproduced it again. I saw something
| that looked like this:
|
| 1. balance_pgdat() called wakeup_kcompactd() with order=10 and that
|    caused us to get all the way to the end and wake up kcompactd (there
|    were previous calls to wakeup_kcompactd() that returned early).
|
| 2. kcompactd started and completed kcompactd_do_work() without blocking.
|
| 3. kcompactd called proactive_compact_node() and there blocked for
|    ~92 ms in one case, ~120 ms in another case, and ~131 ms in another.

I see fragmentation, given order=10 and proactive_compact_node(). Can
you specify the evidence of bad memory pressure?

[1] https://lore.kernel.org/lkml/CAD=FV=V8m-mpJsFntCciqtq7xnvhmnvPdTvxNuBGBT3-cDdabQ@xxxxxxxxxxxxxx/

> I guess my first thought is that, when this happens, a process holding
> the lock gets preempted and doesn't get scheduled back in for a while.
> That _should_ be possible, right? In the case where I'm reproducing
> this, all the CPUs would be super busy madly trying to compress /
> decompress zram, so it doesn't surprise me that a process could get
> context switched out for a while.

Could such a switchout turn the "not worth waiting for I/O" assumption
below upside down?

	/*
	 * In "light" mode, we can wait for transient locks (eg
	 * inserting a page into the page table), but it's not
	 * worth waiting for I/O.
	 */
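
For reference, a condensed sketch of where that comment sits in
migrate_folio_unmap() in mm/migrate.c with the proposed patch applied
(simplified: error handling and other early-exit guards are omitted, so
treat it as an illustration of the trylock-then-block pattern under
discussion rather than the exact upstream code):

	if (!folio_trylock(src)) {
		/* Async mode never blocks on the folio lock. */
		if (mode == MIGRATE_ASYNC)
			goto out;

		/*
		 * In "light" mode, we can wait for transient locks (eg
		 * inserting a page into the page table), but it's not
		 * worth waiting for I/O.
		 */
		if (mode == MIGRATE_SYNC_LIGHT && !folio_test_uptodate(src))
			goto out;

		/*
		 * Full sync mode (or a SYNC_LIGHT folio that is already
		 * uptodate) blocks here. If the lock holder has been
		 * context switched out, this wait can stretch well past
		 * the "transient" expectation, which is the concern above.
		 */
		folio_lock(src);
	}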
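
And for the trace quoted above, a rough sketch of the kcompactd main
loop in mm/compaction.c (again simplified: the real loop carries extra
state for the wakeup timeout and a proactive-compaction trigger, but
the shape is roughly this):

	while (!kthread_should_stop()) {
		/* Sleep until woken by wakeup_kcompactd() or until timeout. */
		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
				kcompactd_work_requested(pgdat), timeout)) {
			/* Woken up: do the compaction work kswapd asked for. */
			kcompactd_do_work(pgdat);
			continue;
		}

		/*
		 * Timed out with no wakeup: run proactive compaction if
		 * fragmentation warrants it. This is the call that blocked
		 * for ~92-131 ms in the trace above.
		 */
		if (should_proactive_compact_node(pgdat))
			proactive_compact_node(pgdat);
	}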