On Mon, Jan 27, 2020 at 6:49 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Sun 26-01-20 11:53:55, Cong Wang wrote:
> > On Tue, Jan 21, 2020 at 1:00 AM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > >
> > > On Mon 20-01-20 14:48:05, Cong Wang wrote:
> > > > It got stuck somewhere along the call path of mem_cgroup_try_charge(),
> > > > and the trace events of mm_vmscan_lru_shrink_inactive() indicate this
> > > > too:
> > >
> > > So it seems that you are contending on the page lock. It is really
> > > unexpected that the reclaim would take that long though. Please try to
> > > enable more vmscan tracepoints to see where the time is spent.
> >
> > Sorry for the delay. I have been trying to collect more data in one shot.
> >
> > This is a typical round of the loop after enabling all vmscan tracepoints:
> >
> >           <...>-455450 [007] .... 4048595.842992:
> > mm_vmscan_memcg_reclaim_begin: order=0 may_writepage=1
> > gfp_flags=GFP_NOFS|__GFP_HIGHMEM|__GFP_HARDWALL|__GFP_MOVABLE
> > classzone_idx=4
> >           <...>-455450 [007] .... 4048595.843012:
> > mm_vmscan_memcg_reclaim_end: nr_reclaimed=0
>
> This doesn't tell us much though. This reclaim round has taken close to
> no time. See the timestamps.
>
> > The whole trace output is huge (33M), I can provide it on request.
>
> Focus on reclaim rounds that take a long time and see where it gets you.

I reviewed the trace output by eye, and every round I looked at took
little time, though of course I cannot review all of them given the size
of the file. It looks to me like the loop happens in the caller,
something like:

retry:
        mm_vmscan_memcg_reclaim_begin();
        ...
        mm_vmscan_memcg_reclaim_end();
        goto retry;

So I think we should focus on try_charge()?

More interestingly, the margin of that memcg is 0, since usage sits
exactly at the limit:

$ sudo cat /sys/fs/cgroup/memory/system.slice/osqueryd.service/memory.usage_in_bytes
262144000
$ sudo cat /sys/fs/cgroup/memory/system.slice/osqueryd.service/memory.limit_in_bytes
262144000
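To make the suspected loop concrete, here is a condensed sketch of the
retry path in try_charge() (mm/memcontrol.c), written from memory and
trimmed heavily, so not the exact upstream code. The helpers named are
the real ones; the stock draining, memsw accounting, and OOM handling
around them are omitted:

/*
 * Condensed sketch of the charge/reclaim retry loop in try_charge(),
 * not the exact upstream code.
 */
static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
                      unsigned int nr_pages)
{
        int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;    /* 5 upstream */
        struct page_counter *counter;
        unsigned long nr_reclaimed;

retry:
        /* Fast path: the charge fits under the limit. */
        if (page_counter_try_charge(&memcg->memory, nr_pages, &counter))
                return 0;

        /*
         * Over the limit: direct reclaim. Each pass through here emits
         * one mm_vmscan_memcg_reclaim_begin/end pair, which is the
         * pattern repeating in the trace above.
         */
        nr_reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages,
                                                    gfp_mask, true);

        /*
         * With usage_in_bytes pinned exactly at limit_in_bytes,
         * mem_cgroup_margin() returns 0, so this shortcut back to the
         * charge attempt is never taken in our case.
         */
        if (mem_cgroup_margin(memcg) >= nr_pages)
                goto retry;

        /* Retry while reclaim makes progress, then a few more times. */
        if (nr_reclaimed && nr_pages <= (1 << PAGE_ALLOC_COSTLY_ORDER))
                goto retry;
        if (nr_retries--)
                goto retry;

        return -ENOMEM;
}

If that reading is right, a zero margin means every charge attempt falls
through to try_to_free_mem_cgroup_pages(), which would match the
begin/end pairs in the trace.

Thanks!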