On Thu, Mar 26, 2020 at 10:31 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Thu, Mar 26, 2020 at 07:12:05AM -0400, Yafang Shao wrote:
> > PSI gives us a powerful way to analyze memory pressure issues, but we
> > can make it more powerful with the help of tracepoints, kprobes, eBPF,
> > etc. Especially with eBPF we can flexibly get more details of the
> > memory pressure.
> >
> > In order to achieve this goal, a new parameter is added to
> > psi_memstall_{enter, leave}, which indicates the specific type of a
> > memstall. There are ten memstall types for now:
> >   MEMSTALL_KSWAPD
> >   MEMSTALL_RECLAIM_DIRECT
> >   MEMSTALL_RECLAIM_MEMCG
> >   MEMSTALL_RECLAIM_HIGH
> >   MEMSTALL_KCOMPACTD
> >   MEMSTALL_COMPACT
> >   MEMSTALL_WORKINGSET_REFAULT
> >   MEMSTALL_WORKINGSET_THRASHING
> >   MEMSTALL_MEMDELAY
> >   MEMSTALL_SWAPIO
>
> What does this provide over the events tracked in /proc/vmstat?
>

/proc/vmstat only tells us which events occurred, but it can't tell us
how long these events take. Sometimes we really want to know how long an
event takes, and PSI can provide us that data.

For example, in the past when I did performance tuning for a database
service, I observed that latency spikes were correlated with the
workingset_refault counter in /proc/vmstat, and at that time I really
wanted to know the spread of latencies caused by workingset_refault, but
there was no easy way to get it. Now with the newly added
MEMSTALL_WORKINGSET_REFAULT, I can get the latencies caused by
workingset refault (see the rough bpf sketch appended at the end of
this mail).

> Can you elaborate a bit how you are using this information? It's not
> quite clear to me from the example in patch #2.
>

From the traced data in patch #2, we can find that the high latencies of
the user tasks are always memstall type 7, which is
MEMSTALL_WORKINGSET_THRASHING, so we should then look into the details
of the workingset of these user tasks and think about how to improve it -
for example, by reducing the workingset.

BTW, there's an error in the definition of show_psi_memstall_type() in
patch #2 (that's an old version); I will correct it.

To summarize: with the pressure data in /proc/pressure/memory we know
that the system is under memory pressure, and then with the newly added
tracing facility in this patchset we can get the reason for this memory
pressure and think about how to address it. The workflow can be
illustrated as below.

                          REASON        ACTION
                        | compaction  | look into the details of compaction |
    Memory pressure --- | vmscan      | look into the details of vmscan     |
                        | workingset  | look into the details of workingset |
                        | etc         | ...                                 |

Thanks
Yafang
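
Appendix: the rough, untested bpf sketch mentioned above (a libbpf-style
kprobe program). It is only meant to illustrate the idea and makes two
assumptions that may not match the final patchset: that the new
psi_memstall_enter()/psi_memstall_leave() take the memstall type as their
second argument, and that MEMSTALL_WORKINGSET_REFAULT ends up with the
value 6 following the ordering in the cover letter (type 7 being
MEMSTALL_WORKINGSET_THRASHING, as in the trace above). It records a
timestamp when a task enters a workingset-refault stall and prints the
stall duration when it leaves:

// SPDX-License-Identifier: GPL-2.0
/* Sketch only: per-task MEMSTALL_WORKINGSET_REFAULT stall latency
 * via kprobes on psi_memstall_enter()/psi_memstall_leave().
 */
#include <linux/types.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Assumed value, derived from the ordering in the cover letter. */
#define MEMSTALL_WORKINGSET_REFAULT	6

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 10240);
	__type(key, __u32);	/* thread id */
	__type(value, __u64);	/* stall-entry timestamp, ns */
} start SEC(".maps");

SEC("kprobe/psi_memstall_enter")
int BPF_KPROBE(memstall_enter, unsigned long *pflags, int type)
{
	__u32 tid = (__u32)bpf_get_current_pid_tgid();
	__u64 ts;

	/* Only track the memstall type we are interested in. */
	if (type != MEMSTALL_WORKINGSET_REFAULT)
		return 0;

	ts = bpf_ktime_get_ns();
	bpf_map_update_elem(&start, &tid, &ts, BPF_ANY);
	return 0;
}

SEC("kprobe/psi_memstall_leave")
int BPF_KPROBE(memstall_leave, unsigned long *pflags, int type)
{
	__u32 tid = (__u32)bpf_get_current_pid_tgid();
	__u64 *tsp, delta_us;

	if (type != MEMSTALL_WORKINGSET_REFAULT)
		return 0;

	tsp = bpf_map_lookup_elem(&start, &tid);
	if (!tsp)
		return 0;

	delta_us = (bpf_ktime_get_ns() - *tsp) / 1000;
	bpf_printk("refault stall: tid=%u lat=%llu us\n", tid, delta_us);
	bpf_map_delete_elem(&start, &tid);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

The loading/attaching side (libbpf skeleton) is omitted, and in practice
the same thing can be done ad hoc with a bpftrace one-liner that filters
on the type argument and feeds the deltas into a histogram. The point is
just that the type parameter lets us single out one kind of memstall and
build a latency distribution for it, which the /proc/vmstat counters
alone cannot give us.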