On Mon, Jul 22, 2024 at 02:32:03PM GMT, Shakeel Butt wrote:
> On Mon, Jul 22, 2024 at 01:12:35PM GMT, Yosry Ahmed wrote:
> > On Mon, Jul 22, 2024 at 1:02 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> > >
> > > On Fri, Jul 19, 2024 at 09:52:17PM GMT, Yosry Ahmed wrote:
> > > > On Fri, Jul 19, 2024 at 3:48 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> > > > >
> > > > > On Fri, Jul 19, 2024 at 09:54:41AM GMT, Jesper Dangaard Brouer wrote:
> > > > > >
> > > > > > On 19/07/2024 02.40, Shakeel Butt wrote:
> > > > > > > Hi Jesper,
> > > > > > >
> > > > > > > On Wed, Jul 17, 2024 at 06:36:28PM GMT, Jesper Dangaard Brouer wrote:
> > > > > > >
> > > > > > [...]
> > > > > > > >
> > > > > > > > Looking at the production numbers for the time the lock is held for level 0:
> > > > > > > >
> > > > > > > > @locked_time_level[0]:
> > > > > > > > [4M, 8M)    623 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                |
> > > > > > > > [8M, 16M)   860 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> > > > > > > > [16M, 32M)  295 |@@@@@@@@@@@@@@@@@                                    |
> > > > > > > > [32M, 64M)  275 |@@@@@@@@@@@@@@@@                                     |
> > > > > > > >
> > > > > > >
> > > > > > > Is it possible to get the above histogram for other levels as well?
> > > > > >
> > > > > > Data from other levels available in [1]:
> > > > > > [1] https://lore.kernel.org/all/8c123882-a5c5-409a-938b-cb5aec9b9ab5@xxxxxxxxxx/
> > > > > >
> > > > > > IMHO the data shows we will get most out of skipping level-0 root-cgroup
> > > > > > flushes.
> > > > > >
> > > > >
> > > > > Thanks a lot for the data. Are all or most of these locked_time_level[0]
> > > > > from kswapds? This just motivates me to strongly push the ratelimited
> > > > > flush patch of mine (which would be orthogonal to your patch series).
> > > > >
> > > > Jesper and I were discussing a better ratelimiting approach, whether
> > > > it's measuring the time since the last flush, or only skipping if we
> > > > have a lot of flushes in a specific time frame (using __ratelimit()).
> > > > I believe this would be better than the current memcg ratelimiting
> > > > approach, and we can remove the latter.
> > > >
> > > > WDYT?
> > >
> > > The last statement gives me the impression that you are trying to fix
> > > something that is not broken. The current ratelimiting users are ok; the
> > > issue is with the sync flushers. Or maybe you are suggesting that the new
> > > ratelimiting will be used for all sync flushers and current ratelimiting
> > > users, and that the new ratelimiting will make a good tradeoff between
> > > accuracy and potential flush stalls?
> >
> > The latter. Basically the idea is to have more informed and generic
> > ratelimiting logic in the core rstat flushing code (e.g. using
> > __ratelimit()), which would apply to ~all flushers*. Then, we ideally
> > wouldn't need mem_cgroup_flush_stats_ratelimited() at all.
> >
>
> I wonder if we really need a universal ratelimit. As you noted below,
> there are cases where we want exact stats, and there are cases where
> accurate stats are not needed but which are very performance sensitive.
> Aiming for a solution that ignores such differences might be a futile
> effort.
>

BTW I am not against it. If we can achieve this with minimal regression
and maintenance burden then it would be preferable.