On Wed, 21 Mar 2018, Andrey Ryabinin wrote:

> >>> It would probably be best to limit the
> >>> nr_pages to the amount that needs to be reclaimed, though, rather than
> >>> over reclaiming.
> >>
> >> How do you achieve that? The charging path is not synchronized with the
> >> shrinking one at all.
> >>
> >
> > The point is to get a better guess at how many pages, up to
> > SWAP_CLUSTER_MAX, that need to be reclaimed instead of 1.
> >
> >>> If you wanted to be invasive, you could change page_counter_limit() to
> >>> return the count - limit, fix up the callers that look for -EBUSY, and
> >>> then use max(val, SWAP_CLUSTER_MAX) as your nr_pages.
> >>
> >> I am not sure I understand
> >>
> >
> > Have page_counter_limit() return the number of pages over limit, i.e.
> > count - limit, since it compares the two anyway. Fix up existing callers
> > and then clamp that value to SWAP_CLUSTER_MAX in
> > mem_cgroup_resize_limit(). It's a more accurate guess than either 1 or
> > 1024.
> >
>
> JFYI, it's never 1, it's always SWAP_CLUSTER_MAX.
> See try_to_free_mem_cgroup_pages():
> ....
> 	struct scan_control sc = {
> 		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
>

Is SWAP_CLUSTER_MAX the best answer if I'm lowering the limit by 1GB?
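
For concreteness, the shape being suggested is roughly the following (a
rough sketch only, not a tested patch: the excess is computed here with
page_counter_read() purely for illustration rather than by changing
page_counter_limit()'s return value, and the surrounding resize-loop
plumbing and locking are omitted):

	/*
	 * Sketch: inside the mem_cgroup_resize_limit() retry loop, after
	 * page_counter_limit() fails because usage is still above the new
	 * limit, size the reclaim request from the actual excess instead
	 * of a fixed SWAP_CLUSTER_MAX batch.  "new_limit" stands for the
	 * requested limit in pages.
	 */
	unsigned long excess, nr_pages;

	excess = page_counter_read(&memcg->memory) - new_limit;
	nr_pages = max_t(unsigned long, excess, SWAP_CLUSTER_MAX);

	if (!try_to_free_mem_cgroup_pages(memcg, nr_pages,
					  GFP_KERNEL, true))
		ret = -EBUSY;

With something like that, a 1GB reduction asks reclaim for the whole
excess in one call rather than 32 pages per iteration.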