Hello,

Thank you for your reply.

On Wed, Nov 8, 2023 at 3:33 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Mon, Aug 07, 2023 at 07:01:16PM +0900, Hyeongtak Ji wrote:
> > shrink_lruvec() currently ignores previously reclaimed pages in
> > scan_control->nr_reclaimed. This can lead shrink_lruvec() to reclaim
> > more pages than expected.
> >
> > This patch fixes shrink_lruvec() to take into account the previously
> > reclaimed pages.
>
> Do you run into real world issues from this? The code has been like
> this for at least a decade.

I believed this was merely a misinitialization that resulted in
shrink_lruvec() reclaiming more pages than intended. However, I
acknowledge that no real-world issues have arisen from this behavior.

> It's an intentional choice to ensure fairness across all visited
> cgroups. sc->nr_to_reclaim is 32 pages or less - it's only to guard
> against extreme overreclaim. But we want to make sure we reclaim a bit
> from all cgroups, rather than always hit the first one and then bail.

sc->nr_to_reclaim can be larger than 32 (e.g., about 5K) in the case
I was worried about: kswapd_shrink_node() in mm/vmscan.c sets the
value, and it is passed down to shrink_lruvec().