On 2022/4/15 4:53, Yu Zhao wrote:
On Thu, Apr 14, 2022 at 07:47:54PM +0800, Chen Wandun wrote:
On 2022/4/7 11:15, Yu Zhao wrote:
+static void inc_min_seq(struct lruvec *lruvec)
+{
+	int type;
+	struct lru_gen_struct *lrugen = &lruvec->lrugen;
+
+	VM_BUG_ON(!seq_is_valid(lruvec));
+
+	for (type = 0; type < ANON_AND_FILE; type++) {
+		if (get_nr_gens(lruvec, type) != MAX_NR_GENS)
+			continue;
I'm confused about the relation between aging and the LRU list operations.
In the function inc_max_seq, both min_seq and max_seq will increase; can the
lrugen->lists[] indexed by lru_gen_from_seq(max_seq + 1)
be non-empty?
Yes.
for example,
before inc_max_seq:
min_seq == 0, lrugen->lists[0][type][zone]
max_seq == 3, lrugen->lists[3][type][zone]
after inc_max_seq:
min_seq == 1, lrugen->lists[1][type][zone]
max_seq == 4, lrugen->lists[0][type][zone]
If lrugen->lists[0][type][zone] is not empty before inc_max_seq, it is
the most inactive list; however, lrugen->lists[0][type][zone] will become
the most active list after inc_max_seq.
Correct.
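(A minimal standalone sketch of why that happens, assuming lru_gen_from_seq()
reduces to seq % MAX_NR_GENS with MAX_NR_GENS == 4, as in the patch; the seq
values mirror the example above:)

#include <stdio.h>

#define MAX_NR_GENS 4

/* toy model of the patch's lru_gen_from_seq(): seq -> ring slot */
static unsigned long lru_gen_from_seq(unsigned long seq)
{
	return seq % MAX_NR_GENS;
}

int main(void)
{
	/* before inc_max_seq: min_seq == 0, max_seq == 3 */
	printf("before: min slot %lu, max slot %lu\n",
	       lru_gen_from_seq(0), lru_gen_from_seq(3));	/* 0 and 3 */

	/* after inc_max_seq: min_seq == 1, max_seq == 4 */
	printf("after:  min slot %lu, max slot %lu\n",
	       lru_gen_from_seq(1), lru_gen_from_seq(4));	/* 1 and 0 */

	return 0;
}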
So, in this place,
if (get_nr_gens(lruvec, type) != MAX_NR_GENS)
continue;
should be changed to
if (get_nr_gens(lruvec, type) == MAX_NR_GENS)
continue;
No, because max/min_seq will overlap if we do so.
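(To make the overlap concrete, a hedged sketch assuming get_nr_gens() is
max_seq - min_seq + 1, as in the patch, plus the same toy lru_gen_from_seq()
as above:)

#include <assert.h>

#define MAX_NR_GENS 4
#define lru_gen_from_seq(seq) ((seq) % MAX_NR_GENS)

int main(void)
{
	unsigned long min_seq = 0, max_seq = 3;

	/* all MAX_NR_GENS generations are in use */
	assert(max_seq - min_seq + 1 == MAX_NR_GENS);

	/*
	 * With the inverted check, inc_min_seq() would skip exactly
	 * this case and leave min_seq at 0; inc_max_seq() then bumps
	 * max_seq to 4.
	 */
	max_seq++;

	/* the youngest generation now shares a slot with the oldest */
	assert(lru_gen_from_seq(max_seq) == lru_gen_from_seq(min_seq));
	return 0;
}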
lrugen->lists[max_seq+1] can only be non-empty for anon LRU, for a
couple of reasons:
1. We can't swap at all.
2. Swapping is constrained, e.g., swapfile is full.
Both cases are similar to a producer (the aging) overrunning a
consumer (the eviction). We used to handle them, but I simplified the
code because I don't feel they are worth handling [1].
Can lrugen->lists[max_seq+1] also be non-empty for the file LRU,
such as in the don't-reclaim-mapped-file-pages case (isolation will fail)?
If so, after aging, eviction will reclaim memory starting from
lrugen->lists[min_seq+1], but some of the oldest file pages will still
remain in lrugen->lists[max_seq+1].
sort_folio can help put misplaced pages on the right
LRU list, but in this case it doesn't help, because sort_folio
only sorts lrugen->lists[min_seq+1].
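(To restate the concern with the same toy slot mapping as above; this is an
illustration of the scenario being asked about, not the actual
scan_folios()/sort_folio() code:)

#include <stdio.h>

#define MAX_NR_GENS 4
#define lru_gen_from_seq(seq) ((seq) % MAX_NR_GENS)

int main(void)
{
	/* after the aging step in the example above */
	unsigned long min_seq = 1, max_seq = 4;

	/* eviction scans only the oldest generation's slot */
	printf("eviction scans slot %lu\n", lru_gen_from_seq(min_seq));

	/*
	 * A file page that could not be isolated stayed in the old
	 * lists[0]; that slot now belongs to max_seq, so the page
	 * looks youngest and won't be revisited until min_seq
	 * advances MAX_NR_GENS - 1 more times.
	 */
	printf("stranded page sits in slot %lu\n", lru_gen_from_seq(max_seq));
	return 0;
}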
Thanks
Wandun
[1] https://lore.kernel.org/r/CAOUHufbDfwgm8PgCGkhCjbhMbm=fekfjgRR56NL-j+5iUGfVuw@xxxxxxxxxxxxxx/