"Huang, Ying" <ying.huang@xxxxxxxxx> writes:

> Alistair Popple <apopple@xxxxxxxxxx> writes:
>
>> "Huang, Ying" <ying.huang@xxxxxxxxx> writes:
>>
>>> [snip]
>>>
>>>>
>>>>>>> @@ -1462,30 +1549,28 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>>>>  		nr_retry_pages = 0;
>>>>>>>
>>>>>>>  		list_for_each_entry_safe(folio, folio2, from, lru) {
>>>>>>> +			if (folio_test_hugetlb(folio)) {
>>>>>>
>>>>>> How do we hit this case? Shouldn't migrate_hugetlbs() have already moved
>>>>>> any hugetlb folios off the from list?
>>>>>
>>>>> Retried hugetlb folios will be kept in from list.
>>>>
>>>> Couldn't migrate_hugetlbs() remove the failing retried pages from the
>>>> list on the final pass? That seems cleaner to me.
>>>
>>> To do that, we need to go through the folio list again to remove all
>>> hugetlb pages. It could be time-consuming in some cases. So I think
>>> that it's better to keep this.
>>
>> Why? Couldn't we test pass == 9 and remove it from the list if it fails
>> the final retry in migrate_hugetlbs()? In any case if it's on the list
>> due to failed retries we have already passed over it 10 times, so the
>> extra loop hardly seems like a problem.
>
> Yes. That's possible. But "test pass == 9" looks more tricky than the
> current code.
>
> Feel free to change the code as you suggested on top of this series. If
> no others object, I'm OK with that. OK?

Sure. Part of my problem when reviewing this series is that every time I
look at migrate_pages(), and in particular at the number of conditionals
that are sufficiently non-obvious to require extensive comments, I can't
help but think it all needs some refactoring before making it any more
complicated. However, perhaps I am alone in that.

Either way, this kind of refactoring has been on my TODO list for a
while - I have a WIP series to converge some of the migrate_device.c
code which I will need to rebase on this anyway, so as you suggest I
could make a lot of my suggested changes on top of this series.
Regards,
Alistair

> Best Regards,
> Huang, Ying
>
>>>
>>>>>>> +				list_move_tail(&folio->lru, &ret_folios);
>>>>>>> +				continue;
>>>>>>> +			}
>>>>>>> +
>>>>>>>  			/*
>>>>>>>  			 * Large folio statistics is based on the source large
>>>>>>>  			 * folio. Capture required information that might get
>>>>>>>  			 * lost during migration.
>>>>>>>  			 */
>>>>>>> -			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
>>>>>>> +			is_large = folio_test_large(folio);
>>>>>>>  			is_thp = is_large && folio_test_pmd_mappable(folio);
>>>>>>>  			nr_pages = folio_nr_pages(folio);
>>>>>>> +
>>>>>>>  			cond_resched();
>>>>>>>
>>>>>>> -			if (folio_test_hugetlb(folio))
>>>>>>> -				rc = unmap_and_move_huge_page(get_new_page,
>>>>>>> -						put_new_page, private,
>>>>>>> -						&folio->page, pass > 2, mode,
>>>>>>> -						reason,
>>>>>>> -						&ret_folios);
>>>>>>> -			else
>>>>>>> -				rc = unmap_and_move(get_new_page, put_new_page,
>>>>>>> -						private, folio, pass > 2, mode,
>>>>>>> -						reason, &ret_folios);
>>>>>>> +			rc = unmap_and_move(get_new_page, put_new_page,
>>>>>>> +					private, folio, pass > 2, mode,
>>>>>>> +					reason, &ret_folios);
>>>>>>>  			/*
>>>>>>>  			 * The rules are:
>>>>>>> -			 *	Success: non hugetlb folio will be freed, hugetlb
>>>>>>> -			 *		 folio will be put back
>>>>>>> +			 *	Success: folio will be freed
>>>>>>>  			 *	-EAGAIN: stay on the from list
>>>>>>>  			 *	-ENOMEM: stay on the from list
>>>>>>>  			 *	-ENOSYS: stay on the from list
>>>>>>> @@ -1512,7 +1597,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>>>>>>  					stats.nr_thp_split += is_thp;
>>>>>>>  				break;
>>>>>>>  			}
>>>>>>> -			/* Hugetlb migration is unsupported */
>>>>>>>  		} else if (!no_split_folio_counting) {
>>>>>>>  			nr_failed++;
>>>>>>>  		}