On Tue, Oct 3, 2023 at 2:24 AM Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>
> v3.8 commit b24f53a0bea3 ("mm: mempolicy: Add MPOL_MF_LAZY") introduced
> MPOL_MF_LAZY, and included it in the MPOL_MF_VALID flags; but a720094ded8
> ("mm: mempolicy: Hide MPOL_NOOP and MPOL_MF_LAZY from userspace for now")
> immediately removed it from MPOL_MF_VALID flags, pending further review.
> "This will need to be revisited", but it has not been reinstated.
>
> The present state is confusing: there is dead code in mm/mempolicy.c to
> handle MPOL_MF_LAZY cases which can never occur.  Remove that: it can be
> resurrected later if necessary.  But keep the definition of MPOL_MF_LAZY,
> which must remain in the UAPI, even though it always fails with EINVAL.
>
> https://lore.kernel.org/linux-mm/1553041659-46787-1-git-send-email-yang.shi@xxxxxxxxxxxxxxxxx/
> links to a previous request to remove MPOL_MF_LAZY.

Thanks for mentioning my work. I'm glad to see the dead code go away.

Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>

>
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  include/uapi/linux/mempolicy.h |  2 +-
>  mm/mempolicy.c                 | 18 ------------------
>  2 files changed, 1 insertion(+), 19 deletions(-)
>
> diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
> index 046d0ccba4cd..a8963f7ef4c2 100644
> --- a/include/uapi/linux/mempolicy.h
> +++ b/include/uapi/linux/mempolicy.h
> @@ -48,7 +48,7 @@ enum {
>  #define MPOL_MF_MOVE	 (1<<1)	/* Move pages owned by this process to conform
>  				   to policy */
>  #define MPOL_MF_MOVE_ALL (1<<2)	/* Move every page to conform to policy */
> -#define MPOL_MF_LAZY	 (1<<3)	/* Modifies '_MOVE:  lazy migrate on fault */
> +#define MPOL_MF_LAZY	 (1<<3)	/* UNSUPPORTED FLAG: Lazy migrate on fault */
>  #define MPOL_MF_INTERNAL (1<<4)	/* Internal flags start here */
>
>  #define MPOL_MF_VALID	(MPOL_MF_STRICT   |	\
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 5d99fd5cd60b..f3224a8b0f6c 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -636,12 +636,6 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>
>  	return nr_updated;
>  }
> -#else
> -static unsigned long change_prot_numa(struct vm_area_struct *vma,
> -			unsigned long addr, unsigned long end)
> -{
> -	return 0;
> -}
>  #endif /* CONFIG_NUMA_BALANCING */
>
>  static int queue_pages_test_walk(unsigned long start, unsigned long end,
> @@ -680,14 +674,6 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
>  	if (endvma > end)
>  		endvma = end;
>
> -	if (flags & MPOL_MF_LAZY) {
> -		/* Similar to task_numa_work, skip inaccessible VMAs */
> -		if (!is_vm_hugetlb_page(vma) && vma_is_accessible(vma) &&
> -			!(vma->vm_flags & VM_MIXEDMAP))
> -			change_prot_numa(vma, start, endvma);
> -		return 1;
> -	}
> -
>  	/*
>  	 * Check page nodes, and queue pages to move, in the current vma.
>  	 * But if no moving, and no strict checking, the scan can be skipped.
> @@ -1274,9 +1260,6 @@ static long do_mbind(unsigned long start, unsigned long len,
>  	if (IS_ERR(new))
>  		return PTR_ERR(new);
>
> -	if (flags & MPOL_MF_LAZY)
> -		new->flags |= MPOL_F_MOF;
> -
>  	/*
>  	 * If we are using the default policy then operation
>  	 * on discontinuous address spaces is okay after all
> @@ -1321,7 +1304,6 @@ static long do_mbind(unsigned long start, unsigned long len,
>
>  	if (!err) {
>  		if (!list_empty(&pagelist)) {
> -			WARN_ON_ONCE(flags & MPOL_MF_LAZY);
>  			nr_failed |= migrate_pages(&pagelist, new_folio, NULL,
>  				start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
>  		}
> --
> 2.35.3
>
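
For reference, a minimal userspace sketch of the behaviour the commit message
describes: because MPOL_MF_LAZY is not part of MPOL_MF_VALID, mbind() rejects
any flags value containing it with EINVAL.  This is illustrative only and not
part of the patch; the mbind_raw() wrapper and the raw-syscall approach
(avoiding a libnuma dependency) are assumptions of the sketch, and exact
output depends on the system.

/*
 * Illustrative only: MPOL_MF_LAZY is still defined in the UAPI header,
 * but any mbind() call that passes it is rejected with EINVAL.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/mempolicy.h>	/* MPOL_BIND, MPOL_MF_MOVE, MPOL_MF_LAZY */

/* Thin wrapper around the raw mbind(2) syscall (no libnuma needed) */
static long mbind_raw(void *addr, unsigned long len, int mode,
		      const unsigned long *nodemask, unsigned long maxnode,
		      unsigned int flags)
{
	return syscall(SYS_mbind, addr, len, mode, nodemask, maxnode, flags);
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned long nodemask = 1UL;	/* node 0 only */
	void *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* MPOL_MF_MOVE alone is within MPOL_MF_VALID: expected to succeed */
	if (mbind_raw(buf, page, MPOL_BIND, &nodemask, 64, MPOL_MF_MOVE))
		perror("mbind(MPOL_MF_MOVE)");

	/* Adding MPOL_MF_LAZY makes the flags invalid: expect EINVAL */
	if (mbind_raw(buf, page, MPOL_BIND, &nodemask, 64,
		      MPOL_MF_MOVE | MPOL_MF_LAZY))
		printf("mbind(... | MPOL_MF_LAZY): %s\n", strerror(errno));

	munmap(buf, page);
	return 0;
}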