On 7/9/24 7:12 PM, Christoph Lameter (Ampere) wrote:
> On Thu, 28 Dec 2023, Matthew Wilcox (Oracle) wrote:
>
>> For no apparent reason, we were open-coding alloc_pages_node() in
>> this function.
>
> The reason is that alloc_pages() follows memory policies, cgroup
> restrictions, etc., and alloc_pages_node() does not.
>
> With this patch, cgroup restrictions, memory policies, etc. no longer
> work in the slab allocator.

The only difference is the memory policy from get_task_policy(); the rest
is the same, right?

> Please revert this patch.

But this only affects new slab page allocation, while getting objects from
existing slabs isn't subject to memory policies, so now it's at least
consistent? Do you have a use case where it matters?

>> diff --git a/mm/slub.c b/mm/slub.c
>> index 35aa706dc318..342545775df6 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2187,11 +2187,7 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>>  	struct slab *slab;
>>  	unsigned int order = oo_order(oo);
>>
>> -	if (node == NUMA_NO_NODE)
>> -		folio = (struct folio *)alloc_pages(flags, order);
>> -	else
>> -		folio = (struct folio *)__alloc_pages_node(node, flags, order);
>> -
>> +	folio = (struct folio *)alloc_pages_node(node, flags, order);
>>  	if (!folio)
>>  		return NULL;
>>
>> --
>> 2.43.0