On Mon, Jun 26, 2023 at 11:14 AM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>
> Opportunistically attempt to allocate high-order folios in highmem,
> optionally zeroed. Retry with lower orders all the way to order-0, until
> success. Note that order-1 allocations are skipped since a large folio
> must be at least order-2 to work with the THP machinery. The user must
> check what they got with folio_order().
>
> This will be used to opportunistically allocate large folios for
> anonymous memory with a sensible fallback under memory pressure.
>
> For attempts to allocate non-0 orders, we set __GFP_NORETRY to prevent
> high latency due to reclaim, instead preferring to just try for a lower
> order. The same approach is used by the readahead code when allocating
> large folios.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
> ---
>  mm/memory.c | 33 +++++++++++++++++++++++++++++++++
>  1 file changed, 33 insertions(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 367bbbb29d91..53896d46e686 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3001,6 +3001,39 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
>         return 0;
>  }
>
> +static inline struct folio *vma_alloc_movable_folio(struct vm_area_struct *vma,
> +               unsigned long vaddr, int order, bool zeroed)
> +{
> +       gfp_t gfp = order > 0 ? __GFP_NORETRY | __GFP_NOWARN : 0;
> +
> +       if (zeroed)
> +               return vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order);
> +       else
> +               return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp, order, vma,
> +                               vaddr, false);
> +}
> +
> +/*
> + * Opportunistically attempt to allocate high-order folios, retrying with lower
> + * orders all the way to order-0, until success. order-1 allocations are skipped
> + * since a folio must be at least order-2 to work with the THP machinery. The
> + * user must check what they got with folio_order(). vaddr can be any virtual
> + * address that will be mapped by the allocated folio.
> + */
> +static struct folio *try_vma_alloc_movable_folio(struct vm_area_struct *vma,
> +               unsigned long vaddr, int order, bool zeroed)
> +{
> +       struct folio *folio;
> +
> +       for (; order > 1; order--) {
> +               folio = vma_alloc_movable_folio(vma, vaddr, order, zeroed);
> +               if (folio)
> +                       return folio;
> +       }
> +
> +       return vma_alloc_movable_folio(vma, vaddr, 0, zeroed);
> +}

I'd drop this patch. Instead, in do_anonymous_page():

        if (IS_ENABLED(CONFIG_ARCH_WANTS_PTE_ORDER))
                folio = vma_alloc_zeroed_movable_folio(vma, addr,
                                CONFIG_ARCH_WANTS_PTE_ORDER);

        if (!folio)
                folio = vma_alloc_zeroed_movable_folio(vma, addr, 0);
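
For illustration, here is a fuller, untested sketch of that suggestion. It
assumes vma_alloc_zeroed_movable_folio() keeps the (vma, vaddr, gfp, order)
signature from the patch above, reuses the patch's __GFP_NORETRY |
__GFP_NOWARN policy for the non-0-order attempt, and wraps the logic in a
hypothetical alloc_anon_folio() helper that is not part of this series:

        static struct folio *alloc_anon_folio(struct vm_fault *vmf)
        {
                struct vm_area_struct *vma = vmf->vma;
                struct folio *folio = NULL;

        #ifdef CONFIG_ARCH_WANTS_PTE_ORDER
                /*
                 * Try the arch-preferred order first. Skip reclaim and
                 * allocation-failure warnings, since the order-0 path below
                 * is the guaranteed fallback.
                 */
                folio = vma_alloc_zeroed_movable_folio(vma, vmf->address,
                                __GFP_NORETRY | __GFP_NOWARN,
                                CONFIG_ARCH_WANTS_PTE_ORDER);
        #endif

                /* Fall back to a single zeroed page under memory pressure. */
                if (!folio)
                        folio = vma_alloc_zeroed_movable_folio(vma,
                                        vmf->address, 0, 0);

                return folio;
        }

The sketch uses #ifdef rather than IS_ENABLED() on the assumption that
CONFIG_ARCH_WANTS_PTE_ORDER carries an integer order rather than a bool.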