Re: [PATCH bpf-next v5 1/7] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation

On Wed, Jan 15, 2025 at 3:47 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
>
> On Wed, Jan 15, 2025 at 03:00:08PM -0800, Alexei Starovoitov wrote:
> > On Wed, Jan 15, 2025 at 3:19 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
> > >
> > >
>
> Sorry, I missed your response here.
>
> > > What about set_page_owner() from post_alloc_hook() and its stackdepot
> > > saving? I guess it's not an issue until try_alloc_pages() gets used later, so
> > > just a mental note that it has to be resolved before then. Or is it actually safe?
> >
> > set_page_owner() should be fine.
> > save_stack() has in_page_owner recursion protection mechanism.
> >
> > stack_depot_save_flags() may be problematic if there is another
> > path to it.
> > I guess I can do:
> >
> > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > index 245d5b416699..61772bc4b811 100644
> > --- a/lib/stackdepot.c
> > +++ b/lib/stackdepot.c
> > @@ -630,7 +630,7 @@ depot_stack_handle_t
> > stack_depot_save_flags(unsigned long *entries,
> >                         prealloc = page_address(page);
> >         }
>
> There is an alloc_pages(gfp_nested_mask(alloc_flags)...) just a couple of
> lines above. How about setting can_alloc to false along with the below
> change for this case?

argh. Just noticed this code path:
free_pages_prepare
  __reset_page_owner
    save_stack(GFP_NOWAIT | __GFP_NOWARN);
       stack_depot_save(entries, nr_entries, flags);
          stack_depot_save_flags(entries, nr_entries, alloc_flags,
                                      STACK_DEPOT_FLAG_CAN_ALLOC);

bool can_alloc = depot_flags & STACK_DEPOT_FLAG_CAN_ALLOC;

if (unlikely(can_alloc && !READ_ONCE(new_pool))) {
         page = alloc_pages(gfp_nested_mask(alloc_flags),
                                 DEPOT_POOL_ORDER);

> Or we can set ALLOC_TRYLOCK in core alloc_pages()
> for !gfpflags_allow_spinning().

so the gfpflags_allow_spinning() approach doesn't work out of the box,
since the free_pages() path doesn't have gfp flags.
Passing FPI flags everywhere is too much churn.

I guess using the same gfp flags as try_alloc_pages()
in __reset_page_owner() should work.
That will make gfpflags_allow_spinning()==false in stack_depot.
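
In diff form, roughly (untested sketch; the exact mask is whatever
try_alloc_pages() ends up using -- the point is only that it carries
no __GFP_RECLAIM bits):

```diff
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ void __reset_page_owner(struct page *page, unsigned short order)
-	handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
+	/* no reclaim bits -> gfpflags_allow_spinning() == false */
+	handle = save_stack(__GFP_NOWARN);
```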

In an earlier email I convinced myself that
the current->in_page_owner recursion protection in save_stack()
is enough, but it looks like it's not.
We could hit a tracepoint somewhere inside alloc_pages():
alloc_pages -> tracepoint -> bpf
-> free_pages_nolock -> __free_unref_page -> free_pages_prepare
-> save_stack(GFP_NOWAIT | __GFP_NOWARN);

and this save_stack() will be called for the first time,
so it's not recursing into itself. Then:

-> stack_depot_save_flags -> alloc_pages(GFP_NOWAIT) -> boom.

So it looks like __reset_page_owner() has to use gfp flags
for which gfpflags_allow_spinning() is false,
and stack_depot_save_flags() has to do:

if (!gfpflags_allow_spinning(alloc_flags)) {
        can_alloc = false;
        /* and take pool_lock via raw_spin_trylock_irqsave(&pool_lock, ...) */
}
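
In diff form, roughly (untested sketch; assumes gfpflags_allow_spinning()
from patch 1 is visible to lib/stackdepot.c):

```diff
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries, ...)
 	bool can_alloc = depot_flags & STACK_DEPOT_FLAG_CAN_ALLOC;
+
+	/* Callers that cannot spin (try_alloc_pages()-style gfp mask)
+	 * must neither allocate a new pool nor spin on pool_lock.
+	 */
+	if (!gfpflags_allow_spinning(alloc_flags))
+		can_alloc = false;
```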

Will code this up. Unless there are better ideas.




