On Wed, Jan 15, 2025 at 03:00:08PM -0800, Alexei Starovoitov wrote:
> On Wed, Jan 15, 2025 at 3:19 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
> >
> > Sorry missed your response here.
> > What about set_page_owner() from post_alloc_hook() and its stackdepot
> > saving. I guess not an issue until try_alloc_pages() gets used later, so
> > just a mental note that it has to be resolved before. Or is it actually safe?
>
> set_page_owner() should be fine.
> save_stack() has in_page_owner recursion protection mechanism.
>
> stack_depot_save_flags() may be problematic if there is another
> path to it.
> I guess I can do:
>
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 245d5b416699..61772bc4b811 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -630,7 +630,7 @@ depot_stack_handle_t
> stack_depot_save_flags(unsigned long *entries,
>                 prealloc = page_address(page);
>         }

There is alloc_pages(gfp_nested_mask(alloc_flags)...) just a couple of
lines above. How about setting can_alloc to false along with the change
below for this case? Or we could set ALLOC_TRYLOCK in the core
alloc_pages() for !gfpflags_allow_spinning().

> -       if (in_nmi()) {
> +       if (in_nmi() || !gfpflags_allow_spinning(alloc_flags)) {
>                 /* We can never allocate in NMI context. */
>                 WARN_ON_ONCE(can_alloc);
>                 /* Best effort; bail if we fail to take the lock. */
>                 if (!raw_spin_trylock_irqsave(&pool_lock, flags))
>                         goto exit;
>
> as part of this patch,
> but not convinced whether it's necessary.
> stack_depot* is effectively noinstr.
> kprobe-bpf cannot be placed in there and afaict
> it doesn't call any tracepoints.
> So in_nmi() is the only way to reenter and that's already covered.

Are the locks in stack_depot* an issue only for bpf programs triggered
inside stack_depot*?