Re: [PATCH bpf-next v2 2/6] mm, bpf: Introduce free_pages_nolock()

On 2024-12-10 14:49:14 [-0800], Alexei Starovoitov wrote:
> On Tue, Dec 10, 2024 at 12:35 AM Sebastian Andrzej Siewior
> <bigeasy@xxxxxxxxxxxxx> wrote:
> >
> > On 2024-12-09 18:39:32 [-0800], Alexei Starovoitov wrote:
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index d511e68903c6..a969a62ec0c3 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -1251,9 +1254,33 @@ static void free_one_page(struct zone *zone, struct page *page,
> > >                         unsigned long pfn, unsigned int order,
> > >                         fpi_t fpi_flags)
> > >  {
> > > +     struct llist_head *llhead;
> > >       unsigned long flags;
> > >
> > > -     spin_lock_irqsave(&zone->lock, flags);
> > > +     if (!spin_trylock_irqsave(&zone->lock, flags)) {
> > > +             if (unlikely(fpi_flags & FPI_TRYLOCK)) {
> > > +                     /* Remember the order */
> > > +                     page->order = order;
> > > +                     /* Add the page to the free list */
> > > +                     llist_add(&page->pcp_llist, &zone->trylock_free_pages);
> > > +                     return;
> > > +             }
> > > +             spin_lock_irqsave(&zone->lock, flags);
> > > +     }
> > > +
> > > +     /* The lock succeeded. Process deferred pages. */
> > > +     llhead = &zone->trylock_free_pages;
> > > +     if (unlikely(!llist_empty(llhead))) {
> > > +             struct llist_node *llnode;
> > > +             struct page *p, *tmp;
> > > +
> > > +             llnode = llist_del_all(llhead);
> >
> > Do you really need to turn the list around?
> 
> I didn't think LIFO vs FIFO would make a difference.
> Why spend time rotating it?

I'm sorry. I thought I read llist_reverse_order() in there, but it is not
there. So it is all good.
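(For the record: llist_del_all() hands the entries back newest-first, so
the loop frees them in LIFO order. If FIFO ever mattered, the list would
have to be turned around explicitly, roughly:

	llnode = llist_del_all(llhead);
	/* llist_reverse_order() restores insertion (FIFO) order */
	llnode = llist_reverse_order(llnode);
	llist_for_each_entry_safe(p, tmp, llnode, pcp_llist)
		/* ... */;

but, as you say, there is no reason to spend cycles on that here.)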

> > > +             llist_for_each_entry_safe(p, tmp, llnode, pcp_llist) {
> > > +                     unsigned int p_order = p->order;
> > > +                     split_large_buddy(zone, p, page_to_pfn(p), p_order, fpi_flags);
> > > +                     __count_vm_events(PGFREE, 1 << p_order);
> > > +             }
> >
> > We had something like that (returning memory in IRQ / irq-off context) in
> > the RT tree and we got rid of it before posting the needed bits to mm.
> >
> > If we really intend to do something like this, could we please process
> > this list in an explicitly locked section? I mean not in a try-lock
> > fashion, which might have originated in an IRQ-off region on PREEMPT_RT,
> > but in an explicitly locked section which would remain preemptible. This
> > would also avoid the locking problem down the road when
> > shuffle_pick_tail() invokes get_random_u64(), which in turn acquires a
> > spinlock_t.
> 
> I see. So the concern is that even though spin_lock_irqsave(&zone->lock)
> is sleepable on RT, the bpf prog might have been called in a context
> where preemption is disabled, and doing split_large_buddy() for many
> pages might take too much time?

Yes.

> How about kicking irq_work then? The callback is in kthread in RT.
> We can irq_work_queue() right after llist_add().
> 
> Or we can process only N pages at a time in this loop and
> llist_add() leftover back into zone->trylock_free_pages.
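
For reference, the irq_work variant would look roughly like this (untested
sketch; the per-zone irq_work member and the callback name are made up, the
drain is the loop from your patch, and the irq_work is assumed to be
initialised with init_irq_work() during zone setup):

	static void trylock_free_pages_work(struct irq_work *work)
	{
		struct zone *zone = container_of(work, struct zone,
						 trylock_free_work);
		unsigned long flags;

		/*
		 * On PREEMPT_RT this callback runs in the irq_work kthread,
		 * so zone->lock (a sleeping lock there) can be taken as
		 * usual and the section stays preemptible.
		 */
		spin_lock_irqsave(&zone->lock, flags);
		/* ... drain zone->trylock_free_pages as in free_one_page() ... */
		spin_unlock_irqrestore(&zone->lock, flags);
	}

	/* and in the FPI_TRYLOCK slow path: */
	page->order = order;
	llist_add(&page->pcp_llist, &zone->trylock_free_pages);
	irq_work_queue(&zone->trylock_free_work);
	return;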

That said, it could be simpler to not process the trylock_free_pages list
from the trylock path at all and only drain it in the regular lock case,
which is preemptible.
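Something along these lines (untested, just to illustrate the idea):

	if (!spin_trylock_irqsave(&zone->lock, flags)) {
		if (unlikely(fpi_flags & FPI_TRYLOCK)) {
			page->order = order;
			llist_add(&page->pcp_llist, &zone->trylock_free_pages);
			return;
		}
		spin_lock_irqsave(&zone->lock, flags);
	}

	/*
	 * Drain the deferred pages only when we were not called with
	 * FPI_TRYLOCK, i.e. from a caller that is preemptible on
	 * PREEMPT_RT and can afford the extra work under zone->lock.
	 */
	if (!(fpi_flags & FPI_TRYLOCK) &&
	    unlikely(!llist_empty(&zone->trylock_free_pages))) {
		/* ... drain loop from your patch ... */
	}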

Sebastian




