Re: [PATCH bpf-next v3 4/6] memcg: Use trylock to access memcg stock_lock.

On Thu 19-12-24 16:39:43, Alexei Starovoitov wrote:
> On Wed, Dec 18, 2024 at 11:52 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Thu 19-12-24 08:27:06, Michal Hocko wrote:
> > > On Thu 19-12-24 08:08:44, Michal Hocko wrote:
> > > > All that being said, the message I wanted to get through is that atomic
> > > > (NOWAIT) charges could be truly reentrant if the stock local lock uses
> > > > trylock. We do not need a dedicated gfp flag for that now.
> > >
> > > And I want to add: not only can we achieve that, I also think it is
> > > desirable, because for !RT this is no functional change, and for RT
> > > it makes more sense to simply do a deterministic (albeit more costly)
> > > page_counter update than to spin over a lock to use the batch (or to
> > > learn that the batch cannot be used).
> >
> > So effectively this on top of yours
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index f168d223375f..29a831f6109c 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1768,7 +1768,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
> >                 return ret;
> >
> >         if (!local_trylock_irqsave(&memcg_stock.stock_lock, flags)) {
> > -               if (gfp_mask & __GFP_TRYLOCK)
> > +               if (!gfpflags_allow_blocking(gfp_mask))
> >                         return ret;
> >                 local_lock_irqsave(&memcg_stock.stock_lock, flags);
> 
> I don't quite understand such a strong desire to avoid the new GFP flag,
> especially when it's in mm/internal.h. There are lots of bits left.
> It's not like the PF_* flags, which are limited. But fine,
> let's try to avoid GFP_TRYLOCK_BIT.

Because historically this has proven to be a bad idea that usually
backfires. As I've said in another email, I care much less now that
this is mostly internal (one could still abuse it but would need to try
hard). But still, if we _can_ avoid it and the code stays generally
_sensible_, then let's not introduce a new flag.

[...]
> How about the following:
> 
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index ff9060af6295..f06131d5234f 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -39,6 +39,17 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
>         return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
>  }
> 
> +static inline bool gfpflags_allow_spinning(const gfp_t gfp_flags)
> +{
> +       /*
> +        * !__GFP_DIRECT_RECLAIM -> direct reclaim is not allowed.
> +        * !__GFP_KSWAPD_RECLAIM -> it's not safe to wake up kswapd.
> +        * All GFP_* flags including GFP_NOWAIT use one or both flags.
> +        * try_alloc_pages() is the only API that doesn't specify either flag.

I wouldn't be surprised if we had other allocations like that. git grep
is generally not very helpful as many/most allocations use a gfp argument
of some sort. I would slightly reword this to be more explicit:
	  /*
	   * This is stronger than GFP_NOWAIT or GFP_ATOMIC because
	   * those are guaranteed to never block on a sleeping lock.
	   * Here we are enforcing that the allocation doesn't ever spin
	   * on any locks (i.e. only trylocks). There is no high-level
	   * GFP_$FOO flag for this; use try_alloc_pages() as the
	   * regular page allocator doesn't fully support this
	   * allocation mode.
> +        */
> +       return !!(gfp_flags & __GFP_RECLAIM);
> +}
> +
>  #ifdef CONFIG_HIGHMEM
>  #define OPT_ZONE_HIGHMEM ZONE_HIGHMEM
>  #else
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index f168d223375f..545d345c22de 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1768,7 +1768,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
>                 return ret;
> 
>         if (!local_trylock_irqsave(&memcg_stock.stock_lock, flags)) {
> -               if (gfp_mask & __GFP_TRYLOCK)
> +               if (!gfpflags_allow_spinning(gfp_mask))
>                         return ret;
>                 local_lock_irqsave(&memcg_stock.stock_lock, flags);
>         }
> 
> If that's acceptable, then such an approach will work for
> my slub.c reentrance changes too.

It certainly is acceptable to me. Do not forget to add another hunk to
avoid charging the full batch in this case.
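Something along these lines, I would imagine (an untested sketch on top
of the above; "batch" is the existing local variable in
try_charge_memcg() that normally holds max(MEMCG_CHARGE_BATCH, nr_pages),
and consume_stock() is assumed to have grown the gfp_mask argument as in
your patch):

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ ... @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	if (consume_stock(memcg, nr_pages, gfp_mask))
 		return 0;
 
+	if (!gfpflags_allow_spinning(gfp_mask))
+		/*
+		 * Cannot refill the per-cpu stock without (try)locking
+		 * it again, so charge only what was asked for instead
+		 * of the full batch.
+		 */
+		batch = nr_pages;
+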

Thanks!
-- 
Michal Hocko
SUSE Labs



