Re: [PATCH RFC 5/5] non-mm: discourage the usage of __GFP_NOFAIL and encourage GFP_NOFAIL

On Mon, Jul 29, 2024 at 10:03 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 7/29/24 11:56 AM, Barry Song wrote:
> > On Thu, Jul 25, 2024 at 1:47 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> >>
> >> On Thu, Jul 25, 2024 at 2:41 AM Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> >> >
> >> > On Wed, Jul 24, 2024 at 04:39:11PM +0200, Vlastimil Babka wrote:
> >> > > On 7/24/24 3:55 PM, Christoph Hellwig wrote:
> >> > > > On Wed, Jul 24, 2024 at 03:47:46PM +0200, Michal Hocko wrote:
> >> > > >> OK, now it makes more sense ;) I have absolutely no objections to
> >> > > >> prefering scoped NO{FS,IO} interfaces of course. And that would indeed
> >> > > >> eliminate a need for defining GFP_NO{FS,IO}_NOFAIL alternatives.
> >> > > >
> >> > > > Yes.  My proposal would be:
> >> > > >
> >> > > > GFP_NOFAIL without any modifiers is the only valid nofail API.
> >> > >
> >> > > Where GFP_NOFAIL is GFP_KERNEL | __GFP_NOFAIL (and not the more limited one
> >> > > as defined in patch 4/5).
> >> >
> >> > Yes.
> >> >
> >> > > > File systems / drivers can combine it with the scoped nofs/noio if
> >> > > > needed.
> >> > >
> >> > > Sounds good, how quickly we can convert existing __GFP_NOFAIL users remains
> >> > > to be seen...
> >> >
> >> > I took a quick look at the file system ones and they look pretty easy.  I
> >> > think it would be good to do a quick scripted run for everything that does
> >> > GFP_KERNEL | __GFP_NOFAIL right now, and then spend a little time on
> >> > the rest.
> >
> > I assume you mean something like the below?
>
> This would work, but it looks too much like a workaround to fit the new
> rules without actually fulfilling the purpose of the scopes. I.e. it's
> possible this allocation is in fact part of a larger NOIO scope that should
> be marked accordingly, rather than just wrapping this single kmalloc.

Absolutely agreed, but the scope probably needs to be determined on a
case-by-case basis? The maintainers of each driver/module are better placed
to set the appropriate scope; it is difficult to assess this solely from the
mm perspective.
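
To make sure we're talking about the same thing, my understanding of the
proposal is roughly the below (a sketch only, not an actual patch; the exact
name and where it ends up being defined are of course up to this series):

	/* sketch: a blocking, non-failing allocation context, as discussed above */
	#define GFP_NOFAIL	(GFP_KERNEL | __GFP_NOFAIL)

and a driver that really is on an I/O path would then combine it with the
scoped API instead of passing GFP_NOIO | __GFP_NOFAIL directly, e.g.

	unsigned int noio_flags;

	noio_flags = memalloc_noio_save();
	ptr = kmalloc(size, GFP_NOFAIL);	/* ptr/size are just placeholders */
	memalloc_noio_restore(noio_flags);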

>
> > diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c
> > index a4550975c27d..b90ef94b1a09 100644
> > --- a/drivers/md/dm-region-hash.c
> > +++ b/drivers/md/dm-region-hash.c
> > @@ -291,10 +291,13 @@ static void __rh_insert(struct dm_region_hash *rh, struct dm_region *reg)
> >  static struct dm_region *__rh_alloc(struct dm_region_hash *rh, region_t region)
> >  {
> >         struct dm_region *reg, *nreg;
> > +       unsigned int orig_flags;
> >
> >         nreg = mempool_alloc(&rh->region_pool, GFP_ATOMIC);
> > +       orig_flags = memalloc_noio_save();
> >         if (unlikely(!nreg))
> > -               nreg = kmalloc(sizeof(*nreg), GFP_NOIO | __GFP_NOFAIL);
> > +               nreg = kmalloc(sizeof(*nreg), GFP_NOFAIL);
> > +       memalloc_noio_restore(orig_flags);
> >
> >         nreg->state = rh->log->type->in_sync(rh->log, region, 1) ?
> >                       DM_RH_CLEAN : DM_RH_NOSYNC;
>

Thanks
Barry




