On 2/6/24 22:50, Kent Overstreet wrote: > Introduce PF_MEMALLOC_* equivalents of some GFP_ flags: > > PF_MEMALLOC_NORECLAIM -> GFP_NOWAIT In an ideal world, this would be nice, but we are in a world with implicit "too small to fail" guarantees that has so far been impossible to get away from [1] for small order GFP_KERNEL allocations, and this scoping would be only safe if no allocations underneath relied on this behavior. But how to ensure that's the case? [1] https://lwn.net/Articles/723317/ > PF_MEMALLOC_NOWARN -> __GFP_NOWARN > > Cc: Vlastimil Babka <vbabka@xxxxxxx> > Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx> > Cc: Michal Hocko <mhocko@xxxxxxxxxx> > Cc: Darrick J. Wong <djwong@xxxxxxxxxx> > Cc: linux-mm@xxxxxxxxx > Signed-off-by: Kent Overstreet <kent.overstreet@xxxxxxxxx> > --- > include/linux/sched.h | 4 ++-- > include/linux/sched/mm.h | 17 +++++++++++++---- > 2 files changed, 15 insertions(+), 6 deletions(-) > > diff --git a/include/linux/sched.h b/include/linux/sched.h > index 292c31697248..ca08d92b20ac 100644 > --- a/include/linux/sched.h > +++ b/include/linux/sched.h > @@ -1755,8 +1755,8 @@ extern struct pid *cad_pid; > * I am cleaning dirty pages from some other bdi. */ > #define PF_KTHREAD 0x00200000 /* I am a kernel thread */ > #define PF_RANDOMIZE 0x00400000 /* Randomize virtual address space */ > -#define PF__HOLE__00800000 0x00800000 > -#define PF__HOLE__01000000 0x01000000 > +#define PF_MEMALLOC_NORECLAIM 0x00800000 /* All allocation requests will inherit __GFP_NOWARN */ > +#define PF_MEMALLOC_NOWARN 0x01000000 /* All allocation requests will inherit __GFP_NOWARN */ > #define PF__HOLE__02000000 0x02000000 > #define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_mask */ > #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */ > diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h > index f00d7ecc2adf..c29059a76052 100644 > --- a/include/linux/sched/mm.h > +++ b/include/linux/sched/mm.h > @@ -236,16 +236,25 @@ static inline gfp_t current_gfp_context(gfp_t flags) > { > unsigned int pflags = READ_ONCE(current->flags); > > - if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS | PF_MEMALLOC_PIN))) { > + if (unlikely(pflags & (PF_MEMALLOC_NOIO | > + PF_MEMALLOC_NOFS | > + PF_MEMALLOC_NORECLAIM | > + PF_MEMALLOC_NOWARN | > + PF_MEMALLOC_PIN))) { > /* > - * NOIO implies both NOIO and NOFS and it is a weaker context > - * so always make sure it makes precedence > + * Stronger flags before weaker flags: > + * NORECLAIM implies NOIO, which in turn implies NOFS > */ > - if (pflags & PF_MEMALLOC_NOIO) > + if (pflags & PF_MEMALLOC_NORECLAIM) > + flags &= ~__GFP_DIRECT_RECLAIM; > + else if (pflags & PF_MEMALLOC_NOIO) > flags &= ~(__GFP_IO | __GFP_FS); > else if (pflags & PF_MEMALLOC_NOFS) > flags &= ~__GFP_FS; > > + if (pflags & PF_MEMALLOC_NOWARN) > + flags |= __GFP_NOWARN; > + > if (pflags & PF_MEMALLOC_PIN) > flags &= ~__GFP_MOVABLE; > }