On Wed, Aug 19, 2020 at 11:08 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Wed, Aug 19, 2020 at 10:24:24AM +0800, Yafang Shao wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
> >
> > Since XFS needs to pretend to be kswapd in some of its worker threads,
> > create methods to save & restore kswapd state. Don't bother restoring
> > kswapd state in kswapd -- the only time we reach this code is when we're
> > exiting and the task_struct is about to be destroyed anyway.
> >
> > Cc: Dave Chinner <david@xxxxxxxxxxxxx>
> > Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> > Cc: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> > Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
>
> See https://lore.kernel.org/linux-mm/20200625123143.GK1320@xxxxxxxxxxxxxx/
>
> Please add:
>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>

Sure. I missed that discussion.

> > +/*
> > + * Tell the memory management that we're a "memory allocator",
> > + * and that if we need more memory we should get access to it
> > + * regardless (see "__alloc_pages()"). "kswapd" should
> > + * never get caught in the normal page freeing logic.
> > + *
> > + * (Kswapd normally doesn't need memory anyway, but sometimes
> > + * you need a small amount of memory in order to be able to
> > + * page out something else, and this flag essentially protects
> > + * us from recursively trying to free more memory as we're
> > + * trying to free the first piece of memory in the first place).
> > + */
>
> And let's change that comment as suggested by Michal (slightly edited
> by me):
>
> /*
>  * Tell the memory management code that this thread is working on behalf
>  * of background memory reclaim (like kswapd). That means that it will
>  * get access to memory reserves should it need to allocate memory in
>  * order to make forward progress. With this great power comes great
>  * responsibility to not exhaust those reserves.
>  */
>

I will update it with that comment.

> > +#define KSWAPD_PF_FLAGS (PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD)
> > +
> > +static inline unsigned long become_kswapd(void)
> > +{
> > +	unsigned long flags = current->flags & KSWAPD_PF_FLAGS;
> > +
> > +	current->flags |= KSWAPD_PF_FLAGS;
> > +
> > +	return flags;
> > +}
> > +
> > +static inline void restore_kswapd(unsigned long flags)
> > +{
> > +	current->flags &= ~(flags ^ KSWAPD_PF_FLAGS);
> > +}
> > +
> >  #ifdef CONFIG_MEMCG
> >  /**
> >   * memalloc_use_memcg - Starts the remote memcg charging scope.
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 99e1796eb833..3a2615bfde35 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -3859,19 +3859,7 @@ static int kswapd(void *p)
> >  	if (!cpumask_empty(cpumask))
> >  		set_cpus_allowed_ptr(tsk, cpumask);
> >
> > -	/*
> > -	 * Tell the memory management that we're a "memory allocator",
> > -	 * and that if we need more memory we should get access to it
> > -	 * regardless (see "__alloc_pages()"). "kswapd" should
> > -	 * never get caught in the normal page freeing logic.
> > -	 *
> > -	 * (Kswapd normally doesn't need memory anyway, but sometimes
> > -	 * you need a small amount of memory in order to be able to
> > -	 * page out something else, and this flag essentially protects
> > -	 * us from recursively trying to free more memory as we're
> > -	 * trying to free the first piece of memory in the first place).
> > -	 */
> > -	tsk->flags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;
> > +	become_kswapd();
> >  	set_freezable();
> >
> >  	WRITE_ONCE(pgdat->kswapd_order, 0);
> > @@ -3921,8 +3909,6 @@ static int kswapd(void *p)
> >  		goto kswapd_try_sleep;
> >  	}
> >
> > -	tsk->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD);
> > -
> >  	return 0;
> >  }
> >
> > --
> > 2.18.1
> >


--
Thanks
Yafang
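P.S. For anyone new to this pattern, a minimal sketch of the intended
call sequence in a worker thread. The example_reclaim_worker() function
below is hypothetical, for illustration only, and not part of the patch:

#include <linux/sched/mm.h>
#include <linux/workqueue.h>

static void example_reclaim_worker(struct work_struct *work)
{
	/* Save which of the kswapd flags were already set on this task. */
	unsigned long pflags = become_kswapd();

	/*
	 * Do work that may need to allocate memory in order to make
	 * forward progress, e.g. writing back dirty metadata.
	 */

	/* Clear only the flags that become_kswapd() actually added. */
	restore_kswapd(pflags);
}

Note that restore_kswapd() clears only the bits in (flags ^ KSWAPD_PF_FLAGS),
i.e. the kswapd flags that were not already set before become_kswapd() was
called, so any of those flags the caller had set beforehand are preserved.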