On Fri 15-12-17 15:04:29, Andrew Morton wrote:
> On Thu, 14 Dec 2017 13:30:56 -0800 (PST) David Rientjes <rientjes@xxxxxxxxxx> wrote:
> 
> > Commit 4d4bbd8526a8 ("mm, oom_reaper: skip mm structs with mmu notifiers")
> > prevented the oom reaper from unmapping private anonymous memory when the
> > oom victim mm had mmu notifiers registered.
> > 
> > The rationale is that the mmu_notifier_invalidate_range_{start,end}()
> > calls required around unmap_page_range() can block, so the oom killer
> > would stall forever waiting for the victim to exit, which may not be
> > possible without reaping.
> > 
> > That concern is real, but only true for mmu notifiers that have blockable
> > invalidate_range_{start,end}() callbacks. This patch adds a "flags" field
> > to mmu notifier ops that can set a bit to indicate that these callbacks do
> > not block.
> > 
> > The implementation is steered toward an expensive slowpath, such as after
> > the oom reaper has grabbed mm->mmap_sem of a still alive oom victim.
> 
> some tweakage, please review.
> 
> From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Subject: mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix
> 
> make mm_has_blockable_invalidate_notifiers() return bool, use rwsem_is_locked()

Yes, that makes sense to me.
> Cc: Alex Deucher <alexander.deucher@xxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
> Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> Cc: Christian König <christian.koenig@xxxxxxx>
> Cc: David Airlie <airlied@xxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Dimitri Sivanich <sivanich@xxxxxxx>
> Cc: Doug Ledford <dledford@xxxxxxxxxx>
> Cc: Jani Nikula <jani.nikula@xxxxxxxxxxxxxxx>
> Cc: Jérôme Glisse <jglisse@xxxxxxxxxx>
> Cc: Joerg Roedel <joro@xxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Mike Marciniszyn <mike.marciniszyn@xxxxxxxxx>
> Cc: Oded Gabbay <oded.gabbay@xxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Paul Mackerras <paulus@xxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Cc: Sean Hefty <sean.hefty@xxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
> 
>  include/linux/mmu_notifier.h |    7 ++++---
>  mm/mmu_notifier.c            |    8 ++++----
>  2 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff -puN include/linux/mmu_notifier.h~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix include/linux/mmu_notifier.h
> --- a/include/linux/mmu_notifier.h~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix
> +++ a/include/linux/mmu_notifier.h
> @@ -2,6 +2,7 @@
>  #ifndef _LINUX_MMU_NOTIFIER_H
>  #define _LINUX_MMU_NOTIFIER_H
>  
> +#include <linux/types.h>
>  #include <linux/list.h>
>  #include <linux/spinlock.h>
>  #include <linux/mm_types.h>
> @@ -233,7 +234,7 @@ extern void __mmu_notifier_invalidate_ra
>  				  bool only_end);
>  extern void __mmu_notifier_invalidate_range(struct mm_struct *mm,
>  				  unsigned long start, unsigned long end);
> -extern int mm_has_blockable_invalidate_notifiers(struct mm_struct *mm);
> +extern bool mm_has_blockable_invalidate_notifiers(struct mm_struct *mm);
>  
>  static inline void mmu_notifier_release(struct mm_struct *mm)
>  {
> @@ -473,9 +474,9 @@ static inline void mmu_notifier_invalida
>  {
>  }
>  
> -static inline int mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
> +static inline bool mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
>  {
> -	return 0;
> +	return false;
>  }
>  
>  static inline void mmu_notifier_mm_init(struct mm_struct *mm)
> diff -puN mm/mmu_notifier.c~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix mm/mmu_notifier.c
> --- a/mm/mmu_notifier.c~mm-mmu_notifier-annotate-mmu-notifiers-with-blockable-invalidate-callbacks-fix
> +++ a/mm/mmu_notifier.c
> @@ -240,13 +240,13 @@ EXPORT_SYMBOL_GPL(__mmu_notifier_invalid
>   * Must be called while holding mm->mmap_sem for either read or write.
>   * The result is guaranteed to be valid until mm->mmap_sem is dropped.
>   */
> -int mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
> +bool mm_has_blockable_invalidate_notifiers(struct mm_struct *mm)
>  {
>  	struct mmu_notifier *mn;
>  	int id;
> -	int ret = 0;
> +	bool ret = false;
>  
> -	WARN_ON_ONCE(down_write_trylock(&mm->mmap_sem));
> +	WARN_ON_ONCE(!rwsem_is_locked(&mm->mmap_sem));
>  
>  	if (!mm_has_notifiers(mm))
>  		return ret;
> @@ -259,7 +259,7 @@ int mm_has_blockable_invalidate_notifier
>  			continue;
>  
>  		if (!(mn->ops->flags & MMU_INVALIDATE_DOES_NOT_BLOCK)) {
> -			ret = 1;
> +			ret = true;
>  			break;
>  		}
>  	}
> _
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@xxxxxxxxx. For more info on Linux MM,
> see: http://www.linux-mm.org/ .

-- 
Michal Hocko
SUSE Labs