[Cc Petr for the lockdep part - the patch is
http://lkml.kernel.org/r/1552040522-9085-1-git-send-email-penguin-kernel@xxxxxxxxxxxxxxxxxxx]

On Fri 08-03-19 20:29:46, Tetsuo Handa wrote:
> On 2019/03/08 20:03, Michal Hocko wrote:
> > On Fri 08-03-19 19:22:02, Tetsuo Handa wrote:
> >> Since we are not allowed to depend on blocking memory allocations when
> >> oom_lock is already held, teach lockdep to consider that blocking memory
> >> allocations might wait for oom_lock at as early a location as possible,
> >> and teach lockdep to consider that oom_lock is held by mutex_lock()
> >> rather than by mutex_trylock().
> >
> > I do not understand this. It is quite likely that we will have multiple
> > allocations hitting this path while somebody else might hold the oom
> > lock.
>
> The thread that succeeded in holding oom_lock must not involve blocking
> memory allocations. It is explained in the comment before
> get_page_from_freelist().

Yes, this is correct.

> > What kind of problem does this actually want to prevent? Could you be
> > more specific please?
>
> e.g.
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3688,6 +3688,7 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
>  	 * attempt shall not depend on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY
>  	 * allocation which will never fail due to oom_lock already held.
>  	 */
> +	kfree(kmalloc(PAGE_SIZE, GFP_NOIO));
>  	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
>  					~__GFP_DIRECT_RECLAIM, order,
>  					ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
>
> Since https://lore.kernel.org/lkml/20190308013134.GB4063@jagdpanzerIV/T/#u
> made me worry that we might by error introduce such a dependency in the
> near future, I propose this change as a proactive protection.

OK, that makes sense to me. I cannot judge the implementation because I am
not really familiar with the lockdep machinery. Could you explain how it
doesn't trigger for all other allocations? Also, why is it not sufficient
to add the lockdep annotation prior to the trylock in __alloc_pages_may_oom?
-- 
Michal Hocko
SUSE Labs
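
For context, the general style of annotation being discussed resembles the
existing fs_reclaim annotation in mm/page_alloc.c: acquire and immediately
release a dummy lockdep map on every allocation that could end up waiting
for oom_lock, so lockdep records the "blocking allocation -> oom_lock"
dependency edge even when no deadlock actually occurs. The sketch below is
purely illustrative and is not Tetsuo's patch; the map and helper names
(oom_lock_map, oom_lock_acquire_annotation) are made up for this example.

/*
 * Illustrative only -- not the actual patch.  Record a lockdep dependency
 * on a dummy "oom_lock" map for any allocation that may block on oom_lock,
 * i.e. __GFP_DIRECT_RECLAIM without __GFP_NORETRY.
 */
#include <linux/lockdep.h>
#include <linux/gfp.h>

static struct lockdep_map oom_lock_map =
	STATIC_LOCKDEP_MAP_INIT("oom_lock", &oom_lock_map);

static void oom_lock_acquire_annotation(gfp_t gfp_mask)
{
	/* Only allocations that can reach the OOM killer matter here. */
	if ((gfp_mask & __GFP_DIRECT_RECLAIM) && !(gfp_mask & __GFP_NORETRY)) {
		lock_map_acquire(&oom_lock_map);
		lock_map_release(&oom_lock_map);
	}
}

With something like this called early in the allocation path, lockdep would
flag a kmalloc(GFP_NOIO) performed while oom_lock is held as a potential
deadlock, which is the class of bug the proposed change aims to catch.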