3.2.61-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Hugh Dickins <hughd@xxxxxxxxxx>

commit 7f39dda9d86fb4f4f17af0de170decf125726f8c upstream.

Trinity reports BUG:

	sleeping function called from invalid context at kernel/locking/rwsem.c:47
	in_atomic(): 0, irqs_disabled(): 0, pid: 5787, name: trinity-c27
	__might_sleep < down_write < __put_anon_vma < page_get_anon_vma <
	migrate_pages < compact_zone < compact_zone_order < try_to_compact_pages ..

Right, since conversion to mutex then rwsem, we should not put_anon_vma()
from inside an rcu_read_lock()ed section: fix the two places that did so.
And add might_sleep() to anon_vma_free(), as suggested by Peter Zijlstra.

Fixes: 88c22088bf23 ("mm: optimize page_lock_anon_vma() fast-path")
Reported-by: Dave Jones <davej@xxxxxxxxxx>
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
 mm/rmap.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -103,6 +103,7 @@ static inline void anon_vma_free(struct
	 * LOCK should suffice since the actual taking of the lock must
	 * happen _before_ what follows.
	 */
+	might_sleep();
	if (mutex_is_locked(&anon_vma->root->mutex)) {
		anon_vma_lock(anon_vma);
		anon_vma_unlock(anon_vma);
@@ -434,8 +435,9 @@ struct anon_vma *page_get_anon_vma(struc
	 * above cannot corrupt).
	 */
	if (!page_mapped(page)) {
+		rcu_read_unlock();
		put_anon_vma(anon_vma);
-		anon_vma = NULL;
+		return NULL;
	}
out:
	rcu_read_unlock();
@@ -485,9 +487,9 @@ struct anon_vma *page_lock_anon_vma(stru
	}

	if (!page_mapped(page)) {
+		rcu_read_unlock();
		put_anon_vma(anon_vma);
-		anon_vma = NULL;
-		goto out;
+		return NULL;
	}

	/* we pinned the anon_vma, its safe to sleep */
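
Reviewer note on the constraint being fixed, for anyone less familiar with
it: an rwsem down_write() may block, while code between rcu_read_lock() and
rcu_read_unlock() must not call anything that can sleep; the final
put_anon_vma() can reach down_write() via __put_anon_vma(), hence the BUG
above.  The following is a minimal sketch of the before/after shape of the
fix, not the kernel's actual anon_vma code; struct obj, obj_put() and
obj_free() are hypothetical stand-ins.

#include <linux/atomic.h>
#include <linux/kernel.h>	/* might_sleep() */
#include <linux/rcupdate.h>
#include <linux/rwsem.h>
#include <linux/slab.h>

/* Hypothetical refcounted object, standing in for anon_vma. */
struct obj {
	atomic_t refcount;
	struct rw_semaphore sem;
};

static void obj_free(struct obj *obj)
{
	/*
	 * As in the patch: assert up front that sleeping is legal here,
	 * so misuse is caught even when the rwsem is uncontended.
	 */
	might_sleep();
	down_write(&obj->sem);	/* may block */
	up_write(&obj->sem);
	kfree(obj);
}

static void obj_put(struct obj *obj)
{
	if (atomic_dec_and_test(&obj->refcount))
		obj_free(obj);	/* the final put may sleep */
}

/* Broken shape (pre-patch): the final put can sleep under RCU. */
static void put_under_rcu(struct obj *obj)
{
	rcu_read_lock();
	/* ... revalidate obj ... */
	obj_put(obj);		/* BUG: may down_write() inside RCU read side */
	rcu_read_unlock();
}

/* Fixed shape (post-patch): leave the RCU section first, then put. */
static void put_after_rcu(struct obj *obj)
{
	rcu_read_lock();
	/* ... revalidate obj ... */
	rcu_read_unlock();
	obj_put(obj);		/* sleeping is legal here */
}

Note the ordering in the fixed paths: the reference taken earlier is what
pins the object, so it stays valid after rcu_read_unlock() and the put can
safely run outside the read-side critical section.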