The vmalloc() code uses vmalloc_sync_all() to synchronize changes to
the global reference kernel PGD to task PGDs in certain rare cases,
like register_die_notifier().

This use seems to be somewhat questionable, as most other vmalloc page
table fixups are vmalloc_fault() driven, but nevertheless it's there
and it's using the pgd_list.

But we don't need the global list, as we can walk the task list under
RCU.

Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Brian Gerst <brgerst@xxxxxxxxx>
Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
Cc: H. Peter Anvin <hpa@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Waiman Long <Waiman.Long@xxxxxx>
Cc: linux-mm@xxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 arch/x86/mm/fault.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index f890f5463ac1..9322d5ad3811 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -14,6 +14,7 @@
 #include <linux/prefetch.h>		/* prefetchw			*/
 #include <linux/context_tracking.h>	/* exception_enter(), ...	*/
 #include <linux/uaccess.h>		/* faulthandler_disabled()	*/
+#include <linux/oom.h>			/* find_lock_task_mm(), ...	*/
 
 #include <asm/traps.h>			/* dotraplinkage, ...		*/
 #include <asm/pgalloc.h>		/* pgd_*(), ...			*/
@@ -237,24 +238,38 @@ void vmalloc_sync_all(void)
 	for (address = VMALLOC_START & PMD_MASK;
 	     address >= TASK_SIZE && address < FIXADDR_TOP;
 	     address += PMD_SIZE) {
-		struct page *page;
+		struct task_struct *g;
+
+		rcu_read_lock(); /* Task list walk */
 		spin_lock(&pgd_lock);
-		list_for_each_entry(page, &pgd_list, lru) {
+
+		for_each_process(g) {
+			struct task_struct *p;
+			struct mm_struct *mm;
 			spinlock_t *pgt_lock;
-			pmd_t *ret;
+			pmd_t *pmd_ret;
+
+			p = find_lock_task_mm(g);
+			if (!p)
+				continue;
 
-			/* the pgt_lock only for Xen */
-			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
+			mm = p->mm;
+			/* The pgt_lock is only used on Xen: */
+			pgt_lock = &mm->page_table_lock;
 
 			spin_lock(pgt_lock);
-			ret = vmalloc_sync_one(page_address(page), address);
+			pmd_ret = vmalloc_sync_one(mm->pgd, address);
 			spin_unlock(pgt_lock);
 
-			if (!ret)
+			task_unlock(p);
+
+			if (!pmd_ret)
 				break;
 		}
+
 		spin_unlock(&pgd_lock);
+		rcu_read_unlock();
 	}
 }
-- 
2.1.4