From: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Date: Wed, 31 May 2017 11:25:24 -0400

> +	for_each_online_cpu(cpu) {
> +		/*
> +		 * If a new mm is stored after we took this mm from the array,
> +		 * it will go into get_new_mmu_context() path, because we
> +		 * already bumped the version in tlb_context_cache.
> +		 */
> +		mm = per_cpu(per_cpu_secondary_mm, cpu);
> +
> +		if (unlikely(!mm || mm == &init_mm))
> +			continue;
> +
> +		old_ctx = mm->context.sparc64_ctx_val;
> +		if (likely((old_ctx & CTX_VERSION_MASK) == old_ver)) {
> +			new_ctx = (old_ctx & ~CTX_VERSION_MASK) | new_ver;
> +			set_bit(new_ctx & CTX_NR_MASK, mmu_context_bmap);
> +			mm->context.sparc64_ctx_val = new_ctx;

I wonder if there is a potential use-after-free here. What synchronizes
the per-cpu mm pointers with free_mm()? For example, what stops another
CPU from exiting a thread and dropping the mm between the per_cpu()
read of the 'mm' pointer and the tests and sets you do a few lines
later?
--
To unsubscribe from this list: send the line "unsubscribe sparclinux" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
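To make the question concrete, a sketch of the interleaving being asked about (the mmput()/__mmdrop() path shown for CPU B is an assumption about how the mm would get freed; only the CPU A lines are from the quoted patch):

```c
/*
 *   CPU A (version-wrap loop)                  CPU B (task exiting)
 *   -------------------------                  --------------------
 *   mm = per_cpu(per_cpu_secondary_mm, cpu);
 *                                              mmput(mm);     /* drops last mm_users */
 *                                              __mmdrop(mm);  /* mm_struct freed    */
 *   old_ctx = mm->context.sparc64_ctx_val;     /* <-- reads freed memory */
 *   ...
 *   mm->context.sparc64_ctx_val = new_ctx;     /* <-- writes freed memory */
 */
```

Unless something pins the mm (a reference taken before the read, or a lock that the free path also takes) between the per_cpu() load and the last store, the window above seems open.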