On Tue, Nov 08, 2022 at 07:54:35PM -0800, Andy Lutomirski wrote:
> On 11/7/22 13:35, Kirill A. Shutemov wrote:
> > Linear Address Masking mode for userspace pointers encoded in CR3 bits.
> > The mode is selected per-process and stored in mm_context_t.
> >
> > switch_mm_irqs_off() now respects selected LAM mode and constructs CR3
> > accordingly.
> >
> > The active LAM mode gets recorded in the tlb_state.
> >
> > +static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
> > +{
> > +        return mm->context.lam_cr3_mask;
>
> READ_ONCE -- otherwise this has a data race and might generate sanitizer
> complaints.

Yep, thanks for pointing it out. See the sketch of the pairing below,
before the fixup.

> > +}
>
> > @@ -491,6 +496,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> >  {
> >          struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
> >          u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
> > +        unsigned long prev_lam = tlbstate_lam_cr3_mask();
> > +        unsigned long new_lam = mm_lam_cr3_mask(next);
>
> So I'm reading this again after drinking a cup of coffee. new_lam is next's
> LAM mask according to mm_struct (and thus can change asynchronously due to a
> remote CPU). prev_lam is based on tlbstate and can't change asynchronously,
> at least not with IRQs off.
>
> >          bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
> >          unsigned cpu = smp_processor_id();
> >          u64 next_tlb_gen;
> > @@ -520,7 +527,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> >           * isn't free.
> >           */
> >  #ifdef CONFIG_DEBUG_VM
> > -        if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid))) {
> > +        if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid, prev_lam))) {
>
> So is the only purpose of tlbstate_lam_cr3_mask() to enable this warning to
> work?

Right. And with CONFIG_DEBUG_VM disabled, 'prev_lam' is unused, which
triggers a compiler warning. See the fixup below.

> >                  /*
> >                   * If we were to BUG here, we'd be very likely to kill
> >                   * the system so hard that we don't see the call trace.
> > @@ -552,9 +559,15 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> >           * instruction.
> >           */
> >          if (real_prev == next) {
> > +                /* Not actually switching mm's */
> >                  VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
> >                             next->context.ctx_id);
> >
> > +                /*
> > +                 * If this races with another thread that enables lam, 'new_lam'
> > +                 * might not match 'prev_lam'.
> > +                 */
> > +
>
> Indeed.
>
> >                  /*
> >                   * Even in lazy TLB mode, the CPU should stay set in the
> >                   * mm_cpumask. The TLB shootdown code can figure out from
> > @@ -622,15 +635,16 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
> >                  barrier();
> >          }
>
> > @@ -691,6 +705,10 @@ void initialize_tlbstate_and_flush(void)
> >          /* Assert that CR3 already references the right mm. */
> >          WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(mm->pgd));
> >
> > +        /* LAM expected to be disabled in CR3 and init_mm */
> > +        WARN_ON(cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57));
> > +        WARN_ON(mm_lam_cr3_mask(&init_mm));
> > +
>
> I think the callers all have init_mm selected, but the rest of this function
> is not really written with this assumption. (But it does force ASID 0,
> which is at least a bizarre thing to do for non-init-mm.)

Hm. It uses the tlb_gen of init_mm, so I assumed mm == &init_mm, but
yeah, it is not strictly correct.

> What's the purpose of this warning? I'm okay with keeping it, but maybe
> also add a warning that fires if mm != &init_mm.

Just to make sure we are in a sane state. I can drop the init_mm
reference if it helps; a sketch of the mm != &init_mm check you suggest
is below as well.
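On the READ_ONCE point, to spell the race out: the side that enables LAM
for the mm can update lam_cr3_mask from another CPU while this CPU is
reading it in switch_mm_irqs_off(). A minimal sketch of the pairing; the
writer-side function name here is made up for illustration, it is not
the actual patch code:

        /* Writer: runs on another CPU, e.g. from a prctl() handler. */
        static void mm_set_lam_mask(struct mm_struct *mm, unsigned long lam)
        {
                /* Pairs with READ_ONCE() in mm_lam_cr3_mask(). */
                WRITE_ONCE(mm->context.lam_cr3_mask, lam);
        }

        /* Reader: can run concurrently with the writer above. */
        static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
        {
                /*
                 * A plain load would be a data race. READ_ONCE() does not
                 * make the race go away, it makes it intentional: the value
                 * is loaded exactly once and KCSAN stays quiet.
                 */
                return READ_ONCE(mm->context.lam_cr3_mask);
        }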
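For context on what build_cr3() composes now that it takes the extra
argument: the LAM mode is just two bits ORed into CR3 next to the page
table root and the PCID. Roughly this, simplified from the real helper
(the ASID sanity checks and the !X86_FEATURE_PCID case are elided):

        static inline unsigned long build_cr3(pgd_t *pgd, u16 asid,
                                              unsigned long lam)
        {
                /*
                 * CR3 carries the physical address of the top-level page
                 * table, the PCID in the low 12 bits, and the LAM mode in
                 * bits 61/62 (X86_CR3_LAM_U57/X86_CR3_LAM_U48).
                 */
                return __sme_pa(pgd) | kern_pcid(asid) | lam;
        }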
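And if the init_mm assumption is worth enforcing rather than relaxing,
the extra warning you suggest for initialize_tlbstate_and_flush() would
be a one-liner on top of the fixup (untested):

        /* Catch callers that do not have init_mm loaded. */
        WARN_ON(mm != &init_mm);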
The fixup based on your feedback:

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 1ab7ecf61659..6f5b58a5f951 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -94,7 +94,7 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
 #ifdef CONFIG_X86_64
 static inline unsigned long mm_lam_cr3_mask(struct mm_struct *mm)
 {
-        return mm->context.lam_cr3_mask;
+        return READ_ONCE(mm->context.lam_cr3_mask);
 }
 
 static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 4380776b3c61..ab66a48f38ce 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -496,7 +496,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 {
         struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
         u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
-        unsigned long prev_lam = tlbstate_lam_cr3_mask();
         unsigned long new_lam = mm_lam_cr3_mask(next);
         bool was_lazy = this_cpu_read(cpu_tlbstate_shared.is_lazy);
         unsigned cpu = smp_processor_id();
@@ -527,7 +526,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
          * isn't free.
          */
 #ifdef CONFIG_DEBUG_VM
-        if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid, prev_lam))) {
+        if (WARN_ON_ONCE(__read_cr3() != build_cr3(real_prev->pgd, prev_asid,
+                                                   tlbstate_lam_cr3_mask()))) {
                 /*
                  * If we were to BUG here, we'd be very likely to kill
                  * the system so hard that we don't see the call trace.
@@ -565,7 +565,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 
                 /*
                  * If this races with another thread that enables lam, 'new_lam'
-                 * might not match 'prev_lam'.
+                 * might not match tlbstate_lam_cr3_mask().
                  */
 
                 /*
@@ -705,9 +705,9 @@ void initialize_tlbstate_and_flush(void)
         /* Assert that CR3 already references the right mm. */
         WARN_ON((cr3 & CR3_ADDR_MASK) != __pa(mm->pgd));
 
-        /* LAM expected to be disabled in CR3 and init_mm */
+        /* LAM expected to be disabled */
         WARN_ON(cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57));
-        WARN_ON(mm_lam_cr3_mask(&init_mm));
+        WARN_ON(mm_lam_cr3_mask(mm));
 
         /*
          * Assert that CR4.PCIDE is set if needed.  (CR4.PCIDE initialization

-- 
  Kiryl Shutsemau / Kirill A. Shutemov