Re: [PATCHv3 4/8] x86/mm: Handle LAM on context switch

On 6/10/22 07:35, Kirill A. Shutemov wrote:
Linear Address Masking mode for userspace pointers is encoded in CR3 bits.
The mode is selected per-thread. Add new thread features to indicate that the
thread has Linear Address Masking enabled.

switch_mm_irqs_off() now respects these flags and constructs CR3
accordingly.

The active LAM mode gets recorded in the tlb_state.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
---
  arch/x86/include/asm/mmu.h         |  1 +
  arch/x86/include/asm/mmu_context.h | 24 ++++++++++++
  arch/x86/include/asm/tlbflush.h    |  3 ++
  arch/x86/mm/tlb.c                  | 62 ++++++++++++++++++++++--------
  4 files changed, 75 insertions(+), 15 deletions(-)
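
A simplified sketch, for reference, of how the per-mm mask is meant to land in CR3 per the commit message; the actual build_cr3() change in tlb.c is not quoted in this reply, so the three-argument form below is an assumption, and the PCID feature check in the real helper is omitted:

/*
 * Illustration only, not the patch's exact code: the mask stored in
 * mm->context.lam_cr3_mask holds the LAM_U48/LAM_U57 CR3 bits already
 * in position, so building CR3 is just OR'ing it in next to the PCID.
 */
static inline unsigned long build_cr3(pgd_t *pgd, u16 asid, u64 lam)
{
	return __sme_pa(pgd) | kern_pcid(asid) | lam;
}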

diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
index 5d7494631ea9..d150e92163b6 100644
--- a/arch/x86/include/asm/mmu.h
+++ b/arch/x86/include/asm/mmu.h
@@ -40,6 +40,7 @@ typedef struct {
 
 #ifdef CONFIG_X86_64
 	unsigned short flags;
+	u64 lam_cr3_mask;
 #endif
 
 	struct mutex lock;
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index b8d40ddeab00..e6eac047c728 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -91,6 +91,29 @@ static inline void switch_ldt(struct mm_struct *prev, struct mm_struct *next)
 }
 #endif
 
+#ifdef CONFIG_X86_64
+static inline u64 mm_cr3_lam_mask(struct mm_struct *mm)
+{
+	return mm->context.lam_cr3_mask;
+}
+
+static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
+{
+	mm->context.lam_cr3_mask = oldmm->context.lam_cr3_mask;
+}
+
+#else
+
+static inline u64 mm_cr3_lam_mask(struct mm_struct *mm)
+{
+	return 0;
+}
+
+static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
+{
+}
+#endif

Do we really need the ifdeffery here? I see no real harm in having the field exist on 32-bit -- we don't care much about performance for 32-bit kernels.
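
For illustration, a rough sketch of the un-ifdeffed variant being suggested, assuming lam_cr3_mask simply lives in mm_context_t for both builds (this is not code from the posted patch):

/*
 * Sketch only: with the field present unconditionally, a 32-bit build
 * never sets it, so mm_cr3_lam_mask() just returns 0 there and the
 * #else stubs disappear.
 */
static inline u64 mm_cr3_lam_mask(struct mm_struct *mm)
{
	return mm->context.lam_cr3_mask;
}

static inline void dup_lam(struct mm_struct *oldmm, struct mm_struct *mm)
{
	mm->context.lam_cr3_mask = oldmm->context.lam_cr3_mask;
}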

-	if (real_prev == next) {
+	if (real_prev == next && prev_lam == new_lam) {
 		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
 			   next->context.ctx_id);

This looks wrong to me. If we switch threads within the same mm but LAM changes (which is certainly possible, via a race if nothing else), then this will go down the "we really are changing mms" path, not the "we're not changing, but we might need to flush something" path.
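
One way to restructure it, sketched under the assumption that this series also teaches load_new_mm_cr3() to take the LAM mask (that hunk isn't quoted above): keep the branch keyed on the mm alone and handle a LAM difference inside the same-mm path, so it just reloads CR3 with the new mask rather than taking the full mm-switch path.

	/* Illustrative control flow, not a tested patch. */
	if (real_prev == next) {
		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
			   next->context.ctx_id);

		if (prev_lam != new_lam) {
			/*
			 * Another thread of this mm changed LAM under us:
			 * keep the current ASID and reload CR3 with the new
			 * mask.  A LAM change by itself shouldn't require a
			 * TLB flush, since the translations are unchanged.
			 */
			load_new_mm_cr3(next->pgd, prev_asid, new_lam, false);
			/* ...and refresh the LAM mask cached in cpu_tlbstate. */
		}

		/* ... existing lazy-TLB / tlb_gen handling ... */
	} else {
		/* Genuinely switching mms. */
	}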



