Even if multiple ASIDs are not supported, using the single-ASID variant
of the sfence.vma instruction preserves TLB entries for global (kernel)
pages. So it is always more efficient to use the single-ASID code path.

Reviewed-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
Signed-off-by: Samuel Holland <samuel.holland@xxxxxxxxxx>
---

Changes in v5:
 - Leave use_asid_allocator declared in asm/mmu_context.h

Changes in v4:
 - There is now only one copy of __flush_tlb_range()

Changes in v2:
 - Update both copies of __flush_tlb_range()

 arch/riscv/mm/tlbflush.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index e194e14e5b2b..5b473588a985 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -108,8 +108,7 @@ static void __flush_tlb_range(struct cpumask *cmask, unsigned long asid,
 
 static inline unsigned long get_mm_asid(struct mm_struct *mm)
 {
-	return static_branch_unlikely(&use_asid_allocator) ?
-		cntx2asid(atomic_long_read(&mm->context.id)) : FLUSH_TLB_NO_ASID;
+	return cntx2asid(atomic_long_read(&mm->context.id));
 }
 
 void flush_tlb_mm(struct mm_struct *mm)
-- 
2.43.1
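
[Editor's note, not part of the patch: a minimal sketch of the
single-ASID flush the commit message refers to. It resembles the
kernel's local_flush_tlb_all_asid() helper in shape only; the function
name here is illustrative and the real helper may differ in detail.]

	/*
	 * Illustrative only: issue the single-ASID form of sfence.vma.
	 * With rs1 = x0 (all virtual addresses) and rs2 = asid, the
	 * fence applies only to non-global entries matching that ASID,
	 * so global (G-bit) kernel mappings survive in the TLB. This
	 * holds even on hardware that implements no ASID bits, where
	 * the only valid ASID is 0.
	 */
	static inline void flush_tlb_all_asid_sketch(unsigned long asid)
	{
		__asm__ __volatile__ ("sfence.vma x0, %0"
				      :
				      : "r" (asid)
				      : "memory");
	}

[On a system without the ASID allocator, mm->context.id presumably
stays at its initial value, so cntx2asid() yields ASID 0 and the
patched get_mm_asid() keeps such systems on the single-ASID path
instead of returning FLUSH_TLB_NO_ASID.]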