From: Paul Burton <paul.burton@xxxxxxxxxx>

[ Upstream commit f39878cc5b09c75d35eaf52131e920b872e3feb4 ]

In systems where there are multiple actors updating the TLB, the
potential exists for a race condition wherein a CPU hits a TLB exception
but by the time it reaches a TLBP instruction the affected TLB entry may
have been replaced. This can happen if, for example, a CPU shares the
TLB between hardware threads (VPs) within a core and one of them
replaces the entry that another has just taken a TLB exception for.

We handle this race in the case of the Hardware Table Walker (HTW)
being the other actor already, but didn't take into account the
potential for multiple threads racing. Include the code for aborting
TLB exception handling in affected multi-threaded systems, those being
the I6400 & I6500 CPUs which share TLB entries between VPs.

In the case of using RiXi without dedicated exceptions we have never
handled this race even for HTW. This patch adds WARN()s to these cases
which ought never to be hit because all CPUs with either HTW or shared
FTLB RAMs also include dedicated RiXi exceptions, but the WARN()s will
ensure this is always the case.

Signed-off-by: Paul Burton <paul.burton@xxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
Patchwork: https://patchwork.linux-mips.org/patch/16203/
Signed-off-by: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxxxx>
---
 arch/mips/mm/tlbex.c | 38 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 63b7d6f82d24..b639e12867ef 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -1876,6 +1876,26 @@ static void build_r3000_tlb_modify_handler(void)
 }
 #endif /* CONFIG_MIPS_PGD_C0_CONTEXT */
 
+static bool cpu_has_tlbex_tlbp_race(void)
+{
+	/*
+	 * When a Hardware Table Walker is running it can replace TLB entries
+	 * at any time, leading to a race between it & the CPU.
+	 */
+	if (cpu_has_htw)
+		return true;
+
+	/*
+	 * If the CPU shares FTLB RAM with its siblings then our entry may be
+	 * replaced at any time by a sibling performing a write to the FTLB.
+	 */
+	if (cpu_has_shared_ftlb_ram)
+		return true;
+
+	/* In all other cases there ought to be no race condition to handle */
+	return false;
+}
+
 /*
  * R4000 style TLB load/store/modify handlers.
  */
@@ -1912,7 +1932,7 @@ build_r4000_tlbchange_handler_head(u32 **p, struct uasm_label **l,
 	iPTE_LW(p, wr.r1, wr.r2); /* get even pte */
 	if (!m4kc_tlbp_war()) {
 		build_tlb_probe_entry(p);
-		if (cpu_has_htw) {
+		if (cpu_has_tlbex_tlbp_race()) {
 			/* race condition happens, leaving */
 			uasm_i_ehb(p);
 			uasm_i_mfc0(p, wr.r3, C0_INDEX);
@@ -1986,6 +2006,14 @@ static void build_r4000_tlb_load_handler(void)
 		}
 		uasm_i_nop(&p);
 
+		/*
+		 * Warn if something may race with us & replace the TLB entry
+		 * before we read it here. Everything with such races should
+		 * also have dedicated RiXi exception handlers, so this
+		 * shouldn't be hit.
+		 */
+		WARN(cpu_has_tlbex_tlbp_race(), "Unhandled race in RiXi path");
+
 		uasm_i_tlbr(&p);
 
 		switch (current_cpu_type()) {
@@ -2053,6 +2081,14 @@ static void build_r4000_tlb_load_handler(void)
 		}
 		uasm_i_nop(&p);
 
+		/*
+		 * Warn if something may race with us & replace the TLB entry
+		 * before we read it here. Everything with such races should
+		 * also have dedicated RiXi exception handlers, so this
+		 * shouldn't be hit.
+		 */
+		WARN(cpu_has_tlbex_tlbp_race(), "Unhandled race in RiXi path");
+
 		uasm_i_tlbr(&p);
 
 		switch (current_cpu_type()) {
-- 
2.15.1
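
For context, the abort sequence that build_r4000_tlbchange_handler_head() emits
when cpu_has_tlbex_tlbp_race() returns true continues past the lines visible in
the hunk above. A rough sketch of the full check, following the existing HTW
handling in tlbex.c (the uasm_il_bltz() line is paraphrased from the surrounding
code and is not part of this diff):

	if (cpu_has_tlbex_tlbp_race()) {
		/* race condition happens, leaving */
		uasm_i_ehb(p);				/* wait for the TLBP above to complete */
		uasm_i_mfc0(p, wr.r3, C0_INDEX);	/* Index.P (bit 31) set => probe missed */
		uasm_il_bltz(p, r, wr.r3, label_leave);	/* entry replaced; abort & retry */
	}

In other words, if the HTW or a sibling VP replaced the entry between the
exception and the TLBP, the Index register reads back negative and the handler
bails out so the access is simply retried, rather than operating on a stale
entry.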