On Wed, 24 Feb 2016, Huacai Chen wrote:

> If a tlb miss is triggered when EXL=1, the tlb refill exception is
> treated as a tlb invalid exception, so tlbp may fail.  In this
> situation, the CP0_Index register doesn't contain a valid value.  This
> may not be a problem for the VTLB since it is fully-associative.
> However, the FTLB is set-associative, so not every tlb entry is valid
> for a specific address.  Thus, we should use tlbwr instead of tlbwi
> when tlbp fails.

 Can you please explain exactly why this change is needed?  You're
changing pretty generic code which has worked since forever.  So why is
a change suddenly needed?

 Our kernel entry/exit code has been written such that no mapped memory
is accessed with EXL=1, so no TLB exception is expected to ever happen
in these circumstances.  So what's the scenario you mean to address?
Your patch description does not explain it.

> @@ -1913,7 +1935,14 @@ build_r4000_tlbchange_handler_tail(u32 **p, struct uasm_label **l,
>  	uasm_i_ori(p, ptr, ptr, sizeof(pte_t));
>  	uasm_i_xori(p, ptr, ptr, sizeof(pte_t));
>  	build_update_entries(p, tmp, ptr);
> +	uasm_i_mfc0(p, ptr, C0_INDEX);
> +	uasm_il_bltz(p, r, ptr, label_tail_miss);
> +	uasm_i_nop(p);
>  	build_tlb_write_entry(p, l, r, tlb_indexed);
> +	uasm_il_b(p, r, label_leave);
> +	uasm_i_nop(p);
> +	uasm_l_tail_miss(l, *p);
> +	build_tlb_write_entry(p, l, r, tlb_random);
>  	uasm_l_leave(l, *p);
>  	build_restore_work_registers(p);
>  	uasm_i_eret(p); /* return from trap */

 Specifically, you're causing a performance hit here, on a fast path,
for everyone.  If you have a scenario that needs this change, then
please make it conditional on the circumstances and keep the handler
unchanged in all the other cases.

  Maciej
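
 P.S.  For the avoidance of doubt, by "conditional on the circumstances"
I mean something along these lines: since your rationale only applies
where an FTLB is present, the probe check could be emitted only in that
case.  A rough, untested sketch, assuming `cpu_has_ftlb' is a usable
predicate at handler build time and that `label_tail_miss' is declared
as in your patch:

	build_update_entries(p, tmp, ptr);
	if (cpu_has_ftlb) {
		/* FTLB: tlbp may have missed, so check the Index sign bit. */
		uasm_i_mfc0(p, ptr, C0_INDEX);
		uasm_il_bltz(p, r, ptr, label_tail_miss);
		uasm_i_nop(p);
		/* Probe hit: overwrite the matching entry in place. */
		build_tlb_write_entry(p, l, r, tlb_indexed);
		uasm_il_b(p, r, label_leave);
		uasm_i_nop(p);
		uasm_l_tail_miss(l, *p);
		/* Probe miss: fall back to a random write. */
		build_tlb_write_entry(p, l, r, tlb_random);
	} else {
		/* No FTLB: the original single indexed write. */
		build_tlb_write_entry(p, l, r, tlb_indexed);
	}
	uasm_l_leave(l, *p);

Since the handler is assembled at boot, the `cpu_has_ftlb' check costs
nothing at run time: CPUs without an FTLB get exactly the code they run
today, and only FTLB systems pay for the extra MFC0 and branch.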