On 11/11/2013 12:10 PM, David Daney wrote:
On 11/07/2013 09:08 AM, Markos Chandras wrote:
From: Leonid Yegoshin <Leonid.Yegoshin@xxxxxxxxxx>
The TLBINVF instruction can be used to flush the entire VTLB.
This eliminates the need for the TLBWI loop and improves performance.
Reviewed-by: Paul Burton <paul.burton@xxxxxxxxxx>
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@xxxxxxxxxx>
Signed-off-by: Markos Chandras <markos.chandras@xxxxxxxxxx>
This should be split into two patches, one for each file.
Also...
---
arch/mips/include/asm/mipsregs.h | 13 +++++++++++++
arch/mips/mm/tlb-r4k.c | 18 ++++++++++++------
2 files changed, 25 insertions(+), 6 deletions(-)
diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 412fe99..9cd0e13 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -685,6 +685,19 @@ static inline int mm_insn_16bit(u16 insn)
}
/*
+ * TLB Invalidate Flush
+ */
+static inline void tlbinvf(void)
+{
+ __asm__ __volatile__(
+ ".set push\n\t"
+ ".set noreorder\n\t"
... Why do you need noreorder here?
Historical reasons: I just copied working code from right before this
function and didn't bother asking why it is needed in the other functions.
+ ".word 0x42000004\n\t" /* tlbinvf */
+ ".set pop");
+}
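For what it's worth, the magic number decodes as COP0 (opcode 0x10 << 26)
with the CO bit set, plus function field 4, which is TLBINVF. If the
noreorder really is cargo-culted, the helper could shrink to the sketch
below (untested; once nothing is being toggled, the .set push/pop pair
becomes unnecessary as well):

static inline void tlbinvf(void)
{
	/* TLBINVF == COP0 (0x42000000, CO bit set) | function field 4 */
	__asm__ __volatile__(".word 0x42000004");
}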
+
+
+/*
* Functions to access the R10000 performance counters. These are basically
* mfc0 and mtc0 instructions from and to coprocessor register with a 5-bit
* performance counter number encoded into bits 1 ... 5 of the instruction.
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index 363aa03..427dcac 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -83,13 +83,19 @@ void local_flush_tlb_all(void)
entry = read_c0_wired();
/* Blast 'em all away. */
- while (entry < current_cpu_data.tlbsize) {
- /* Make sure all entries differ. */
- write_c0_entryhi(UNIQUE_ENTRYHI(entry));
- write_c0_index(entry);
+ if (cpu_has_tlbinv && current_cpu_data.tlbsize) {
+ write_c0_index(0);
mtc0_tlbw_hazard();
- tlb_write_indexed();
- entry++;
+ tlbinvf(); /* invalidate VTLB */
+ } else {
+ while (entry < current_cpu_data.tlbsize) {
+ /* Make sure all entries differ. */
+ write_c0_entryhi(UNIQUE_ENTRYHI(entry));
+ write_c0_index(entry);
+ mtc0_tlbw_hazard();
+ tlb_write_indexed();
+ entry++;
+ }
}
tlbw_use_hazard();
write_c0_entryhi(old_ctx);
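For reviewers unfamiliar with the feature flag: cpu_has_tlbinv presumably
follows the usual cpu-features.h pattern, roughly the sketch below (from
memory, not necessarily the exact definition in this series), so platforms
can still override it from their cpu-feature-overrides.h:

#ifndef cpu_has_tlbinv
#define cpu_has_tlbinv	(cpu_data[0].options & MIPS_CPU_TLBINV)
#endif

The additional tlbsize check presumably keeps the tlbinvf path from being
taken on a CPU that advertises the feature but reports zero TLB entries.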