On 24/06/14 18:31, Deng-Cheng Zhu wrote:
> From: Deng-Cheng Zhu <dengcheng.zhu@xxxxxxxxxx>
>
> At TLB initialization, the commpage TLB entry is reserved on top of the
> existing WIRED entries (the number not necessarily be 0).
>
> Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@xxxxxxxxxx>
> ---
>  arch/mips/kvm/mips.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 27250ee..3d53d34 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -170,7 +170,7 @@ void kvm_arch_sync_events(struct kvm *kvm)
>  static void kvm_mips_uninit_tlbs(void *arg)
>  {
>  	/* Restore wired count */
> -	write_c0_wired(0);
> +	write_c0_wired(read_c0_wired() - 1);
>  	mtc0_tlbw_hazard();
>  	/* Clear out all the TLBs */
>  	kvm_local_flush_tlb_all();

kvm_local_flush_tlb_all blasts all the entries away regardless of wired,
so I don't think this is an improvement.

I suspect that to really be safe/correct in the presence of other dynamic
users of wired, it would have to either manage allocation/deallocation of
per-cpu wired TLB entries correctly from a single place, or abandon the
use of wired altogether.

Cheers
James
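[Editorial illustration] Below is a rough, hypothetical sketch of the "single
place" approach described above: a per-CPU allocator that hands out wired TLB
indices and derives the CP0 Wired register from one piece of bookkeeping
instead of each user decrementing it blindly. The helper names
(wired_alloc_entry/wired_free_entry), the bitmap bookkeeping, and the map size
are assumptions for the sketch, not existing MIPS kernel API.

/*
 * Hypothetical sketch only: centralised bookkeeping for wired TLB
 * entries, so that a user such as the KVM commpage mapping does not
 * have to guess what other code did to the Wired register.  These
 * helpers are illustrative and not part of the existing kernel API.
 */
#include <linux/bitops.h>
#include <linux/percpu.h>
#include <linux/irqflags.h>
#include <asm/mipsregs.h>
#include <asm/hazards.h>

/* Arbitrary upper bound for the sketch; a real TLB has fewer entries. */
#define MAX_WIRED	BITS_PER_LONG

/* One bit per wired TLB index in use on this CPU. */
static DEFINE_PER_CPU(unsigned long, wired_map);

/* Reserve a wired TLB index on the local CPU; returns -1 if none free. */
static int wired_alloc_entry(void)
{
	unsigned long flags;
	unsigned long *map;
	int idx;

	local_irq_save(flags);
	map = this_cpu_ptr(&wired_map);
	idx = find_first_zero_bit(map, MAX_WIRED);
	if (idx < MAX_WIRED) {
		__set_bit(idx, map);
		/* Wired covers indices 0..N-1, so track the highest in use. */
		write_c0_wired(find_last_bit(map, MAX_WIRED) + 1);
		mtc0_tlbw_hazard();
	} else {
		idx = -1;
	}
	local_irq_restore(flags);
	return idx;
}

/* Release a wired TLB index and shrink the wired region if possible. */
static void wired_free_entry(int idx)
{
	unsigned long flags;
	unsigned long *map;

	local_irq_save(flags);
	map = this_cpu_ptr(&wired_map);
	__clear_bit(idx, map);
	write_c0_wired(*map ? find_last_bit(map, MAX_WIRED) + 1 : 0);
	mtc0_tlbw_hazard();
	local_irq_restore(flags);
}

A real implementation would also have to write or invalidate the TLB entry at
the allocated index and size the map to the actual TLB, but the point is only
that the Wired count is recomputed from shared state rather than adjusted
independently by each user.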