Would someone like to take a look at this one?

On Sun, Sep 16, 2018 at 04:07:16PM +0800, Wei Yang wrote:
>Here is the code flow related to mmu_pages_first():
>
>    mmu_unsync_walk()
>        mmu_pages_add()
>        __mmu_unsync_walk()
>    for_each_sp()
>        mmu_pages_first()
>
>Every time mmu_pages_first() is invoked, pvec has already been prepared
>by mmu_unsync_walk(), which inserts at least one sp into pvec.
>
>This patch removes the check on pvec->nr since this case doesn't happen.
>
>Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
>---
> arch/x86/kvm/mmu.c | 3 ---
> 1 file changed, 3 deletions(-)
>
>diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>index 899c029cff0d..0caaaa25e88b 100644
>--- a/arch/x86/kvm/mmu.c
>+++ b/arch/x86/kvm/mmu.c
>@@ -2267,9 +2267,6 @@ static int mmu_pages_first(struct kvm_mmu_pages *pvec,
> 	struct kvm_mmu_page *sp;
> 	int level;
> 
>-	if (pvec->nr == 0)
>-		return 0;
>-
> 	WARN_ON(pvec->page[0].idx != INVALID_INDEX);
> 
> 	sp = pvec->page[0].sp;
>-- 
>2.15.1

-- 
Wei Yang
Help you, Help me
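
[Editor's note] For readers outside the thread, the invariant the patch relies on can be sketched in standalone C. This is not the kernel code: the struct and function bodies below are simplified stand-ins (an `int` replaces the `struct kvm_mmu_page *` pointer, and `mmu_pages_add()` only appends), kept just to show why, once every caller reaches `mmu_pages_first()` via `mmu_unsync_walk()`, the vector is guaranteed non-empty and the `pvec->nr == 0` early return is dead code.

```c
#include <assert.h>

#define KVM_PAGE_ARRAY_NR 16

/* simplified stand-in for the kernel's kvm_mmu_pages */
struct kvm_mmu_pages {
	struct {
		int sp;           /* stand-in for struct kvm_mmu_page * */
		unsigned int idx;
	} page[KVM_PAGE_ARRAY_NR];
	unsigned int nr;
};

/* stand-in for mmu_pages_add(): records one shadow page in the vector.
 * mmu_unsync_walk() calls this at least once before returning a
 * non-empty result, which is what establishes the invariant. */
static void mmu_pages_add(struct kvm_mmu_pages *pvec, int sp)
{
	pvec->page[pvec->nr].sp = sp;
	pvec->nr++;
}

/* stand-in for mmu_pages_first() with the patch applied: no
 * pvec->nr == 0 early return; callers guarantee nr >= 1 */
static int mmu_pages_first(struct kvm_mmu_pages *pvec)
{
	assert(pvec->nr > 0);   /* invariant from mmu_unsync_walk() */
	return pvec->page[0].sp;
}
```

Under this reading, the removed `if (pvec->nr == 0) return 0;` could only fire if `for_each_sp()` were entered with an empty vector, which the call flow quoted above rules out.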