On 04/12/2010 11:53 AM, Xiao Guangrong wrote:
kvm->arch.n_free_mmu_pages = 0;
@@ -1589,7 +1589,8 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
 		    && !sp->role.invalid) {
 			pgprintk("%s: zap %lx %x\n",
 				 __func__, gfn, sp->role.word);
-			kvm_mmu_zap_page(kvm, sp);
+			if (kvm_mmu_zap_page(kvm, sp))
+				nn = bucket->first;
 		}
 	}
I don't understand why this is needed.
Here is the code segment in mmu_unshadow():
|	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
|		if (sp->gfn == gfn && !sp->role.direct
|		    && !sp->role.invalid) {
|			pgprintk("%s: zap %lx %x\n",
|				 __func__, gfn, sp->role.word);
|			kvm_mmu_zap_page(kvm, sp);
|		}
|	}
In the loop, if nn is zapped, hlist_for_each_entry_safe() will dereference
the freed nn on its next step and crash. kvm_mmu_zap_page() can zap pages
other than the sp it is handed (unsync children, for instance), and its
nonzero return value reports exactly that, which is why the patch resets
nn. The same case is already handled in other functions that walk these
hash chains, such as kvm_mmu_zap_all() and kvm_mmu_unprotect_page().
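For reference, the three-cursor hlist_for_each_entry_safe() of that era
was defined roughly like this (paraphrased from include/linux/list.h;
a sketch, not a verbatim quote):

	#define hlist_for_each_entry_safe(tpos, pos, n, head, member)	 \
		for (pos = (head)->first;				 \
		     pos && ({ n = pos->next; 1; }) &&			 \
			({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; }); \
		     pos = n)

n is loaded before the loop body runs, so the body may safely free the
current entry; but if it also frees the node n points at, the next step
(pos = n) walks into freed memory. Resetting nn to bucket->first makes
that next step restart from the head of the chain instead.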
hlist_for_each_entry_safe() is supposed to be safe against removal of
the element that the iteration cursor points to.
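Both statements are consistent; the _safe variant only protects the
entry under the cursor, not arbitrary other entries the loop body may
free. A minimal sketch of the distinction (should_zap() is a
hypothetical predicate, not KVM code):

	/* Fine: only the current entry is unlinked; nn was cached
	 * before hlist_del() ran, so the next step is still valid. */
	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link)
		if (should_zap(sp))
			hlist_del(&sp->hash_link);

	/* Broken: kvm_mmu_zap_page() may free pages other than sp,
	 * possibly including the one nn already points to. */
	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link)
		if (should_zap(sp))
			kvm_mmu_zap_page(kvm, sp);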
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.