Re: [PATCH 2/6] KVM MMU: fix kvm_mmu_zap_page() and its calling path

Avi Kivity wrote:

> 
>>           kvm->arch.n_free_mmu_pages = 0;
>> @@ -1589,7 +1589,8 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
>>           &&  !sp->role.invalid) {
>>               pgprintk("%s: zap %lx %x\n",
>>                    __func__, gfn, sp->role.word);
>> -            kvm_mmu_zap_page(kvm, sp);
>> +            if (kvm_mmu_zap_page(kvm, sp))
>> +                nn = bucket->first;
>>           }
>>       }
>>    
> 
> I don't understand why this is needed.

Here is the code segment in mmu_unshadow():

|	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
|		if (sp->gfn == gfn && !sp->role.direct
|		    && !sp->role.invalid) {
|			pgprintk("%s: zap %lx %x\n",
|				 __func__, gfn, sp->role.word);
|			kvm_mmu_zap_page(kvm, sp);
|		}
|	}

Inside the loop, kvm_mmu_zap_page() may also zap the page that nn points to; hlist_for_each_entry_safe() would then follow a freed pointer and crash. The same case is already handled in other functions, such as kvm_mmu_zap_all() and kvm_mmu_unprotect_page().
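
With the patch applied, the loop becomes roughly the following (a sketch of the result of combining the quoted code with the hunk above; the comment reflects my reading of kvm_mmu_zap_page()'s return value):

|	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
|		if (sp->gfn == gfn && !sp->role.direct
|		    && !sp->role.invalid) {
|			pgprintk("%s: zap %lx %x\n",
|				 __func__, gfn, sp->role.word);
|			/*
|			 * A nonzero return means other pages were zapped
|			 * along with sp, so nn may already be freed;
|			 * restart the walk from the head of the bucket.
|			 */
|			if (kvm_mmu_zap_page(kvm, sp))
|				nn = bucket->first;
|		}
|	}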

Thanks,
Xiao

