On 10/27/2010 07:41 PM, Gleb Natapov wrote:
> On Wed, Oct 27, 2010 at 05:09:41PM +0800, Xiao Guangrong wrote:
>> The number of async_pfs is very small, since only a pending interrupt
>> can make the vcpu re-enter guest mode.
>>
>> During my test (Host: 4 CPUs + 4G, Guest: 4 VCPUs + 6G), there were no
>> more than 10 requests in the system.
>>
>> So we can just increase the completion counter in the work queue
>> context and walk the vcpu->async_pf.queue list to get all completed
>> async_pfs.
>>
> That depends on the load. I used memory cgroups to create very big
> memory pressure and I saw hundreds of apfs per second. We shouldn't
> optimize for very low numbers. With vcpu->async_pf.queue having more
> than one element I am not sure your patch is beneficial.
>

Maybe we need a new lock-free way to record the completed apfs; I'll
reproduce your test environment and improve it.

>> +
>> +		list_del(&work->queue);
>> +		vcpu->async_pf.queued--;
>> +		kmem_cache_free(async_pf_cache, work);
>> +		if (atomic_dec_and_test(&vcpu->async_pf.done))
>> +			break;
> You should do atomic_dec() and always break. We cannot inject two apfs
> during one vcpu entry.
>

Sorry, I'm a little confused. Why should
'atomic_dec_and_test(&vcpu->async_pf.done)' always break?
async_pf.done is used to record the completed apfs, and many apfs may
have completed by the time the vcpu enters guest mode (i.e.
vcpu->async_pf.done can be > 1).

Look at the current code:

void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
{
	......
	spin_lock(&vcpu->async_pf.lock);
	work = list_first_entry(&vcpu->async_pf.done, typeof(*work),
				link);
	list_del(&work->link);
	spin_unlock(&vcpu->async_pf.lock);
	......
}

It only handles one completed apf, so why would we inject them all at
once? Am I missing something? :-(
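
To make the point concrete, here is a small standalone user-space sketch
of the pattern being discussed. It is not the real KVM code: apf_item,
apf_complete() and apf_check_completion() are made-up stand-ins for the
async_pf structures. It models a completion counter bumped by the
work-queue side and a consumer that, following Gleb's suggestion,
decrements the counter and breaks after handling a single completed apf
per vcpu entry, so a backlog of completions is drained one entry at a time.

/*
 * Standalone user-space model of the pattern under discussion -- NOT the
 * actual KVM code.  All names here are illustrative stand-ins.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct apf_item {
	int gfn;                 /* stand-in for the faulting frame */
	bool completed;          /* set by the "work queue" side */
	struct apf_item *next;
};

static struct apf_item *queue;     /* models vcpu->async_pf.queue */
static atomic_int done_count = 0;  /* models vcpu->async_pf.done  */

/* Completion side: mark an item done and bump the counter. */
static void apf_complete(struct apf_item *item)
{
	item->completed = true;
	atomic_fetch_add(&done_count, 1);
}

/*
 * Consumer side: walk the queue, but "inject" only the first completed
 * item found and then stop -- at most one injection per vcpu entry.
 */
static void apf_check_completion(void)
{
	struct apf_item **pp = &queue;

	while (*pp) {
		struct apf_item *item = *pp;

		if (!item->completed) {
			pp = &item->next;
			continue;
		}

		*pp = item->next;           /* unlink the completed item */
		printf("injecting apf for gfn %d\n", item->gfn);
		free(item);
		atomic_fetch_sub(&done_count, 1);
		break;                      /* one injection per entry */
	}
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct apf_item *item = calloc(1, sizeof(*item));
		item->gfn = i;
		item->next = queue;
		queue = item;
	}

	/* Pretend the work queue finished all three before the next entry. */
	for (struct apf_item *it = queue; it; it = it->next)
		apf_complete(it);

	/* It now takes three "entries" to drain the completions. */
	while (atomic_load(&done_count) > 0)
		apf_check_completion();

	return 0;
}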