On 05/29/2013 09:32 PM, Marcelo Tosatti wrote:
> On Wed, May 29, 2013 at 09:09:09PM +0800, Xiao Guangrong wrote:
>> This is the information from my reply to Gleb, in the mail where he
>> raised the question of why the "collapse tlb flush" is needed:
>>
>> ======
>> It seems no.
>> Since we have reloaded the mmu before zapping the obsolete pages, the
>> mmu-lock is easily contended. I did this simple tracking:
>>
>> +	int num = 0;
>>  restart:
>>  	list_for_each_entry_safe_reverse(sp, node,
>>  	      &kvm->arch.active_mmu_pages, link) {
>> @@ -4265,6 +4265,7 @@ restart:
>>  		if (batch >= BATCH_ZAP_PAGES &&
>>  		      cond_resched_lock(&kvm->mmu_lock)) {
>>  			batch = 0;
>> +			num++;
>>  			goto restart;
>>  		}
>>
>> @@ -4277,6 +4278,7 @@ restart:
>>  	 * may use the pages.
>>  	 */
>>  	kvm_mmu_commit_zap_page(kvm, &invalid_list);
>> +	printk("lock-break: %d.\n", num);
>>  }
>>
>> I read the PCI ROM while doing a kernel build in the guest, which has
>> 1G memory and 4 vcpus with EPT enabled; this is a normal workload and
>> a normal configuration.
>>
>> # dmesg
>> [ 2338.759099] lock-break: 8.
>> [ 2339.732442] lock-break: 5.
>> [ 2340.904446] lock-break: 3.
>> [ 2342.513514] lock-break: 3.
>> [ 2343.452229] lock-break: 3.
>> [ 2344.981599] lock-break: 4.
>>
>> Basically, we need to break many times.
>
> Should measure kvm_mmu_zap_all latency.
>
>> ======
>>
>> You can see we have to break at least 3 times to zap all pages, even
>> though we zap 10 pages per batch. Obviously it would need to break
>> more times without batch-zapping.
>
> Again, breaking should be no problem; what matters is latency. Please
> measure kvm_mmu_zap_all latency after all optimizations to justify
> this minimum batching.

Okay, okay. I will benchmark the latency.
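
Something like the sketch below is what I have in mind for the
measurement. This is only an illustration, not a patch I am posting:
the timed wrapper is hypothetical, and it just assumes the existing
kvm_mmu_zap_all() entry point together with ktime_get()/trace_printk():

	#include <linux/ktime.h>
	#include <linux/kvm_host.h>

	/*
	 * Illustrative sketch only: report the wall-clock latency of a
	 * full kvm_mmu_zap_all() call in the ftrace buffer.
	 */
	static void kvm_mmu_zap_all_timed(struct kvm *kvm)
	{
		ktime_t start = ktime_get();

		kvm_mmu_zap_all(kvm);

		trace_printk("kvm_mmu_zap_all latency: %lld us\n",
			     ktime_to_us(ktime_sub(ktime_get(), start)));
	}

trace_printk() rather than printk() here, so that the reporting itself
does not add console latency to the number being measured.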