On 04/29/2012 04:50 PM, Takuya Yoshikawa wrote:
> On Fri, 27 Apr 2012 11:52:13 -0300
> Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
>
>> Yes but the objective you are aiming for is to read and write sptes
>> without mmu_lock. That is, i am not talking about this patch.
>> Please read carefully the two examples i gave (separated by "example)").
>
> The real objective is not still clear.
>
> The ~10% improvement reported before was on macro benchmarks during live
> migration. At least, that optimization was the initial objective.
>
> But at some point, the objective suddenly changed to "lock-less" without
> understanding what introduced the original improvement.
>
> Was the problem really mmu_lock contention?

Takuya, I am so tired of arguing the advantages of lockless write-protect
and lockless O(1) dirty-log again and again.

> If the path being introduced by this patch is really fast, isn't it
> possible to achieve the same improvement still using mmu_lock?
>
>
> Note: During live migration, the fact that the guest gets faulted is
> itself a limitation. We could easily see noticeable slowdown of a
> program even if it runs only between two GET_DIRTY_LOGs.

Obviously no. It depends on what the guest is doing; from my autotest
runs it is very easy to see that the huge improvement is on
bench-migration, not pure-migration.

>> The rules for code under mmu_lock should be:
>>
>> 1) Spte updates under mmu lock must always be atomic and
>> with locked instructions.
>> 2) Spte values must be read once, and appropriate action
>> must be taken when writing them back in case their value
>> has changed (remote TLB flush might be required).
>
> Although I am not certain about what will be really needed in the
> final form, if this kind of maybe-needed-overhead is going to be
> added little by little, I worry about possible regression.

Well, would you suggest that Linus reject all patches and stop all
discussion for the "possible regression" reason?
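
P.S. Just to make the discussion concrete, Marcelo's rules 1) and 2) above
amount to something like the sketch below. This is not the actual KVM code:
it is a userspace-style illustration using GCC atomic builtins instead of
the kernel's cmpxchg/ACCESS_ONCE helpers, and read_spte()/try_write_spte()
plus the bit layout are made-up names for illustration only.

	#include <stdint.h>
	#include <stdbool.h>
	#include <stdio.h>

	typedef uint64_t spte_t;

	/* Rule 2: read the spte exactly once. */
	static spte_t read_spte(spte_t *sptep)
	{
		return __atomic_load_n(sptep, __ATOMIC_ACQUIRE);
	}

	/*
	 * Rule 1: write the spte back with a locked instruction (cmpxchg).
	 * If the value changed under us (e.g. hardware set the dirty bit
	 * or another writer updated the entry), the write fails and the
	 * caller must re-read, retry, and decide whether a remote TLB
	 * flush is needed.
	 */
	static bool try_write_spte(spte_t *sptep, spte_t old_spte,
				   spte_t new_spte)
	{
		return __atomic_compare_exchange_n(sptep, &old_spte, new_spte,
						   false, __ATOMIC_RELEASE,
						   __ATOMIC_RELAXED);
	}

	int main(void)
	{
		spte_t spte = 0x8000000000000007ull;	/* made-up spte value */
		spte_t old = read_spte(&spte);

		/* Clear the writable bit (bit 1 here, purely illustrative). */
		if (!try_write_spte(&spte, old, old & ~2ull))
			printf("spte changed under us: retry, maybe flush TLBs\n");
		else
			printf("spte write-protected: %#llx\n",
			       (unsigned long long)spte);
		return 0;
	}

The point of the cmpxchg-style write-back is that the caller always learns
whether the entry changed between the read and the write, which is exactly
what lets it decide about the remote TLB flush.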