On 19/06/2013 11:09, Xiao Guangrong wrote:
> Document it to Documentation/virtual/kvm/mmu.txt

While reviewing the docs, I looked at the code.  Why can't this happen?

    CPU 1: __get_spte_lockless        CPU 2: __update_clear_spte_slow
    ------------------------------------------------------------------
                                      write low
    read count
    read low
    read high
                                      write high
    check low and count
                                      update count

The check passes, but CPU 1 has read a "torn" SPTE.

It seems like this is the same reason why seqlocks do two version updates,
one before and one after, and make the reader check "version & ~1".  But
maybe I'm wrong.

Paolo

> Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx>
> ---
>  Documentation/virtual/kvm/mmu.txt | 4 ++++
>  arch/x86/include/asm/kvm_host.h   | 5 +++++
>  arch/x86/kvm/mmu.c                | 7 ++++---
>  3 files changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/virtual/kvm/mmu.txt b/Documentation/virtual/kvm/mmu.txt
> index 869abcc..ce6df51 100644
> --- a/Documentation/virtual/kvm/mmu.txt
> +++ b/Documentation/virtual/kvm/mmu.txt
> @@ -210,6 +210,10 @@ Shadow pages contain the following information:
>     A bitmap indicating which sptes in spt point (directly or indirectly) at
>     pages that may be unsynchronized.  Used to quickly locate all unsychronized
>     pages reachable from a given page.
> +  clear_spte_count:
> +    Only used on 32-bit hosts; it helps detect whether an update to the
> +    64-bit spte is complete, so that a truncated value is not read out of
> +    mmu-lock.
> 
>  Reverse map
>  ===========
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 966f265..1dac2c1 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -226,6 +226,11 @@ struct kvm_mmu_page {
>  	DECLARE_BITMAP(unsync_child_bitmap, 512);
> 
>  #ifdef CONFIG_X86_32
> +	/*
> +	 * Counter incremented after the page's spte has been cleared,
> +	 * to avoid reading a truncated value out of mmu-lock.
> +	 * Please see the comments in __get_spte_lockless().
> +	 */
>  	int clear_spte_count;
>  #endif
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index c87b19d..77d516c 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -464,9 +464,10 @@ static u64 __update_clear_spte_slow(u64 *sptep, u64 spte)
>  /*
>   * The idea using the light way get the spte on x86_32 guest is from
>   * gup_get_pte(arch/x86/mm/gup.c).
> - * The difference is we can not catch the spte tlb flush if we leave
> - * guest mode, so we emulate it by increase clear_spte_count when spte
> - * is cleared.
> + * The difference is that we can not immediately catch the spte tlb flush,
> + * since kvm may sometimes collapse tlb flushes; see kvm_set_pte_rmapp.
> + *
> + * We emulate it by increasing clear_spte_count when the spte is cleared.
>   */
>  static u64 __get_spte_lockless(u64 *sptep)
>  {