On Thu, 6 Feb 2020 15:50:18 +0100 Mauro Carvalho Chehab <mchehab+huawei@xxxxxxxxxx> wrote:

> - Use document title and chapter markups;
> - Add markups for literal blocks;
> - use :field: for field descriptions;
> - Add blank lines and adjust indentation.
> 
> Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@xxxxxxxxxx>
> ---
>  Documentation/virt/kvm/index.rst              |   1 +
>  .../virt/kvm/{locking.txt => locking.rst}     | 111 ++++++++++--------
>  2 files changed, 63 insertions(+), 49 deletions(-)
>  rename Documentation/virt/kvm/{locking.txt => locking.rst} (78%)

(...)

> @@ -48,19 +52,23 @@ restore the saved R/X bits if VMX_EPT_TRACK_ACCESS mask is set, or both. This
>  is safe because whenever changing these bits can be detected by cmpxchg.
>  
>  But we need carefully check these cases:
> -1): The mapping from gfn to pfn
> +
> +1) The mapping from gfn to pfn
> +
>  The mapping from gfn to pfn may be changed since we can only ensure the pfn
>  is not changed during cmpxchg. This is a ABA problem, for example, below case
>  will happen:
>  
> -At the beginning:
> -gpte = gfn1
> -gfn1 is mapped to pfn1 on host
> -spte is the shadow page table entry corresponding with gpte and
> -spte = pfn1
> +At the beginning::
>  
> -     VCPU 0                           VCPU0
> -on fast page fault path:
> +  gpte = gfn1
> +  gfn1 is mapped to pfn1 on host
> +  spte is the shadow page table entry corresponding with gpte and
> +  spte = pfn1
> +
> +     VCPU 0                           VCPU0
> +
> +on fast page fault path::
>  
>     old_spte = *spte;
>     pfn1 is swapped out:

I'm wondering if that should rather be converted to a proper table.

(...)

> @@ -99,12 +109,14 @@ Accessed bit and Dirty bit can not be lost.
>  
>  But it is not true after fast page fault since the spte can be marked
>  writable between reading spte and updating spte. Like below case:
>  
> -At the beginning:
> -spte.W = 0
> -spte.Accessed = 1
> +At the beginning::
>  
> -     VCPU 0                           VCPU0
> -In mmu_spte_clear_track_bits():
> +  spte.W = 0
> +  spte.Accessed = 1
> +
> +     VCPU 0                           VCPU0
> +
> +In mmu_spte_clear_track_bits()::
>  
>     old_spte = *spte;

This one as well.