On Thu, 6 Feb 2020 22:57:39 +0100,
Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:

> On 06/02/20 17:11, Cornelia Huck wrote:
> > On Thu, 6 Feb 2020 15:50:18 +0100
> > Mauro Carvalho Chehab <mchehab+huawei@xxxxxxxxxx> wrote:
> > 
> >> - Use document title and chapter markups;
> >> - Add markups for literal blocks;
> >> - use :field: for field descriptions;
> >> - Add blank lines and adjust indentation.
> >> 
> >> Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@xxxxxxxxxx>
> >> ---
> >>  Documentation/virt/kvm/index.rst              |   1 +
> >>  .../virt/kvm/{locking.txt => locking.rst}     | 111 ++++++++++--------
> >>  2 files changed, 63 insertions(+), 49 deletions(-)
> >>  rename Documentation/virt/kvm/{locking.txt => locking.rst} (78%)
> > 
> > (...)
> > 
> >> @@ -48,19 +52,23 @@ restore the saved R/X bits if VMX_EPT_TRACK_ACCESS mask is set, or both. This
> >>  is safe because whenever changing these bits can be detected by cmpxchg.
> >>  
> >>  But we need carefully check these cases:
> >> -1): The mapping from gfn to pfn
> >> +
> >> +1) The mapping from gfn to pfn
> >> +
> >>  The mapping from gfn to pfn may be changed since we can only ensure the pfn
> >>  is not changed during cmpxchg. This is a ABA problem, for example, below case
> >>  will happen:
> >>  
> >> -At the beginning:
> >> -gpte = gfn1
> >> -gfn1 is mapped to pfn1 on host
> >> -spte is the shadow page table entry corresponding with gpte and
> >> -spte = pfn1
> >> +At the beginning::
> >>  
> >> -   VCPU 0                           VCPU0
> >> -on fast page fault path:
> >> +        gpte = gfn1
> >> +        gfn1 is mapped to pfn1 on host
> >> +        spte is the shadow page table entry corresponding with gpte and
> >> +        spte = pfn1
> >> +
> >> +   VCPU 0                           VCPU0
> >> +
> >> +on fast page fault path::
> >>  
> >>     old_spte = *spte;
> >>     pfn1 is swapped out:
> > 
> > I'm wondering if that should rather be converted to a proper table.

Converting it into a table would require escaping the asterisk, e.g.:

	old_spte = \*spte;

or:

	``old_spte = *spte;``

(and, for consistency, the same for everything else there)

So, yeah, the HTML/PDF output could be nicer, but it would be worse for
those who read the .rst file directly.

> 
> Would be nicer but this is acceptable too I think. Especially, the
> monospaced font allows breaking the table and keeping the parts aligned.

FYI, there's a way to set up a monospaced font for a table by using a
CSS class. As I pointed out a while ago, it would require adding
something to the Sphinx theme:

diff --git a/Documentation/sphinx-static/theme_overrides.css b/Documentation/sphinx-static/theme_overrides.css
index e21e36cd6761..0948de6651f8 100644
--- a/Documentation/sphinx-static/theme_overrides.css
+++ b/Documentation/sphinx-static/theme_overrides.css
@@ -125,3 +125,7 @@ div[class^="highlight"] pre {
 		color: inherit;
 	}
 }
+
+table.monospaced {
+	font-family: monospace, monospace;
+}

Then use it at locking.rst (or anywhere a monospaced font is desired):

+.. cssclass:: monospaced
+
+ ==== ====== =====
+ Some Random Table
+ ==== ====== =====

For completeness, a similar class would also need to be added to the
latex_prefix setting in Documentation/conf.py, implementing the same
feature for the LaTeX and PDF output.

IMHO, this would be too complex for too little gain.

> 
> But the two headers should be "CPU 0" and "CPU 1", and it's worth
> changing that while we're at it. Same for the table below.

OK. Maybe we could append the enclosed patch to this one; I suspect
the result would look better visually.
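Just for the record, in case we ever decide to go the table route, here is
a rough sketch of how a couple of rows could look with the monospaced class
above and the asterisk escaped (illustration only, with the cell contents
abridged from the race above; this is not part of the enclosed patch):

.. illustration only; cells abridged, not part of the enclosed patch

.. cssclass:: monospaced

   ============================  ======================
   CPU 0                         CPU 1
   ============================  ======================
   old_spte = \*spte;            pfn1 is swapped out
   cmpxchg(spte, old_spte, ...)  gfn2 is mapped to pfn1
   ============================  ======================

That said, I still think the literal block is easier to read in the .rst
source.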
Cheers,
Mauro

---

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 428cb3412ecc..82322a4c9fde 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -57,18 +57,18 @@ But we need carefully check these cases:
 
 The mapping from gfn to pfn may be changed since we can only ensure the pfn
 is not changed during cmpxchg. This is a ABA problem, for example, below case
-will happen:
+will happen::
 
-At the beginning::
+  At the beginning:
 
         gpte = gfn1
         gfn1 is mapped to pfn1 on host
         spte is the shadow page table entry corresponding with gpte and
         spte = pfn1
 
-   VCPU 0                           VCPU0
+  On fast page fault path:
 
-on fast page fault path::
+        CPU 0                           CPU 1
 
         old_spte = *spte;
                         pfn1 is swapped out:
@@ -80,6 +80,7 @@ on fast page fault path::
                         gfn2 by the guest:
                            spte = pfn1;
+
    if (cmpxchg(spte, old_spte, old_spte+W)
         mark_page_dirty(vcpu->kvm, gfn1)
              OOPS!!!
@@ -107,16 +108,16 @@ spte is read-only and the Accessed bit has already been set since the Accessed
 bit and Dirty bit can not be lost.
 
 But it is not true after fast page fault since the spte can be marked
-writable between reading spte and updating spte. Like below case:
+writable between reading spte and updating spte. Like below case::
 
-At the beginning::
+  At the beginning:
 
         spte.W = 0
         spte.Accessed = 1
 
-   VCPU 0                           VCPU0
+  CPU 0                             CPU 1
 
-In mmu_spte_clear_track_bits()::
+  In mmu_spte_clear_track_bits():
 
         old_spte = *spte;