On Wed, Oct 02, 2019 at 06:18:06PM +0200, Paolo Bonzini wrote:
> On 02/10/19 16:15, Jerome Glisse wrote:
> >>> Why would you need to target the mmu notifier on the target vma?
> >>
> >> If the mapping of the source VMA changes, mirroring can update the
> >> target VMA via insert_pfn. But what ensures that KVM's MMU notifier
> >> dismantles its own existing page tables (so that they can be recreated
> >> with the new mapping from the source VMA)?
> >>
> > So just to make sure I follow, we have:
> >  - qemu process on host with anonymous vma
> >    -> host cpu page table
> >  - kvm which maps the host anonymous vma to the guest
> >    -> kvm guest page table
> >  - kvm inspector process which mirrors the vma from the qemu process
> >    -> inspector process page table
> >
> > AFAIK the KVM notifiers will clear the kvm guest page table whenever
> > necessary (through kvm_mmu_notifier_invalidate_range_start). This is
> > what ensures that KVM dismantles its own mapping; it abides by the
> > mmu-notifier callbacks. If it did not, you would have bugs (at least I
> > expect so). Am I wrong here?
>
> The KVM inspector process is also (or can be) a QEMU that will have to
> create its own KVM guest page table.

Ok, I missed that part, thank you for explaining.

> So if a page in the source VMA is unmapped we want:
>
> - the source KVM to invalidate its guest page table (done by the KVM MMU
>   notifier)
>
> - the target VMA to be invalidated (easy using mirroring)
>
> - the target KVM to invalidate its guest page table, as a result of
>   invalidation of the target VMA

You can do the target KVM invalidation inside the mirroring invalidation
code.

Cheers,
Jérôme
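
For concreteness, here is a minimal sketch of what "do the target KVM
invalidation inside the mirroring invalidation code" could look like,
assuming the mirroring driver registers its own mmu_notifier on the
source (inspected QEMU) mm and that the target VMA is a VM_PFNMAP
mapping populated via vmf_insert_pfn(). The struct mirror type and the
mirror_* names are made up for illustration, and locking and error
handling are omitted; this is not the actual HMM or KVM code:

    #include <linux/mmu_notifier.h>
    #include <linux/mm.h>

    struct mirror {
            struct mmu_notifier     mn;          /* registered on source mm */
            struct vm_area_struct   *target_vma; /* inspector's mirror mapping */
            unsigned long           src_start;   /* start of mirrored source range */
    };

    static int mirror_invalidate_range_start(struct mmu_notifier *mn,
                                    const struct mmu_notifier_range *range)
    {
            struct mirror *m = container_of(mn, struct mirror, mn);
            unsigned long off  = range->start - m->src_start;
            unsigned long addr = m->target_vma->vm_start + off;
            unsigned long size = range->end - range->start;

            /*
             * Zap the PFN mappings previously inserted into the target
             * VMA.  The zap itself goes through the mmu_notifier
             * machinery of the inspector's mm, which is where the
             * target KVM has registered its notifier.
             */
            zap_vma_ptes(m->target_vma, addr, size);
            return 0;
    }

    static const struct mmu_notifier_ops mirror_mn_ops = {
            .invalidate_range_start = mirror_invalidate_range_start,
    };

    /* Called once at mirror setup; src_mm is the inspected QEMU's mm. */
    static int mirror_register(struct mirror *m, struct mm_struct *src_mm)
    {
            m->mn.ops = &mirror_mn_ops;
            return mmu_notifier_register(&m->mn, src_mm);
    }

The point of the zap_vma_ptes() call is that tearing down the target
VMA's PTEs fires kvm_mmu_notifier_invalidate_range_start for the
inspector's mm, so the target KVM invalidates its guest page table and
rebuilds it from the new mapping on the next fault, without the
mirroring code having to call into KVM directly.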