On Mon, Jul 18, 2022, Chao Peng wrote:
> On Fri, Jul 15, 2022 at 01:36:15PM +0200, Gupta, Pankaj wrote:
> > > Currently in the mmu_notifier invalidate path, the hva range is
> > > recorded and then checked in mmu_notifier_retry_hva() from the page
> > > fault path. However, for the to-be-introduced private memory, a page
> > > fault may not have a hva
> >
> > As this patch appeared in v7, just wondering: did you see an actual bug
> > because of it? And does not having a corresponding 'hva' occur only with
> > private memory because it's not mapped to host userspace?
>
> The addressed problem is not new in this version; previous versions also
> had code to handle it (just in a different way). The problem is:
> mmu_notifier/memfile_notifier may be in the process of invalidating a
> pfn that was obtained earlier in the page fault handler, and when that
> happens we should retry the fault. In v6 I used the global
> mmu_notifier_retry() for memfile_notifier, but that can block unrelated
> mmu_notifier invalidations which have an hva range specified.
>
> Sean gave a comment at https://lkml.org/lkml/2022/6/17/1001 to separate
> memfile_notifier from mmu_notifier, but during the implementation I
> realized we can actually reuse the same code for shared and private
> memory if both use a gpa range, and that can simplify the handling in
> kvm_zap_gfn_range and some other code (e.g. we don't need two versions
> for memfile_notifier/mmu_notifier).

This should work, though I'm undecided as to whether or not it's a good
idea.  KVM allows aliasing multiple gfns to a single hva, so using the
gfn could result in a much larger range being rejected, given the
simplistic algorithm for handling multiple ranges in
kvm_inc_notifier_count().  But I assume such aliasing is uncommon, so I'm
not sure it's worth optimizing for.

> Adding a gpa range for private memory invalidation also relieves the
> above blocking issue between private memory page faults and the
> mmu_notifier.
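
For reference, here is a minimal sketch of the range tracking being
discussed, paraphrased from the pre-series upstream kvm code (the field
and function names are assumed from that baseline, not copied from this
patch set): concurrent invalidations are collapsed into a single
[start, end) window, so a fault whose address lands anywhere in the union
is retried even if it doesn't overlap the particular range being
invalidated, which is why a gfn-based range for an aliased mapping could
end up rejecting a wider window.

void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
			    unsigned long end)
{
	kvm->mmu_notifier_count++;
	if (likely(kvm->mmu_notifier_count == 1)) {
		kvm->mmu_notifier_range_start = start;
		kvm->mmu_notifier_range_end = end;
	} else {
		/*
		 * Simplistic multi-range handling: just track the minimal
		 * single range that covers all in-progress invalidations.
		 */
		kvm->mmu_notifier_range_start =
			min(kvm->mmu_notifier_range_start, start);
		kvm->mmu_notifier_range_end =
			max(kvm->mmu_notifier_range_end, end);
	}
}

static inline int mmu_notifier_retry_hva(struct kvm *kvm,
					 unsigned long mmu_seq,
					 unsigned long hva)
{
	lockdep_assert_held(&kvm->mmu_lock);
	/* Retry if the faulting address falls inside the tracked window. */
	if (unlikely(kvm->mmu_notifier_count) &&
	    hva >= kvm->mmu_notifier_range_start &&
	    hva < kvm->mmu_notifier_range_end)
		return 1;
	if (kvm->mmu_notifier_seq != mmu_seq)
		return 1;
	return 0;
}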