On Tue, Mar 30, 2021, Ben Gardon wrote:
> On Thu, Mar 25, 2021 at 7:20 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > Patch 10 moves x86's memslot walkers into common KVM.  I chose x86 purely
> > because I could actually test it.  All architectures use nearly identical
> > code, so I don't think it actually matters in the end.
>
> I'm still reviewing 10 and 14-18.  10 is a huge change and the diff is
> pretty hard to parse.

Ya :-/  I don't see an easy way to break it up without creating a massive
diff, e.g. it could be staged in x86 and then moved to common, but I don't
think that would fundamentally change the diff.  Although I admittedly
didn't spend _that_ much time thinking about how to break it up.

> > Patches 11-13 move arm64, MIPS, and PPC to the new APIs.
> >
> > Patch 14 yanks out the old APIs.
> >
> > Patch 15 adds the mmu_lock elision, but only for unpaired notifications.
>
> Reading through all this code and considering the changes I'm preparing
> for the TDP MMU have me wondering if it might help to have a more
> general purpose MMU lock context struct which could be embedded in the
> structs added in this patch.  I'm thinking something like:
>
> enum kvm_mmu_lock_mode {
> 	KVM_MMU_LOCK_NONE,
> 	KVM_MMU_LOCK_READ,
> 	KVM_MMU_LOCK_WRITE,
> };
>
> struct kvm_mmu_lock_context {
> 	enum kvm_mmu_lock_mode lock_mode;
> 	bool can_block;
> 	bool can_yield;

Not that it matters right now, but can_block and can_yield are the same
thing.  I considered s/can_yield/can_block to make it all consistent, but
that felt like unnecessary thrash.

> 	bool flush;

Drat.  This made me realize that the 'struct kvm_gfn_range' passed to arch
code isn't tagged 'const'.  I thought I had done that, but obviously not.

Anyways, what I was going to say before that realization is that the
downside to putting flush into kvm_gfn_range is that it would have to lose
its 'const' qualifier.  That's all a moot point if it's not easily
constified though.

Const aside, my gut reaction is that it will probably be cleaner to keep
the flush stuff in arch code, separate from the kvm_gfn_range passed in by
common KVM (rough sketch at the end of this mail).  Looping 'flush' back
into the helpers is x86 specific at this point, and AFAICT that's not
likely to change any time soon.

For rwlock support, if we get to the point where kvm_age_gfn() and/or
kvm_test_age_gfn() can take mmu_lock for read, then it definitely makes
sense to track locking in kvm_gfn_range, assuming it isn't the sole entity
that prevents constifying kvm_gfn_range.

> };
>
> This could yield some grossly long lines, but it would also have
> potential to unify a bunch of ad-hoc handling.
>
> The above struct could also fit into a single byte, so it'd be pretty
> easy to pass it around.
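
For the record, packed into bitfields it does indeed fit in a byte, e.g.
something like this (purely illustrative, the exact layout doesn't matter,
and can_yield is dropped since it duplicates can_block):

	struct kvm_mmu_lock_context {
		u8 lock_mode	: 2;	/* enum kvm_mmu_lock_mode */
		u8 can_block	: 1;
		u8 flush	: 1;
	};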
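
And to put a rough sketch behind the "keep flush in arch code" idea above,
the x86 hook would be something along these lines (illustrative only, the
helper and callback names are made up):

	bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
	{
		bool flush;

		/*
		 * Walk the rmaps for the range and accumulate whether any
		 * SPTEs were zapped, i.e. whether a TLB flush is needed.
		 */
		flush = handle_gfn_range(kvm, range, unmap_rmap_cb);

		return flush;
	}

Common code would only ever see the returned bool, i.e. kvm_gfn_range
itself could stay 'const' from its perspective.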