On Mon, Aug 12, 2024, Lai Jiangshan wrote:
> On Sat, Aug 10, 2024 at 3:49 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > +
> > +static unsigned long kvm_rmap_lock(struct kvm_rmap_head *rmap_head)
> > +{
> > +	unsigned long old_val, new_val;
> > +
> > +	old_val = READ_ONCE(rmap_head->val);
> > +	if (!old_val)
> > +		return 0;
> > +
> > +	do {
> > +		/*
> > +		 * If the rmap is locked, wait for it to be unlocked before
> > +		 * trying acquire the lock, e.g. to bounce the cache line.
> > +		 */
> > +		while (old_val & KVM_RMAP_LOCKED) {
> > +			old_val = READ_ONCE(rmap_head->val);
> > +			cpu_relax();
> 
> The sequence of these two lines of code can be improved.

Oh yeah, duh, re-read after PAUSE, not before.

Definitely holler if you have any alternative ideas for walking rmaps without
taking mmu_lock, I guarantee you've spent more time than me thinking about the
shadow MMU :-)
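For reference, a minimal userspace sketch of the corrected ordering (PAUSE first,
then re-read), using C11 atomics as stand-ins for READ_ONCE()/try_cmpxchg() and a
stubbed cpu_relax(). The KVM_RMAP_LOCKED value, the cmpxchg-based loop tail, and
the re-check for an emptied rmap are illustrative assumptions, not the actual
patch:

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative value; the real bit assignment is in the KVM patch. */
#define KVM_RMAP_LOCKED	(1UL << 1)

struct kvm_rmap_head {
	_Atomic unsigned long val;
};

/* Stub; the kernel's cpu_relax() emits a PAUSE (or equivalent) hint. */
static inline void cpu_relax(void)
{
}

static unsigned long kvm_rmap_lock(struct kvm_rmap_head *rmap_head)
{
	unsigned long old_val, new_val;

	old_val = atomic_load_explicit(&rmap_head->val, memory_order_relaxed);
	if (!old_val)
		return 0;

	do {
		/*
		 * If the rmap is locked, wait for it to be unlocked before
		 * trying to acquire the lock.  Note the corrected ordering:
		 * PAUSE first, then re-read, so that the value tested by the
		 * loop condition is always fresher than the last PAUSE.
		 */
		while (old_val & KVM_RMAP_LOCKED) {
			cpu_relax();
			old_val = atomic_load_explicit(&rmap_head->val,
						       memory_order_relaxed);
		}

		/* The rmap may have been emptied while we were waiting. */
		if (!old_val)
			return 0;

		new_val = old_val | KVM_RMAP_LOCKED;
	} while (!atomic_compare_exchange_weak_explicit(&rmap_head->val,
							&old_val, new_val,
							memory_order_acquire,
							memory_order_relaxed));
	return old_val;
}
```

On cmpxchg failure, compare_exchange updates old_val with the freshly observed
value, so the outer do/while naturally falls back into the wait loop if another
CPU grabbed the lock in the window between the re-read and the cmpxchg.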