On Wed, Apr 27, 2022, Sean Christopherson wrote:
> Finally, the refresh logic doesn't protect against concurrent refreshes
> with different GPAs (which may or may not be a desired use case, but it's
> allowed in the code), nor does it protect against a false negative on the
> memslot generation. If the first refresh sees a stale memslot generation,
> it will refresh the hva and generation before moving on to the hva=>pfn
> translation. If it then drops gpc->lock, a different user can come along,
> acquire gpc->lock, see that the memslot generation is fresh, and skip
> the hva=>pfn update due to the userspace address also matching (because
> it too was updated). Address this race by adding an "in-progress" flag
> so that the refresh that acquires gpc->lock first runs to completion
> before other users can start their refresh.

...

> @@ -159,10 +249,23 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
>  
>  	write_lock_irq(&gpc->lock);
>  
> +	/*
> +	 * If another task is refreshing the cache, wait for it to complete.
> +	 * There is no guarantee that concurrent refreshes will see the same
> +	 * gpa, memslots generation, etc..., so they must be fully serialized.
> +	 */
> +	while (gpc->refresh_in_progress) {
> +		write_unlock_irq(&gpc->lock);
> +
> +		cond_resched();
> +
> +		write_lock_irq(&gpc->lock);
> +	}
> +	gpc->refresh_in_progress = true;

Adding refresh_in_progress can likely go in a separate patch.  I'll plan
on doing that in a v3 unless it proves to be painful.

> @@ -246,9 +296,26 @@ int kvm_gfn_to_pfn_cache_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
>  	}
>  
>  out:
> +	/*
> +	 * Invalidate the cache and purge the pfn/khva if the refresh failed.
> +	 * Some/all of the uhva, gpa, and memslot generation info may still be
> +	 * valid, leave it as is.
> +	 */
> +	if (ret) {
> +		gpc->valid = false;
> +		gpc->pfn = KVM_PFN_ERR_FAULT;
> +		gpc->khva = NULL;
> +	}
> +
> +	gpc->refresh_in_progress = false;
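
FWIW, the new logic boils down to something like the below (completely
untested sketch; gpc_begin_refresh() and gpc_end_refresh() are made-up
helper names, the patch open-codes both pieces directly in
kvm_gfn_to_pfn_cache_refresh()):

/*
 * Sketch only, not part of the patch.  Caller must hold gpc->lock for
 * write across both calls.
 */
static void gpc_begin_refresh(struct gfn_to_pfn_cache *gpc)
{
	/*
	 * Wait for any in-flight refresh to finish.  Drop the lock while
	 * waiting so the other task can make progress, and reschedule so
	 * a contended cache doesn't monopolize a CPU.
	 */
	while (gpc->refresh_in_progress) {
		write_unlock_irq(&gpc->lock);
		cond_resched();
		write_lock_irq(&gpc->lock);
	}
	gpc->refresh_in_progress = true;
}

static void gpc_end_refresh(struct gfn_to_pfn_cache *gpc, int ret)
{
	/*
	 * On failure, invalidate the cache and purge the pfn/khva so that
	 * readers never consume a stale translation.  The uhva, gpa, and
	 * memslot generation are deliberately left as-is, since some or
	 * all of that info may still be valid.
	 */
	if (ret) {
		gpc->valid = false;
		gpc->pfn = KVM_PFN_ERR_FAULT;
		gpc->khva = NULL;
	}
	gpc->refresh_in_progress = false;
}

Not suggesting the patch actually add those helpers, that's just the
shape of the serialization as I read it.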