On Wed, Dec 14, 2022, Lai Jiangshan wrote:
> On Thu, Oct 13, 2022 at 1:00 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > +/* Flush the given page (huge or not) of guest memory. */
> > > +static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
> > > +{
> > > +	u64 pages = KVM_PAGES_PER_HPAGE(level);
> > > +
> >
> > Rather than require the caller to align gfn, what about doing gfn_round_for_level()
> > in this helper?  It's a little odd that the caller needs to align gfn but doesn't
> > have to compute the size.
> >
> > I'm 99% certain kvm_set_pte_rmap() is the only path that doesn't already align the
> > gfn, but it's nice to not have to worry about getting this right, e.g. alternatively
> > this helper could WARN if the gfn is misaligned, but that's _more_ work.
> >
> > 	kvm_flush_remote_tlbs_with_address(kvm, gfn_round_for_level(gfn, level),
> > 					   KVM_PAGES_PER_HPAGE(level));
> >
> > If no one objects, this can be done when the series is applied, i.e. no need to
> > send v5 just for this.
> >
>
> Hello Paolo, Sean, Hou,
>
> It seems the patchset has not been queued. I believe it does
> fix bugs.

It's on my list of things to get merged for 6.3.  I haven't been more aggressive
in getting it queued because I assume there are very few KVM-on-HyperV users that
are likely to be affected.
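
For reference, the suggested change would reshape the helper roughly as below. This is
only a sketch based on the names in the quoted snippet (gfn_round_for_level(),
kvm_flush_remote_tlbs_with_address(), KVM_PAGES_PER_HPAGE()), not the final applied patch:

	static inline void kvm_flush_remote_tlbs_gfn(struct kvm *kvm, gfn_t gfn, int level)
	{
		/*
		 * Round the gfn down to the base of the (huge) page inside the
		 * helper so that callers don't have to align it themselves.
		 */
		kvm_flush_remote_tlbs_with_address(kvm, gfn_round_for_level(gfn, level),
						   KVM_PAGES_PER_HPAGE(level));
	}

With the rounding done inside the helper, kvm_set_pte_rmap() (per the quoted discussion,
the one path that doesn't already align the gfn) and any future caller can pass an
unaligned gfn without risk of getting the flush range wrong.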