On 3/31/22 13:16, Kai Huang wrote:
>> +	if (range && kvm_available_flush_tlb_with_range()) {
>> +		/* Callback should flush both private GFN and shared GFN. */
>> +		range->start_gfn = kvm_gfn_unalias(kvm, range->start_gfn);
>
> This seems wrong. It seems the intention of this function is to flush
> the TLB for all aliases of a given GFN range. Here it seems you are
> unconditionally changing the range to always exclude the stolen bits.
He passes the "low" range with bits cleared, and expects the callback to
take care of both. That seems okay (apart from the incorrect
fallthrough that you pointed out).
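
To make that concrete, a minimal sketch of such a callback could look
like the code below. It assumes a kvm_gfn_shared_mask() helper that
returns the stolen-bits mask (the counterpart of kvm_gfn_unalias()
above; the name is mine), and a hypothetical flush_gfn_range()
standing in for whatever flush primitive the backend actually uses:

static void vt_flush_tlb_with_range(struct kvm *kvm,
				    struct kvm_tlb_range *range)
{
	gfn_t shared = kvm_gfn_shared_mask(kvm);

	/* start_gfn arrives with the stolen bits already cleared. */
	flush_gfn_range(kvm, range->start_gfn, range->pages);

	/* Flush the shared alias as well, if the VM has one. */
	if (shared)
		flush_gfn_range(kvm, range->start_gfn | shared,
				range->pages);
}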
>> -	gfn = gpte_to_gfn(gpte);
>> +	gfn = gpte_to_gfn(vcpu, gpte);
>> 	pte_access = sp->role.access;
>> 	pte_access &= FNAME(gpte_access)(gpte);
>> 	FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
>
> In the commit message you mentioned "Don't support stolen bits for
> shadow EPT" (you actually mean shadow MMU I suppose), yet there's a
> bunch of code changes to the shadow MMU.
It's a bit ugly, but it's uglier to keep two versions of gpte_to_gfn.
Perhaps the commit message can be rephrased to "Stolen bits are not
supported in the shadow MMU; they will be used only for TDX, which uses
the TDP MMU exclusively as it does not support nested virtualization.
Therefore, the gfn_shared_mask will always be zero in that case".
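
With a single version, the helper could look roughly like the sketch
below (names approximate; kvm_gfn_unalias() is the helper from the
hunk quoted earlier, the rest mirrors the existing paging_tmpl.h
macros). For the shadow MMU the mask is zero, so the extra masking is
a no-op:

static inline gfn_t gpte_to_gfn_lvl(struct kvm_vcpu *vcpu, u64 gpte, int lvl)
{
	gfn_t gfn = (gpte & PT_LVL_ADDR_MASK(lvl)) >> PAGE_SHIFT;

	/* No-op when gfn_shared_mask is zero, i.e. for the shadow MMU. */
	return kvm_gfn_unalias(vcpu->kvm, gfn);
}

#define gpte_to_gfn(vcpu, gpte) gpte_to_gfn_lvl(vcpu, gpte, PG_LEVEL_4K)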
Paolo