On 31/07/21 00:37, David Matlack wrote:
- if (new_spte == iter->old_spte)
+ if (new_spte == iter->old_spte) {
ret = RET_PF_SPURIOUS;
- else if (!tdp_mmu_set_spte_atomic(vcpu->kvm, iter, new_spte))
- return RET_PF_RETRY;
+ } else {
+ if (!tdp_mmu_set_spte_atomic_no_dirty_log(vcpu->kvm, iter, new_spte))
+ return RET_PF_RETRY;
+
+ /*
+ * Mark the gfn dirty here rather than through the vcpu-agnostic
+ * handle_changed_spte_dirty_log to leverage vcpu->lru_slot_index.
+ */
+ if (is_writable_pte(new_spte))
+ kvm_vcpu_mark_page_dirty(vcpu, iter->gfn);
+ }
Looking at the remaining callers of tdp_mmu_set_spte_atomic, we have:
* tdp_mmu_zap_spte_atomic calls it with REMOVED_SPTE as the new_spte,
which is never writable
* kvm_tdp_mmu_map calls it for nonleaf SPTEs, which are always writable
but should not be dirty.
So I think you should:
* change those two to tdp_mmu_set_spte_atomic_no_dirty_log
* add a WARN_ON_ONCE(iter->level > PG_LEVEL_4K) to tdp_mmu_set_spte_atomic
* put the kvm_vcpu_mark_page_dirty code directly in
tdp_mmu_set_spte_atomic, instead of the call to
handle_changed_spte_dirty_log
(I can't exclude that I'm missing something, though.)
Paolo