On Sat, Aug 07, 2021, Paolo Bonzini wrote:
> @@ -377,9 +377,9 @@ TRACE_EVENT(
>  	),
>
>  	TP_fast_assign(
> -		__entry->gfn = addr >> PAGE_SHIFT;
> -		__entry->pfn = pfn | (__entry->gfn & (KVM_PAGES_PER_HPAGE(level) - 1));
> -		__entry->level = level;
> +		__entry->gfn = fault->addr >> PAGE_SHIFT;

Eww.  The existing code also bastardizes addr vs. gpa, but this just looks
even more wrong because we have fault->gfn.  Maybe do this as a prep patch
at the beginning of the series?  And then use fault->gfn directly.

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 7d03e9b7ccfa..b159749300b5 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -725,7 +725,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gpa_t addr,
 	level = kvm_mmu_hugepage_adjust(vcpu, gw->gfn, max_level, &pfn,
 					huge_page_disallowed, &req_level);

-	trace_kvm_mmu_spte_requested(addr, gw->level, pfn);
+	trace_kvm_mmu_spte_requested(gw->gfn << PAGE_SHIFT, gw->level, pfn);

 	for (; shadow_walk_okay(&it); shadow_walk_next(&it)) {
 		clear_sp_write_flooding_count(it.sptep);

> +		__entry->pfn = fault->pfn | (__entry->gfn & (KVM_PAGES_PER_HPAGE(fault->goal_level) - 1));

Similar thing here, it could use fault->gfn directly.

> +		__entry->level = fault->goal_level;
>  	),
>
>  	TP_printk("gfn %llx pfn %llx level %d",
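
FWIW, with a prep patch like the above in place, the end result could look
something like this (untested sketch, assuming the tracepoint has already
been converted to take @fault as in this series):

	/* Untested sketch: assumes the tracepoint takes @fault, per this series. */
	TP_fast_assign(
		__entry->gfn = fault->gfn;
		__entry->pfn = fault->pfn |
			       (fault->gfn & (KVM_PAGES_PER_HPAGE(fault->goal_level) - 1));
		__entry->level = fault->goal_level;
	),

i.e. drop the addr-vs-gpa confusion entirely instead of copying it forward.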