On Thursday 15 Jul 2021 at 17:31:47 (+0100), Marc Zyngier wrote:
> +struct s2_walk_data {
> +	kvm_pte_t	pteval;
> +	u32		level;
> +};
> +
> +static int s2_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> +		     enum kvm_pgtable_walk_flags flag, void * const arg)
> +{
> +	struct s2_walk_data *data = arg;
> +
> +	data->level = level;
> +	data->pteval = *ptep;
> +	return 0;
> +}
> +
> +/* Assumes mmu_lock taken */
> +static bool __check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
> +{
> +	struct s2_walk_data data;
> +	struct kvm_pgtable_walker walker = {
> +		.cb	= s2_walker,
> +		.flags	= KVM_PGTABLE_WALK_LEAF,
> +		.arg	= &data,
> +	};
> +
> +	kvm_pgtable_walk(vcpu->arch.hw_mmu->pgt, ALIGN_DOWN(ipa, PAGE_SIZE),
> +			 PAGE_SIZE, &walker);
> +
> +	/* Must be a PAGE_SIZE mapping with our annotation */
> +	return (BIT(ARM64_HW_PGTABLE_LEVEL_SHIFT(data.level)) == PAGE_SIZE &&
> +		data.pteval == MMIO_NOTE);

Nit: you could do this check in the walker directly and check the return
value of kvm_pgtable_walk() instead. That would allow you to get rid of
struct s2_walk_data.

Also, though the compiler might be able to optimize this, maybe simplify the
level check to level == (KVM_PGTABLE_MAX_LEVELS - 1)?

Thanks,
Quentin

> +}
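
For illustration, a minimal sketch of the refactoring suggested above, assuming
the kvm_pgtable_walk()/walker callback API as quoted in this patch (a non-zero
return from the leaf callback aborts the walk and is propagated to the caller).
The -EINVAL error value is only illustrative, not taken from the patch:

/*
 * Sketch: do the check in the walker itself, so the caller only needs
 * the return value of kvm_pgtable_walk() and struct s2_walk_data goes away.
 */
static int s2_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
		     enum kvm_pgtable_walk_flags flag, void * const arg)
{
	/* Only the last level can describe a PAGE_SIZE mapping */
	if (level != (KVM_PGTABLE_MAX_LEVELS - 1))
		return -EINVAL;

	/* Must carry the MMIO guard annotation */
	if (*ptep != MMIO_NOTE)
		return -EINVAL;

	return 0;
}

/* Assumes mmu_lock taken */
static bool __check_ioguard_page(struct kvm_vcpu *vcpu, gpa_t ipa)
{
	struct kvm_pgtable_walker walker = {
		.cb	= s2_walker,
		.flags	= KVM_PGTABLE_WALK_LEAF,
	};

	/* Walk succeeds only if the single visited leaf passed the checks */
	return !kvm_pgtable_walk(vcpu->arch.hw_mmu->pgt,
				 ALIGN_DOWN(ipa, PAGE_SIZE),
				 PAGE_SIZE, &walker);
}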