Hi Marc,

On Mon, Mar 27, 2023 at 3:15 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
>
> On Fri, 17 Mar 2023 05:06:33 +0000,
> Jing Zhang <jingzhangos@xxxxxxxxxx> wrote:
> >
> > From: Reiji Watanabe <reijiw@xxxxxxxxxx>
> >
> > Introduce id_regs[] in kvm_arch as a storage of guest's ID registers,
> > and save ID registers' sanitized value in the array at KVM_CREATE_VM.
> > Use the saved ones when ID registers are read by the guest or
> > userspace (via KVM_GET_ONE_REG).
> >
> > No functional change intended.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@xxxxxxxxxx>
> > Co-developed-by: Jing Zhang <jingzhangos@xxxxxxxxxx>
> > Signed-off-by: Jing Zhang <jingzhangos@xxxxxxxxxx>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 11 ++++++++
> >  arch/arm64/kvm/arm.c              |  1 +
> >  arch/arm64/kvm/id_regs.c          | 44 ++++++++++++++++++++++++-------
> >  arch/arm64/kvm/sys_regs.c         |  2 +-
> >  arch/arm64/kvm/sys_regs.h         |  1 +
> >  5 files changed, 49 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index a1892a8f6032..fb6b50b1f111 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -245,6 +245,15 @@ struct kvm_arch {
> >  	 * the associated pKVM instance in the hypervisor.
> >  	 */
> >  	struct kvm_protected_vm pkvm;
> > +
> > +	/*
> > +	 * Save ID registers for the guest in id_regs[].
> > +	 * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
> > +	 * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
> > +	 */
> > +#define KVM_ARM_ID_REG_NUM	56
> > +#define IDREG_IDX(id)	(((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
> > +	u64 id_regs[KVM_ARM_ID_REG_NUM];
>
> Place these registers in their own structure, and place this structure
> *before* the pvm structure. Document what guards these registers when
> updated (my hunch is that this should rely on Oliver's locking fixes
> if the update comes from a vcpu).
Sure, I will put them in their own structure and place it before the pkvm
structure.

IIUC, we usually don't need specific locking to update the idregs here:
they are all 64-bit and can be read/written atomically. The only case that
may need locking is keeping PMUVer in ID_AA64DFR0_EL1 consistent with
PerfMon in ID_DFR0_EL1. If there is no use case for two vCPU threads in a
VM updating PMUVer and PerfMon concurrently, then we don't need the kvm
lock that a later patch uses for this. WDYT?

>
> >  };
> >
> >  struct kvm_vcpu_fault_info {
> > @@ -1005,6 +1014,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> >  long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
> >  				struct kvm_arm_copy_mte_tags *copy_tags);
> >
> > +void kvm_arm_set_default_id_regs(struct kvm *kvm);
> > +
> >  /* Guest/host FPSIMD coordination helpers */
> >  int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> >  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 3bd732eaf087..4579c878ab30 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -153,6 +153,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> >
> >  	set_default_spectre(kvm);
> >  	kvm_arm_init_hypercalls(kvm);
> > +	kvm_arm_set_default_id_regs(kvm);
> >
> >  	/*
> >  	 * Initialise the default PMUver before there is a chance to
> > diff --git a/arch/arm64/kvm/id_regs.c b/arch/arm64/kvm/id_regs.c
> > index 08b738852955..e393b5730557 100644
> > --- a/arch/arm64/kvm/id_regs.c
> > +++ b/arch/arm64/kvm/id_regs.c
> > @@ -52,16 +52,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
> >  	}
> >  }
> >
> > -/* Read a sanitised cpufeature ID register by sys_reg_desc */
> > -static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
> > +u64 kvm_arm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
> >  {
> > -	u32 id = reg_to_encoding(r);
> > -	u64 val;
> > -
> > -	if (sysreg_visible_as_raz(vcpu, r))
> > -		return 0;
> > -
> > -	val = read_sanitised_ftr_reg(id);
> > +	u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)];
> >
> >  	switch (id) {
> >  	case SYS_ID_AA64PFR0_EL1:
> > @@ -126,6 +119,14 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r
> >  	return val;
> >  }
> >
> > +static u64 read_id_reg(const struct kvm_vcpu *vcpu, struct sys_reg_desc const *r)
> > +{
> > +	if (sysreg_visible_as_raz(vcpu, r))
> > +		return 0;
> > +
> > +	return kvm_arm_read_id_reg(vcpu, reg_to_encoding(r));
> > +}
> > +
> >  /* cpufeature ID register access trap handlers */
> >
> >  static bool access_id_reg(struct kvm_vcpu *vcpu,
> > @@ -504,3 +505,28 @@ int kvm_arm_walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
> >  	}
> >  	return total;
> >  }
> > +
> > +/*
> > + * Set the guest's ID registers that are defined in id_reg_descs[]
> > + * with ID_SANITISED() to the host's sanitized value.
> > + */
> > +void kvm_arm_set_default_id_regs(struct kvm *kvm)
> > +{
> > +	int i;
> > +	u32 id;
> > +	u64 val;
> > +
> > +	for (i = 0; i < ARRAY_SIZE(id_reg_descs); i++) {
> > +		id = reg_to_encoding(&id_reg_descs[i]);
> > +		if (WARN_ON_ONCE(!is_id_reg(id)))
> > +			/* Shouldn't happen */
> > +			continue;
> > +
> > +		if (id_reg_descs[i].visibility == raz_visibility)
> > +			/* Hidden or reserved ID register */
> > +			continue;
>
> Relying on function pointer comparison is really fragile. If I wrap
> raz_visibility() in another function, this won't catch it. It also
> doesn't bode well with your 'inline' definition of this function.
>
> More importantly, why do we care about checking for visibility at all?
> We can happily populate the array and rely on the runtime visibility.

Right. I'll remove this checking.

> >
> > +
> > +		val = read_sanitised_ftr_reg(id);
> > +		kvm->arch.id_regs[IDREG_IDX(id)] = val;
> > +	}
> > +}
>
> Thanks,
>
> 	M.
>
> --
> Without deviation from the norm, progress is not possible.

Thanks,
Jing