On Tue, Nov 03, 2020 at 11:32:08AM +0000, Dave Martin wrote:
> On Mon, Nov 02, 2020 at 07:50:37PM +0100, Andrew Jones wrote:
> > The AA64ZFR0_EL1 accessors are just the general accessors with
> > its visibility function open-coded. It also skips the if-else
> > chain in read_id_reg, but there's no reason not to go there.
> > Indeed consolidating ID register accessors and removing lines
> > of code make it worthwhile.
> >
> > No functional change intended.
>
> Nit: No statement of what the patch does.

I can duplicate the summary in the commit message?

>
> > Signed-off-by: Andrew Jones <drjones@xxxxxxxxxx>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 61 +++++++--------------------------------
> >  1 file changed, 11 insertions(+), 50 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index b8822a20b1ea..e2d6fb83280e 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1156,6 +1156,16 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> >  static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> >  				  const struct sys_reg_desc *r)
> >  {
> > +	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
> > +			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> > +
> > +	switch (id) {
> > +	case SYS_ID_AA64ZFR0_EL1:
> > +		if (!vcpu_has_sve(vcpu))
> > +			return REG_RAZ;
> > +		break;
> > +	}
> > +
>
> This should work, but I'm not sure it's preferable to giving affected
> registers their own visibility check function.
>
> Multiplexing all the ID regs through this one checker function will
> introduce a bit of overhead for always-non-RAZ ID regs, but I'd guess
> the impact is negligible given the other overheads on these paths.

Yes, my thought was that a switch isn't going to generate much overhead
and consolidating the ID registers cleans things up a bit.

>
> >  	return 0;
> >  }
> >
> > @@ -1203,55 +1213,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
> >  	return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> >  }
> >
> > -/* Generate the emulated ID_AA64ZFR0_EL1 value exposed to the guest */
> > -static u64 guest_id_aa64zfr0_el1(const struct kvm_vcpu *vcpu)
> > -{
> > -	if (!vcpu_has_sve(vcpu))
> > -		return 0;
> > -
> > -	return read_sanitised_ftr_reg(SYS_ID_AA64ZFR0_EL1);
> > -}
> > -
> > -static bool access_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> > -				   struct sys_reg_params *p,
> > -				   const struct sys_reg_desc *rd)
> > -{
> > -	if (p->is_write)
> > -		return write_to_read_only(vcpu, p, rd);
> > -
> > -	p->regval = guest_id_aa64zfr0_el1(vcpu);
> > -	return true;
> > -}
> > -
> > -static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> > -		const struct sys_reg_desc *rd,
> > -		const struct kvm_one_reg *reg, void __user *uaddr)
> > -{
> > -	u64 val;
> > -
> > -	val = guest_id_aa64zfr0_el1(vcpu);
> > -	return reg_to_user(uaddr, &val, reg->id);
> > -}
> > -
> > -static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> > -		const struct sys_reg_desc *rd,
> > -		const struct kvm_one_reg *reg, void __user *uaddr)
> > -{
> > -	const u64 id = sys_reg_to_index(rd);
> > -	int err;
> > -	u64 val;
> > -
> > -	err = reg_from_user(&val, uaddr, id);
> > -	if (err)
> > -		return err;
> > -
> > -	/* This is what we mean by invariant: you can't change it. */
> > -	if (val != guest_id_aa64zfr0_el1(vcpu))
> > -		return -EINVAL;
> > -
> > -	return 0;
> > -}
> > -
> >  /*
> >   * cpufeature ID register user accessors
> >   *
> > @@ -1515,7 +1476,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >  	ID_SANITISED(ID_AA64PFR1_EL1),
> >  	ID_UNALLOCATED(4,2),
> >  	ID_UNALLOCATED(4,3),
> > -	{ SYS_DESC(SYS_ID_AA64ZFR0_EL1), access_id_aa64zfr0_el1, .get_user = get_id_aa64zfr0_el1, .set_user = set_id_aa64zfr0_el1, },
> > +	ID_SANITISED(ID_AA64ZFR0_EL1),
>
> If keeping a dedicated helper, we could have a special macro for that, say
>
>   ID_SANITISED_VISIBILITY(ID_AA64ZFR0_EL1, id_aa64zfr0_el1_visibility)

I considered this first, but decided the switch, like read_id_reg's
if-else chain, is probably not going to introduce much overhead.

Thanks,
drew

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
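
For readers following the alternative Dave suggests above, here is a rough
sketch of what ID_SANITISED_VISIBILITY and its dedicated visibility helper
could look like, assuming the macro mirrors the existing ID_SANITISED
initialiser in sys_regs.c and the helper reuses the SVE check the patch
folds into id_visibility(). This is only an illustration of the option
being discussed, not code from the patch or the kernel tree:

  /*
   * Hypothetical per-register visibility helper, mirroring the check
   * the patch adds to id_visibility(). Sketch only.
   */
  static unsigned int id_aa64zfr0_el1_visibility(const struct kvm_vcpu *vcpu,
						 const struct sys_reg_desc *rd)
  {
	if (!vcpu_has_sve(vcpu))
		return REG_RAZ;

	return 0;
  }

  /*
   * Hypothetical ID_SANITISED variant that plugs in a visibility
   * callback; assumed to otherwise match the existing initialiser.
   */
  #define ID_SANITISED_VISIBILITY(name, vis) {	\
	SYS_DESC(SYS_##name),			\
	.access	= access_id_reg,		\
	.get_user = get_id_reg,			\
	.set_user = set_id_reg,			\
	.visibility = vis,			\
  }

The trade-off drew points out is that the single switch in id_visibility()
avoids this per-register boilerplate at what both sides agree is a
negligible runtime cost.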