On 9/18/2020 3:30 PM, Andrew Jones wrote:
> On Thu, Sep 17, 2020 at 08:00:56PM +0800, Peng Liang wrote:
>> To emulate ID registers, we need to validate the value of the register
>> defined by user space.  For most ID registers, we need to check whether
>> each field defined by user space is no more than that of the host
>> (i.e. whether the host supports the corresponding features) and whether
>> the fields are supposed to be exposed to the guest.  Introduce
>> check_features to do those jobs.
>>
>> Signed-off-by: zhanghailiang <zhang.zhanghailiang@xxxxxxxxxx>
>> Signed-off-by: Peng Liang <liangpeng10@xxxxxxxxxx>
>> ---
>>  arch/arm64/include/asm/cpufeature.h |  2 ++
>>  arch/arm64/kernel/cpufeature.c      | 23 +++++++++++++++++++++++
>>  2 files changed, 25 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 2ba7c4f11d8a..954adc5ca72f 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -579,6 +579,8 @@ void check_local_cpu_capabilities(void);
>>
>>  u64 read_sanitised_ftr_reg(u32 id);
>>
>> +int check_features(u32 sys_reg, u64 val);
>> +
>>  static inline bool cpu_supports_mixed_endian_el0(void)
>>  {
>>  	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 698b32705544..e58926992a70 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -2850,3 +2850,26 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
>>
>>  	return sprintf(buf, "Vulnerable\n");
>>  }
>> +
>> +int check_features(u32 sys_reg, u64 val)
>> +{
>> +	struct arm64_ftr_reg *reg = get_arm64_ftr_reg(sys_reg);
>> +	const struct arm64_ftr_bits *ftrp;
>> +	u64 exposed_mask = 0;
>> +
>> +	if (!reg)
>> +		return -ENOENT;
>> +
>> +	for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
>> +		if (arm64_ftr_value(ftrp, reg->sys_val) <
>> +		    arm64_ftr_value(ftrp, val)) {
>> +			return -EINVAL;
>
> This assumes that 0b1111 is invalid if the host has e.g. 0b0001,
> but, IIRC, there are some ID registers where 0b1111 means the
> feature is disabled.

I think arm64_ftr_value will handle it correctly.  If the value of the
field is 0b1111 and the field is signed, arm64_ftr_value will return -1.

>
>> +		}
>> +		exposed_mask |= arm64_ftr_mask(ftrp);
>> +	}
>> +
>> +	if (val & ~exposed_mask)
>> +		return -EINVAL;
>> +
>> +	return 0;
>> +}
>> --
>> 2.26.2
>>
>
> I don't think we should be trying to do the verification at the ftr_bits
> level, at least not generally.  Trying to handle all ID registers the
> same way is bound to fail, for the 0b1111 vs. 0b0000 reason pointed
> out above, and probably other reasons.  As I stated before, we should be
> validating each feature of each ID register on a case-by-case basis,
> and we should be using higher-level CPU feature checking APIs to get
> that right.
>
> Also, what about validating that all VCPUs have consistent features
> exposed?  Each VCPU could pass this check with a valid feature mask,
> but a different one per VCPU, which will obviously create a completely
> broken guest.
>
> Thanks,
> drew

Thank you for pointing this out.  I haven't thought about it yet...

Thanks,
Peng
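
For readers following the 0b1111 discussion above, here is a minimal
user-space sketch of the sign-extension behaviour Peng describes.  It is
not the kernel's arm64_ftr_value()/arm64_ftr_bits code; the struct layout
and helper names are illustrative only.  It simply shows why a signed
4-bit field holding 0b1111 compares as -1 (below a host value of 0b0001)
rather than as 15 (above it):

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a cut-down model of a signed ID register field,
 * not the kernel's struct arm64_ftr_bits. */
struct ftr_field {
	int	sign;	/* non-zero if the field is signed */
	int	shift;	/* bit position of the field */
	int	width;	/* field width in bits */
};

/* Extract a field and sign-extend it when the field is signed,
 * mirroring the behaviour attributed to arm64_ftr_value() above. */
static int64_t field_value(const struct ftr_field *f, uint64_t reg)
{
	uint64_t mask = ((uint64_t)1 << f->width) - 1;
	uint64_t val = (reg >> f->shift) & mask;

	if (f->sign && (val & ((uint64_t)1 << (f->width - 1))))
		val |= ~mask;	/* sign-extend the top bit */

	return (int64_t)val;
}

int main(void)
{
	struct ftr_field f = { .sign = 1, .shift = 0, .width = 4 };

	/* 0b1111 in a signed 4-bit field reads as -1, so it compares
	 * below a host value of 0b0001 (1) instead of above it. */
	printf("0b1111 -> %lld\n", (long long)field_value(&f, 0xf));
	printf("0b0001 -> %lld\n", (long long)field_value(&f, 0x1));
	return 0;
}

Note that this only covers fields declared signed in ftr_bits; drew's
broader points about case-by-case validation of unsigned fields and about
cross-VCPU consistency are unaffected by it.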