Hi James,

After checking this patch, I think my patch [1] already includes this logic (with only a small difference). My first version [2] set the virtual ESR in KVM, but Marc and others disagreed with that [3][4] and proposed that userspace set the value and do the injection (when RAS is enabled). If you think we also need to support injection by KVM, I can extend my patch to support that (though based on the previous review comments I think we should not). So I think we do not need to submit another patch; it would be duplicated work and waste review time. Thank you very much, I will combine that.

[1] https://lkml.org/lkml/2017/8/28/497
[2] https://patchwork.kernel.org/patch/9633105/
[3] https://lkml.org/lkml/2017/3/20/441
[4] https://lkml.org/lkml/2017/3/20/516

On 2017/10/6 3:18, James Morse wrote:
> Prior to v8.2's RAS Extensions, the HCR_EL2.VSE 'virtual SError' feature
> generated an SError with an implementation defined ESR_EL1.ISS, because we
> had no mechanism to specify the ESR value.
>
> On Juno this generates an all-zero ESR, the most significant bit 'ISV'
> is clear, indicating the remainder of the ISS field is invalid.
>
> With the RAS Extensions we have a mechanism to specify this value, and the
> most significant bit has a new meaning: 'IDS - Implementation Defined
> Syndrome'. An all-zero SError ESR now means: 'RAS error: Uncategorized'
> instead of 'no valid ISS'.
>
> Add KVM support for the VSESR_EL2 register to specify an ESR value when
> HCR_EL2.VSE generates a virtual SError. Change kvm_inject_vabt() to
> specify an implementation-defined value.
>
> We only need to restore the VSESR_EL2 value when HCR_EL2.VSE is set; KVM
> save/restores this bit during __deactivate_traps() and hardware clears the
> bit once the guest has consumed the virtual SError.
>
> Future patches may add an API (or KVM CAP) to pend a virtual SError with
> a specified ESR.
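A note on the ESR value the patch picks for kvm_inject_vabt(): ESR_ELx_ISV is the existing bit-24 definition from arch/arm64/include/asm/esr.h, shown below for reference (it is not part of this patch):

/* Bit 24 of ESR_ELx. Pre-RAS this is ISV ('ISS valid'); with the RAS
 * Extensions the same bit is read as IDS ('Implementation Defined
 * Syndrome'). Setting it with the rest of the ISS zeroed marks the
 * syndrome as imp-def, so it is not read as 'RAS error: Uncategorized'
 * (the new meaning of an all-zero SError ESR). */
#define ESR_ELx_ISV		(UL(1) << 24)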
>
> Cc: Dongjiu Geng <gengdongjiu@xxxxxxxxxx>
> Signed-off-by: James Morse <james.morse@xxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_emulate.h |  5 +++++
>  arch/arm64/include/asm/kvm_host.h    |  3 +++
>  arch/arm64/include/asm/sysreg.h      |  1 +
>  arch/arm64/kvm/hyp/switch.c          |  4 ++++
>  arch/arm64/kvm/inject_fault.c        | 13 ++++++++++++-
>  5 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index e5df3fce0008..8a7a838eb17a 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -61,6 +61,11 @@ static inline void vcpu_set_hcr(struct kvm_vcpu *vcpu, unsigned long hcr)
>  	vcpu->arch.hcr_el2 = hcr;
>  }
>
> +static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr)
> +{
> +	vcpu->arch.vsesr_el2 = vsesr;
> +}
> +
>  static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
>  {
>  	return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.pc;
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index d3eb79a9ed6b..0af35e71fedb 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -277,6 +277,9 @@ struct kvm_vcpu_arch {
>
>  	/* Detect first run of a vcpu */
>  	bool has_run_once;
> +
> +	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
> +	u64 vsesr_el2;
>  };
>
>  #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 427c36bc5dd6..a493e93de296 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -253,6 +253,7 @@
>
>  #define SYS_DACR32_EL2			sys_reg(3, 4, 3, 0, 0)
>  #define SYS_IFSR32_EL2			sys_reg(3, 4, 5, 0, 1)
> +#define SYS_VSESR_EL2			sys_reg(3, 4, 5, 2, 3)
>  #define SYS_FPEXC32_EL2			sys_reg(3, 4, 5, 3, 0)
>
>  #define __SYS__AP0Rx_EL2(x)		sys_reg(3, 4, 12, 8, x)
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 945e79c641c4..af37658223a0 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -86,6 +86,10 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>  		isb();
>  	}
>  	write_sysreg(val, hcr_el2);
> +
> +	if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN) && (val & HCR_VSE))
> +		write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2);
> +
>  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
>  	write_sysreg(1 << 15, hstr_el2);
>  	/*
> diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
> index da6a8cfa54a0..52f7f66f1356 100644
> --- a/arch/arm64/kvm/inject_fault.c
> +++ b/arch/arm64/kvm/inject_fault.c
> @@ -232,14 +232,25 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
>  	inject_undef64(vcpu);
>  }
>
> +static void pend_guest_serror(struct kvm_vcpu *vcpu, u64 esr)
> +{
> +	vcpu_set_vsesr(vcpu, esr);
> +	vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) | HCR_VSE);
> +}
> +
>  /**
>   * kvm_inject_vabt - inject an async abort / SError into the guest
>   * @vcpu: The VCPU to receive the exception
>   *
>   * It is assumed that this code is called from the VCPU thread and that the
>   * VCPU therefore is not currently executing guest code.
> + *
> + * Systems with the RAS Extensions specify an imp-def ESR (ISV/IDS = 1) with
> + * the remaining ISS all-zeros so that this error is not interpreted as an
> + * uncategorized RAS error. Without the RAS Extensions we can't specify an ESR
> + * value, so the CPU generates an imp-def value.
>   */
>  void kvm_inject_vabt(struct kvm_vcpu *vcpu)
>  {
> -	vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) | HCR_VSE);
> +	pend_guest_serror(vcpu, ESR_ELx_ISV);
>  }
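As for the future API/CAP mentioned in the commit message: the direction from the earlier review [3][4] was userspace-driven injection, which on top of this patch could look roughly like the sketch below. The function name kvm_vcpu_pend_serror() and the error handling are hypothetical, purely illustrative of that direction; only pend_guest_serror() and ARM64_HAS_RAS_EXTN come from the patch above.

/* Hypothetical sketch, not an existing KVM interface: pend a virtual
 * SError with a caller-specified ESR, e.g. on behalf of userspace. */
static int kvm_vcpu_pend_serror(struct kvm_vcpu *vcpu, u64 esr)
{
	/*
	 * Without the RAS Extensions there is no VSESR_EL2, so a
	 * caller-specified ESR cannot be honoured.
	 */
	if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
		return -EINVAL;

	pend_guest_serror(vcpu, esr);	/* from the patch above */
	return 0;
}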