Hi Clément,

On 2025-01-17 10:05 AM, Clément Léger wrote:
> On 11/01/2025 00:55, Samuel Holland wrote:
>> On 2025-01-06 9:48 AM, Clément Léger wrote:
>>> SBI_FWFT_MISALIGNED_DELEG needs hedeleg to be modified to delegate
>>> misaligned load/store exceptions. Save and restore it during CPU
>>> load/put.
>>>
>>> Signed-off-by: Clément Léger <cleger@xxxxxxxxxxxx>
>>> ---
>>>  arch/riscv/kvm/vcpu.c          |  3 +++
>>>  arch/riscv/kvm/vcpu_sbi_fwft.c | 39 ++++++++++++++++++++++++++++++++++
>>>  2 files changed, 42 insertions(+)
>>>
>>> diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
>>> index 3420a4a62c94..bb6f788d46f5 100644
>>> --- a/arch/riscv/kvm/vcpu.c
>>> +++ b/arch/riscv/kvm/vcpu.c
>>> @@ -641,6 +641,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>>  {
>>>  	void *nsh;
>>>  	struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
>>> +	struct kvm_vcpu_config *cfg = &vcpu->arch.cfg;
>>>
>>>  	vcpu->cpu = -1;
>>>
>>> @@ -666,6 +667,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>>  		csr->vstval = nacl_csr_read(nsh, CSR_VSTVAL);
>>>  		csr->hvip = nacl_csr_read(nsh, CSR_HVIP);
>>>  		csr->vsatp = nacl_csr_read(nsh, CSR_VSATP);
>>> +		cfg->hedeleg = nacl_csr_read(nsh, CSR_HEDELEG);
>>>  	} else {
>>>  		csr->vsstatus = csr_read(CSR_VSSTATUS);
>>>  		csr->vsie = csr_read(CSR_VSIE);
>>> @@ -676,6 +678,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>>>  		csr->vstval = csr_read(CSR_VSTVAL);
>>>  		csr->hvip = csr_read(CSR_HVIP);
>>>  		csr->vsatp = csr_read(CSR_VSATP);
>>> +		cfg->hedeleg = csr_read(CSR_HEDELEG);
>>>  	}
>>>  }
>>>
>>> diff --git a/arch/riscv/kvm/vcpu_sbi_fwft.c b/arch/riscv/kvm/vcpu_sbi_fwft.c
>>> index 55433e805baa..1e85ff6666af 100644
>>> --- a/arch/riscv/kvm/vcpu_sbi_fwft.c
>>> +++ b/arch/riscv/kvm/vcpu_sbi_fwft.c
>>> @@ -14,6 +14,8 @@
>>>  #include <asm/kvm_vcpu_sbi.h>
>>>  #include <asm/kvm_vcpu_sbi_fwft.h>
>>>
>>> +#define MIS_DELEG (1UL << EXC_LOAD_MISALIGNED | 1UL << EXC_STORE_MISALIGNED)
>>> +
>>>  static const enum sbi_fwft_feature_t kvm_fwft_defined_features[] = {
>>>  	SBI_FWFT_MISALIGNED_EXC_DELEG,
>>>  	SBI_FWFT_LANDING_PAD,
>>> @@ -35,7 +37,44 @@ static bool kvm_fwft_is_defined_feature(enum sbi_fwft_feature_t feature)
>>>  	return false;
>>>  }
>>>
>>> +static bool kvm_sbi_fwft_misaligned_delegation_supported(struct kvm_vcpu *vcpu)
>>> +{
>>> +	if (!unaligned_ctl_available())
>>
>> This seems like the wrong condition. Patch 2 requests delegation regardless of
>> what probing detects. For MISALIGNED_SCALAR_FAST, the delegation likely doesn't
>> change any actual behavior, because the hardware likely never raises the
>> exception. But it does ensure M-mode never emulates anything, so if the
>> exception were to occur, the kernel has the choice whether to handle it. And
>> this lets us provide the same guarantee to KVM guests. So I think this feature
>> should also be supported if we successfully delegated the exception on the host
>> side.
>
> Not sure to completely follow you here but patch 2 actually does the
> reverse of what you said. We request delegation from SBI *before*
> probing so that allows probing to see if we (kernel) receives misaligned
> accesses traps and thus set MISALIGNED_SCALAR_EMULATED.

Ah, right, that makes sense.

> But if I understood correctly, you mean that guest delegation should
> also be available to guest in case misaligned access were delegated by
> the SBI to the host which I agree. I think this condition should be

Yes, your understanding is correct.

> reworked to report the delegation status itself and not the misaligned
> access speed that was detected

Agreed, with the nuance that delegation may have been enabled by a FWFT
call, or it may have been pre-existing, as detected by
MISALIGNED_SCALAR_EMULATED or MISALIGNED_VECTOR_EMULATED.

For example, I would consider a platform where the hardware supports
misaligned scalar accesses (MISALIGNED_SCALAR_FAST) but not vector
accesses, and M-mode doesn't emulate them (MISALIGNED_VECTOR_EMULATED)
to be delegated for the purposes of this check, even if M-mode doesn't
implement the FWFT extension.

Regards,
Samuel

>>> +		return false;
>>> +
>>> +	return true;
>>> +}
>>> +
>>> +static int kvm_sbi_fwft_set_misaligned_delegation(struct kvm_vcpu *vcpu,
>>> +						  struct kvm_sbi_fwft_config *conf,
>>> +						  unsigned long value)
>>> +{
>>> +	if (value == 1)
>>> +		csr_set(CSR_HEDELEG, MIS_DELEG);
>>> +	else if (value == 0)
>>> +		csr_clear(CSR_HEDELEG, MIS_DELEG);
>>> +	else
>>> +		return SBI_ERR_INVALID_PARAM;
>>> +
>>> +	return SBI_SUCCESS;
>>> +}
>>> +
>>> +static int kvm_sbi_fwft_get_misaligned_delegation(struct kvm_vcpu *vcpu,
>>> +						  struct kvm_sbi_fwft_config *conf,
>>> +						  unsigned long *value)
>>> +{
>>> +	*value = (csr_read(CSR_HEDELEG) & MIS_DELEG) != 0;
>>> +
>>> +	return SBI_SUCCESS;
>>> +}
>>> +
>>>  static const struct kvm_sbi_fwft_feature features[] = {
>>> +	{
>>> +		.id = SBI_FWFT_MISALIGNED_EXC_DELEG,
>>> +		.supported = kvm_sbi_fwft_misaligned_delegation_supported,
>>> +		.set = kvm_sbi_fwft_set_misaligned_delegation,
>>> +		.get = kvm_sbi_fwft_get_misaligned_delegation,
>>> +	}
>>
>> nit: Please add a trailing comma here as future patches will extend the array.
>>
>> Regards,
>> Samuel
>>
>>>  };
>>>
>>>  static struct kvm_sbi_fwft_config *
>