On 12/14/21 11:26, Yang Zhong wrote:
Paolo, it seems we do not need a new KVM_EXIT_* anymore with Thomas' new patchset below:
git://git.kernel.org/pub/scm/linux/kernel/git/people/tglx/devel.git x86/fpu-kvm
So does the selftest still need to support KVM_GET_MSR/KVM_SET_MSR for MSR_IA32_XFD
and MSR_IA32_XFD_ERR? If yes, should we only do some read/write tests with vcpu_set_msr()/
vcpu_get_msr() from the new selftest, or do wrmsr from the guest side and check the value
from the selftest side?
I checked some MSR selftest reference code, tsc_msrs_test.c, which may be a better reference
for this. If you have a better suggestion, please share it with me, thanks!
You can write a test similar to state_test.c to cover XCR0, XFD and the
new XSAVE extensions. The test can:
- initialize AMX and write a nonzero value to XFD
- load a matrix into TMM0
- check that #NM is delivered (search for vm_install_exception_handler) and
that XFD_ERR is correct
- write 0 to XFD
- load the matrix again, and check that #NM is not delivered
- store it back into memory
- compare it with the original data
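Roughly, the guest side could look like the sketch below (untested; load_tmm0()/store_tmm0()
stand in for the LDTILECFG/TILELOADD/TILESTORED inline asm you will have to write, the
XCR0/tile-config setup is elided, and the MSR/XFEATURE defines are copied from msr-index.h
in case the selftest headers do not have them yet):

#include <fcntl.h>		/* O_RDWR, for kvm_vm_restart() further below */
#include <stdlib.h>
#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"

/* From msr-index.h / asm/fpu/types.h, in case the test has to define them itself. */
#define MSR_IA32_XFD		0x000001c4
#define MSR_IA32_XFD_ERR	0x000001c5
#define XFEATURE_MASK_XTILEDATA	(1ULL << 18)

static uint8_t src_buf[1024], dst_buf[1024];	/* one 16x64-byte tile */
static volatile unsigned int nm_count;

static void guest_nm_handler(struct ex_regs *regs)
{
	/* #NM must report the disabled feature in XFD_ERR. */
	GUEST_ASSERT(rdmsr(MSR_IA32_XFD_ERR) == XFEATURE_MASK_XTILEDATA);
	wrmsr(MSR_IA32_XFD_ERR, 0);
	/* Write 0 to XFD so the faulting TILELOADD completes when it restarts. */
	wrmsr(MSR_IA32_XFD, 0);
	nm_count++;
}

static void guest_code(void)
{
	size_t i;

	for (i = 0; i < sizeof(src_buf); i++)
		src_buf[i] = i;

	/* ...enable XTILECFG/XTILEDATA in XCR0 and program the tile config... */
	GUEST_SYNC(1);

	wrmsr(MSR_IA32_XFD, XFEATURE_MASK_XTILEDATA);
	GUEST_SYNC(2);

	load_tmm0(src_buf);		/* #NM expected, handler clears XFD */
	GUEST_ASSERT(nm_count == 1);
	GUEST_SYNC(3);

	load_tmm0(src_buf);		/* XFD is now 0, no #NM expected */
	GUEST_ASSERT(nm_count == 1);
	GUEST_SYNC(4);

	store_tmm0(dst_buf);
	for (i = 0; i < sizeof(src_buf); i++)
		GUEST_ASSERT(dst_buf[i] == src_buf[i]);

	GUEST_DONE();
}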
All of this can be done with a full save&restore after every step
(though I suggest that you first get it working without save&restore;
the relevant code in state_test.c is easy to identify and comment out).
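The host side can then stay very close to state_test.c; again only a sketch, with the
VCPU_ID value and the ucall handling taken from that test, and the exception handler
registered before the run loop:

#define VCPU_ID 5	/* same arbitrary id that state_test.c uses */

int main(void)
{
	struct kvm_x86_state *state;
	struct kvm_vm *vm;
	struct ucall uc;
	int stage;

	vm = vm_create_default(VCPU_ID, 0, guest_code);

	/* Route #NM to the guest handler above. */
	vm_init_descriptor_tables(vm);
	vcpu_init_descriptor_tables(vm, VCPU_ID);
	vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);

	for (stage = 1;; stage++) {
		_vcpu_run(vm, VCPU_ID);

		if (get_ucall(vm, VCPU_ID, &uc) == UCALL_DONE)
			break;
		/* ...handle UCALL_ABORT and check UCALL_SYNC as state_test.c does... */

		/* Full save&restore into a fresh VM after every stage;
		 * comment this out while bringing the test up. */
		state = vcpu_save_state(vm, VCPU_ID);
		kvm_vm_release(vm);
		kvm_vm_restart(vm, O_RDWR);
		vm_vcpu_add(vm, VCPU_ID);
		vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
		vcpu_load_state(vm, VCPU_ID, state);
		free(state);
	}

	kvm_vm_free(vm);
	return 0;
}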
You will have to modify vcpu_load_state so that it does
KVM_SET_MSRS first, then KVM_SET_XCRS, then KVM_SET_XSAVE.
See patch below.
Paolo
------------------ 8< -----------------
From: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Subject: [PATCH] selftest: kvm: Reorder vcpu_load_state steps for AMX
For AMX support it is recommended to load XCR0 after XFD, so that
KVM does not see XFD=0, XCR0=1 for a save state that will eventually
be disabled (which would lead to premature allocation of the space
required for that save state).
It is also required to load XSAVE data after XCR0 and XFD, so that
KVM can trigger allocation of the extra space required to store AMX
state.
Adjust vcpu_load_state to obey these new requirements.
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 82c39db91369..d805f63f7203 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1157,16 +1157,6 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
struct vcpu *vcpu = vcpu_find(vm, vcpuid);
int r;
- r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
- TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
- r);
-
- if (kvm_check_cap(KVM_CAP_XCRS)) {
- r = ioctl(vcpu->fd, KVM_SET_XCRS, &state->xcrs);
- TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XCRS, r: %i",
- r);
- }
-
r = ioctl(vcpu->fd, KVM_SET_SREGS, &state->sregs);
TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_SREGS, r: %i",
r);
@@ -1175,6 +1165,16 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
TEST_ASSERT(r == state->msrs.nmsrs, "Unexpected result from KVM_SET_MSRS, r: %i (failed at %x)",
r, r == state->msrs.nmsrs ? -1 : state->msrs.entries[r].index);
+ if (kvm_check_cap(KVM_CAP_XCRS)) {
+ r = ioctl(vcpu->fd, KVM_SET_XCRS, &state->xcrs);
+ TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XCRS, r: %i",
+ r);
+ }
+
+ r = ioctl(vcpu->fd, KVM_SET_XSAVE, &state->xsave);
+ TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_XSAVE, r: %i",
+ r);
+
r = ioctl(vcpu->fd, KVM_SET_VCPU_EVENTS, &state->events);
TEST_ASSERT(r == 0, "Unexpected result from KVM_SET_VCPU_EVENTS, r: %i",
r);